Google Kubernetes Engine (GKE)


Last updated on July 3, 2023

Google Kubernetes Engine Cheat Sheet

  • A secured and managed Kubernetes service with auto-scaling and multi-cluster support

Features

  • Can be configured to automatically scale node pool and clusters across multiple node pools based on changing workload requirements.
  • Auto-repair can be enabled to perform periodic health checks on nodes and automatically repair unhealthy ones.
  • Choose clusters tailored to your requirements based on:
    • Availability
    • Version Stability
    • Isolation
    • Pod Traffic requirements
  • Enable Cloud Logging and Cloud Monitoring via simple checkbox configurations.
  • The Kubernetes version can be set to auto-upgrade to the latest release patch.
  • Supports Docker container format.
  • Integrates with Google Container Registry so you can easily access your private Docker images.
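As an illustration, several of these features can be enabled from the gcloud CLI when creating a cluster. This is a minimal sketch; the cluster name, zone, and node counts below are placeholders, and the flags shown are the documented gcloud options for autoscaling, auto-repair, and auto-upgrade:

```shell
# Create a zonal cluster with node auto-scaling, auto-repair,
# and auto-upgrade enabled (my-cluster and us-central1-a are placeholders).
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 \
  --enable-autorepair \
  --enable-autoupgrade
```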

Kubernetes Cluster Architecture

  • kubectl
    • Is the main CLI tool for running commands and managing Kubernetes clusters.
  • Cluster
    • All of the Kubernetes objects that represent your containerized applications run on top of a cluster.
  • Node
    • Nodes are the worker machines that run your containerized applications and other workloads.
    • A cluster typically has one or more worker nodes.
    • Kubernetes runs your workload by placing containers into Pods to run on Nodes.
  • Node Pool
    • A node pool is a set of nodes within a cluster that have similar configurations.
  • Cluster Autoscaler
    • Cluster Autoscaler automatically resizes the number of nodes in a given node pool, based on the demands of your workloads.
  • Horizontal Pod Autoscaling
    • HPA automatically scales the number of pods in response to
      • your workload’s CPU or memory consumption
      • custom metrics reported from within Kubernetes
      • custom metrics reported externally.
    • Cannot be used for workloads that cannot be scaled, such as DaemonSets.
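As a quick illustration, an HPA can be created imperatively with kubectl. This is a sketch; the Deployment name and thresholds are placeholders:

```shell
# Scale the "web" Deployment between 2 and 10 replicas,
# targeting 60% average CPU utilization.
kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=10
```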

Kubernetes API Objects

  • Pods
    • Are the smallest deployable units of computing that you can create and manage in Kubernetes.
    • Every pod has its own IP address. 
  • Deployment
    • You describe the desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate.
  • Service
    • Serves as a load balancer to balance traffic across a set of Pods
    • You can specify which type of Service to use:
      • ClusterIP: Exposes the Service on a cluster-internal IP.
      • NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort).
      • LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer.
  • Daemon Set
    • A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
  • ConfigMaps
    • ConfigMaps enable you to separate your configurations from your Pods and components, which helps keep your workloads portable.
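Putting several of these objects together, here is a minimal sketch (names and the image are placeholders): a Deployment of three nginx Pods exposed internally through a ClusterIP Service.

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP          # cluster-internal IP only
  selector:
    app: web               # routes traffic to Pods with this label
  ports:
  - port: 80
    targetPort: 80
EOF
```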

GKE Sandbox

  • Provides a second layer of security between containerized workloads on GKE.
  • GKE Sandbox uses gVisor.
  • You cannot enable GKE Sandbox on a default node pool.
  • When using Sandbox, you must have at least 2 node pools.
  • Accelerators such as GPUs and TPUs cannot be used with GKE Sandbox.
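Because the default node pool cannot run GKE Sandbox, a separate sandboxed pool is added to an existing cluster. A minimal sketch (the pool and cluster names are placeholders; `--sandbox type=gvisor` is the documented flag):

```shell
# Add a gVisor-sandboxed node pool to an existing cluster.
gcloud container node-pools create sandbox-pool \
  --cluster=my-cluster \
  --sandbox type=gvisor
```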

Pricing

Pricing for Cluster Management

  • One zonal cluster (single-zone or multi-zonal) per billing account is free.
  • The fee is flat, irrespective of cluster size and topology—whether it is a single-zone cluster, multi-zonal cluster or regional cluster, all accrue the same flat fee per cluster.
  • Billing is computed on a per-second basis for each cluster. The total amount is rounded to the nearest cent, at the end of each month.
  • The fee does not apply to Anthos GKE clusters.

Pricing for Worker Nodes

  • GKE uses Compute Engine instances for worker nodes in the cluster. You are billed for each of those instances according to Compute Engine’s pricing, until the nodes are deleted. Compute Engine resources are billed on a per-second basis with a one-minute minimum usage cost.

Validate Your Knowledge

Question 1

You are developing your product on a Kubernetes cluster in the Google Cloud Platform. You dedicate one Pod for each of your customers, and they are allowed to deploy untrusted code in their respective Pod. Knowing this, you want to make sure that you isolate the Pods from each other to avoid issues.

What should you do?

  1. Add a custom node pool and configure the Enable sandbox with gVisor option. Add the runtimeClassName:gvisor parameter to each of your customers’ Pods.
  2. Whitelist the container images used by your customers’ Pods using Binary Authorization.
  3. Identify security vulnerabilities among the containers used by your customers’ Pods using the Container Analysis API.
  4. Utilize the cos_containerd image when creating GKE nodes. Add a nodeSelector field to your pod configuration with the value of cloud.google.com/gke-os-distribution: cos_containerd.

Correct Answer: 1

The Google Kubernetes Engine Sandbox provides an extra layer of security to prevent untrusted code from affecting the host kernel on your cluster nodes. 

To force a Deployment to run on a node with GKE Sandbox enabled, set its spec.template.spec.runtimeClassName to gvisor.
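A minimal sketch of such a Deployment follows (the name and image are placeholders; `runtimeClassName: gvisor` is the documented setting):

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-workload      # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customer-workload
  template:
    metadata:
      labels:
        app: customer-workload
    spec:
      runtimeClassName: gvisor # run the Pod in the GKE Sandbox
      containers:
      - name: app
        image: nginx:1.25      # placeholder image
EOF
```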

Binary Authorization is a deploy-time security control that ensures only trusted container images are deployed on Google Kubernetes Engine (GKE).

Container Analysis API is an implementation of the Grafeas API, which stores and enables querying and retrieval of critical metadata about all of your software artifacts.

NodeSelector is the simplest recommended form of node selection constraint. Basically, NodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels. The most common usage is one key-value pair.
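For illustration, a Pod using the nodeSelector from option 4 would look like this sketch (the Pod name and image are placeholders):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cos-pod            # placeholder name
spec:
  nodeSelector:
    cloud.google.com/gke-os-distribution: cos_containerd
  containers:
  - name: app
    image: nginx:1.25      # placeholder image
EOF
```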

Hence, the correct answer is: Add a custom node pool and configure the Enable sandbox with gVisor option. Add the runtimeClassName:gvisor parameter to each of your customers’ Pods.

The option that says: Whitelist the container images used by your customers’ Pods using Binary Authorization is incorrect because this just ensures that only trusted container images are deployed on GKE. What we need is to isolate the Pods from each other instead of enforcing container validation on Pods.

The option that says: Identify security vulnerabilities among the containers used by your customers’ Pods using the Container Analysis API is incorrect because this simply allows you to query and retrieve critical metadata about your software artifacts. This will not help you isolate the Pods in the cluster.

The option that says: Utilize the cos_containerd image when creating GKE nodes. Add a nodeSelector field to your pod configuration with the value of cloud.google.com/gke-os-distribution: cos_containerd is incorrect because this just helps you define which node your Pods will be assigned. This is best used when you want a Pod to run on a specific node but not for isolating your Pods.

References:
https://cloud.google.com/kubernetes-engine/docs/concepts/sandbox-pods
https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods
https://cloud.google.com/binary-authorization

Note: This question was extracted from our Google Certified Associate Cloud Engineer Practice Exams.


Question 2

Your company decided to use the Google Kubernetes Engine service with local PersistentVolumes to handle its batch processing jobs. These jobs only run overnight to process non-critical workloads and can be restarted at any time. You are tasked to deploy the most cost-effective solution.

What should you do?

  1. Create a Google Kubernetes Engine Cluster and enable Vertical Pod Autoscaling using the VerticalPodAutoscaler custom resource.
  2. Create a Google Kubernetes Engine Cluster and enable the node auto-provisioning feature.
  3. Create a Google Kubernetes Engine Cluster. Create a node pool and select the Enable preemptible nodes checkbox.
  4. Create a Google Kubernetes Engine Cluster. Enable autoscaling to automatically create and delete nodes.

Correct Answer: 3

Preemptible VMs are Compute Engine VM instances that last a maximum of 24 hours in general and provide no availability guarantees. Preemptible VMs are priced lower than standard Compute Engine VMs and offer the same machine types and options.

You can use preemptible VMs in your GKE clusters or node pools to run batch or fault-tolerant jobs that are less sensitive to the ephemeral, non-guaranteed nature of preemptible VMs.

When GKE clusters or node pools create Compute Engine VMs, the VMs behave like a managed instance group. Preemptible VMs in GKE are subject to the same limitations as preemptible instances in a managed instance group. Preemptible instances terminate after 30 seconds upon receiving a preemption notice.

To use preemptible VMs in a node pool, select the Enable preemptible nodes checkbox during node pool creation.

You can create a cluster or node pool with preemptible VMs by specifying the --preemptible flag.

gcloud container clusters create cluster-name --preemptible

gcloud container node-pools create pool-name --preemptible \
  --cluster cluster-name

Hence the correct answer is: Create a Google Kubernetes Engine Cluster. Create a node pool and select the Enable preemptible nodes checkbox.

The option that says: Create a Google Kubernetes Engine Cluster and enable Vertical Pod Autoscaling using the VerticalPodAutoscaler custom resource is incorrect because the Vertical Pod Autoscaling service is primarily used to automate the configuration of your container’s CPU and memory request limits. It doesn’t lower the cost of your batch jobs the way the Enable preemptible nodes feature does.

The option that says: Create a Google Kubernetes Engine Cluster and enable the node auto-provisioning feature is incorrect because the Node Auto-provisioning feature just automatically manages a set of node pools in the Google Kubernetes Engine cluster on your behalf. It is stated in the scenario that the cluster is using local PersistentVolumes, which doesn’t support the node auto-provisioning feature.

The option that says: Create a Google Kubernetes Engine Cluster. Enable autoscaling to automatically create and delete nodes is incorrect. Even though this approach can save costs by automatically deleting unused nodes, using preemptible VMs still provides a bigger cost reduction. 

References: 
https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms
https://cloud.google.com/blog/products/containers-kubernetes/cutting-costs-with-google-kubernetes-engine-using-the-cluster-autoscaler-and-preemptible-vms
https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning


Note: This question was extracted from our Google Certified Associate Cloud Engineer Practice Exams.


Google Kubernetes Engine Cheat Sheet References:

https://cloud.google.com/kubernetes-engine
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
https://kubernetes.io/docs/concepts/services-networking/service/


Written by: Jon Bonso

Jon Bonso is the co-founder of Tutorials Dojo, an EdTech startup and an AWS Digital Training Partner that provides high-quality educational materials in the cloud computing space. He graduated from Mapúa Institute of Technology in 2007 with a bachelor's degree in Information Technology. Jon holds 10 AWS Certifications and is also an active AWS Community Builder since 2020.
