Google Cloud Load Balancing

Last updated on March 26, 2023

Google Cloud Load Balancing Cheat Sheet

  • Google Cloud Load Balancing allows you to put your resources behind a single IP address.

Features

  • Can be set to be available externally or internally within your Virtual Private Cloud (VPC) network.
  • HTTP(S) load balancing can balance HTTP and HTTPS traffic across multiple backend instances in multiple regions.
  • Enable Cloud CDN for HTTP(S) load balancing to optimize application delivery for your users with a single checkbox.
  • You can define the autoscaling policy, and the autoscaler performs automatic scaling based on the measured load. No pre-warming is required, so you can go from zero to full throttle in seconds (see the sketch after this list).
  • Manage SSL certificates and decryption.
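
The autoscaling behavior described above is configured on the backend managed instance group rather than on the load balancer itself. Below is a minimal sketch using the google-cloud-compute Python client; the project ID, zone, instance group name, replica bounds, and the 60% CPU target are all placeholder assumptions, not values from this cheat sheet.

```python
from google.cloud import compute_v1

# Placeholders -- substitute your own project, zone, and managed instance group.
PROJECT = "my-project"
ZONE = "us-central1-a"
MIG_URL = f"projects/{PROJECT}/zones/{ZONE}/instanceGroupManagers/web-mig"

# Example autoscaling policy: keep 2-10 instances, targeting ~60% average CPU.
autoscaler = compute_v1.Autoscaler(
    name="web-autoscaler",
    target=MIG_URL,
    autoscaling_policy=compute_v1.AutoscalingPolicy(
        min_num_replicas=2,
        max_num_replicas=10,
        cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(
            utilization_target=0.6
        ),
    ),
)

# insert() starts a long-running operation; once it completes, the autoscaler
# scales the instance group automatically based on the measured load.
operation = compute_v1.AutoscalersClient().insert(
    project=PROJECT, zone=ZONE, autoscaler_resource=autoscaler
)
```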

Types of Google Cloud Load Balancers

  • External Load Balancer
    • External HTTP(S)
      • Supports HTTP and HTTPS traffic
      • Distributes traffic for the following backend types:
        • Instance groups
        • Zonal network endpoint groups (NEGs)
        • Serverless NEGs: One or more App Engine, Cloud Run, or Cloud Functions services
        • Internet NEGs, for endpoints that are outside of Google Cloud (also known as custom origins)
        • Buckets in Cloud Storage
      • Scope is global
      • Destination ports
        • HTTP on 80 or 8080
        • HTTPS on 443
      • On each backend service, you can optionally enable Cloud CDN and Google Cloud Armor (see the provisioning sketch after this list).
    • External Network TCP/UDP
      • A network load balancer that distributes TCP or UDP traffic among virtual machines in the same region.
      • Regional in scope
      • Can receive traffic from:
        • Any client on the Internet
        • Google Cloud VMs with external IP
        • Google Cloud VMs that have Internet access through Cloud NAT or instance-based NAT
      • Network load balancers are not proxies.
        • Load-balanced packets are received by backend VMs with their source IP unchanged.
        • Load-balanced connections are terminated by the backend VMs.
        • Responses from the backend VMs go directly to the clients, not back through the load balancer. The industry term for this is direct server return.
    • SSL Proxy Load Balancer
      • Supports TCP traffic with SSL offload.
      • It is intended for non-HTTP(S) traffic.
      • Scope is global.
      • With SSL Proxy Load Balancing, SSL connections are terminated at the load balancing layer and then proxied to the closest available backend.
      • Destination ports
        • 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 3389, 5222, 5432, 5671, 5672, 5900, 5901, 6379, 8085, 8099, 9092, 9200, and 9300
    • TCP Proxy
      • Traffic coming over a TCP connection is terminated at the load balancing layer, and then proxied to the closest available backend.
      • Destination Ports
        • 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 3389, 5222, 5432, 5671, 5672, 5900, 5901, 6379, 8085, 8099, 9092, 9200, and 9300.
      • Can be configured as a global service where you can deploy your backends in multiple regions and it automatically directs traffic to the region closest to the user.
  • Internal Load Balancer
    • Internal HTTP(S)
      • A proxy-based, regional Layer 7 load balancer that enables you to run and scale your services behind an internal IP address.
      • Supports HTTP and HTTPS traffic.
      • Distributes traffic to backends hosted on Google Compute Engine (GCE) and Google Kubernetes Engine (GKE).
      • Scope is regional.
      • Load Balancer destination ports
        • HTTP on 80 or 8080
        • HTTPS on 443
    • Internal TCP or UDP
      • A regional load balancer that allows you to run and scale your services behind an internal load balancing IP address that is accessible only to your internal virtual machine instances.
      • Distributes traffic among virtual machine instances in the same region in a Virtual Private Cloud (VPC) network by using an internal IP address.
      • Does not support:
        • Backend virtual machines in multiple regions
        • Balancing traffic that originates from the Internet
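
To make the moving parts of a global external HTTP(S) load balancer concrete, here is a minimal provisioning sketch using the google-cloud-compute Python client. It chains a health check, a backend service that points at an existing managed instance group, a URL map, a target HTTP proxy, and a global forwarding rule on port 80. The project ID, resource names, and instance group URL are placeholder assumptions.

```python
from google.cloud import compute_v1

PROJECT = "my-project"  # placeholder project ID
# Assumes a managed instance group already exists to act as the backend.
MIG_URL = f"projects/{PROJECT}/zones/us-central1-a/instanceGroups/web-mig"

# 1. Health check used by the backend service to probe instances on port 80.
compute_v1.HealthChecksClient().insert(
    project=PROJECT,
    health_check_resource=compute_v1.HealthCheck(
        name="http-basic-check",
        type_="HTTP",
        http_health_check=compute_v1.HTTPHealthCheck(port=80),
    ),
)

# 2. Global backend service that fronts the instance group.
compute_v1.BackendServicesClient().insert(
    project=PROJECT,
    backend_service_resource=compute_v1.BackendService(
        name="web-backend-service",
        protocol="HTTP",
        load_balancing_scheme="EXTERNAL",
        health_checks=[f"projects/{PROJECT}/global/healthChecks/http-basic-check"],
        backends=[compute_v1.Backend(group=MIG_URL)],
    ),
)

# 3. URL map that routes all requests to the backend service.
compute_v1.UrlMapsClient().insert(
    project=PROJECT,
    url_map_resource=compute_v1.UrlMap(
        name="web-map",
        default_service=f"projects/{PROJECT}/global/backendServices/web-backend-service",
    ),
)

# 4. Target HTTP proxy that evaluates the URL map.
compute_v1.TargetHttpProxiesClient().insert(
    project=PROJECT,
    target_http_proxy_resource=compute_v1.TargetHttpProxy(
        name="http-lb-proxy",
        url_map=f"projects/{PROJECT}/global/urlMaps/web-map",
    ),
)

# 5. Global forwarding rule: the single external IP clients connect to on port 80.
compute_v1.GlobalForwardingRulesClient().insert(
    project=PROJECT,
    forwarding_rule_resource=compute_v1.ForwardingRule(
        name="http-content-rule",
        target=f"projects/{PROJECT}/global/targetHttpProxies/http-lb-proxy",
        port_range="80",
        load_balancing_scheme="EXTERNAL",
    ),
)
```

Each insert() call returns a long-running operation; in practice you would wait for each one to finish before creating the resource that references it. An HTTPS variant would additionally need an SSL certificate, a target HTTPS proxy, and a forwarding rule on port 443.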

Validate Your Knowledge

Question 1

You deploy a web application running on a Compute Engine instance in the asia-northeast1-a zone. You want to eliminate the risk of possible downtime due to the failure of a single Compute Engine zone while minimizing costs.

What should you do?

  1. Deploy another instance in asia-northeast1-b. Balance the load in asia-northeast1-a, and asia-northeast1-b using an Internal Load Balancer (ILB).
  2. Deploy multiple instances on asia-northeast1-a, asia-northeast1-b, and asia-northeast1-c. Balance the load across all zones using an Internal Load Balancer (ILB).
  3. Create an instance template and deploy a managed instance group in a single zone. Configure a health check to monitor the instances.
  4. Create a snapshot schedule for your instance. Set up a Cloud Monitoring Alert to monitor the instance. Restore the instance using the snapshot when the instance goes down.

Correct Answer: 1

You can host your Compute Engine resources in different geographical locations called regions. A region is composed of three or more zones, which are isolated locations within that region. Compute resources like VMs and persistent disks are hosted in these zones.

Google recommends deploying applications in multiple regions and zones to protect them from unforeseen component failures or even sudden zonal and regional outages. This makes your application fault-tolerant and highly available.

Hence, the correct answer is: Deploy another instance in asia-northeast1-b. Balance the load in asia-northeast1-a and asia-northeast1-b using an Internal Load Balancer (ILB).

The option that says: Create an instance template and deploy a managed instance group in a single zone. Configure a health check to monitor the instances is incorrect because this does not allow the application to withstand a zonal outage. When a single zone is selected as the location of the managed instance group, it only spawns instances in that zone.

The option that says: Deploy multiple instances on asia-northeast1-a, asia-northeast1-b and asia-northeast1-c. Balance the load across all zones using an Internal Load Balancer (ILB) is incorrect because having the application deployed in all three zones (asia-northeast1-a, asia-northeast1-b, and asia-northeast1-c) will incur more cost. Deploying the application in two zones is more cost-effective and already satisfies the given requirements.

The option that says: Create a snapshot schedule for your instance. Set up a Cloud Monitoring Alert to monitor the instance. Restore the instance using the snapshot when the instance goes down is incorrect because this solution can’t protect the application from zonal failure. This only gives you an application backup that requires a lot of time to restore.
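
To illustrate the correct option, the sketch below creates one additional Compute Engine instance in asia-northeast1-b with the google-cloud-compute Python client; the project ID, instance name, machine type, and image are placeholder assumptions. Both instances would then be registered (for example, via an instance group) as backends of the Internal Load Balancer.

```python
from google.cloud import compute_v1

PROJECT = "my-project"        # placeholder project ID
ZONE = "asia-northeast1-b"    # second zone in the same region as asia-northeast1-a

# A second web server instance; machine type and image are example values.
instance = compute_v1.Instance(
    name="web-app-b",
    machine_type=f"zones/{ZONE}/machineTypes/e2-small",
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12",
            ),
        )
    ],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)

# Creating the instance is a long-running operation.
compute_v1.InstancesClient().insert(
    project=PROJECT, zone=ZONE, instance_resource=instance
)
```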

References:
https://cloud.google.com/compute/docs/load-balancing-and-autoscaling
https://cloud.google.com/compute/docs/regions-zones

Note: This question was extracted from our Google Certified Associate Cloud Engineer Practice Exams.

Question 2

Your team maintains an application that receives SSL/TLS-encrypted traffic on port 443. Your customers from various parts of the globe report latency issues when accessing your application. 

What should you do?

  1. Use an External HTTP(S) Load Balancer in front of your application.
  2. Use an SSL Proxy Load Balancer in front of your application.
  3. Use a TCP Proxy in front of your application.
  4. Use an Internal HTTP(S) Load Balancer in front of your application.

Correct Answer: 2

A load balancer distributes user traffic across multiple instances of your applications. By spreading the load, load balancing reduces the risk that your applications experience performance issues.

The external HTTP(S) load balancer and SSL proxy load balancer terminate Transport Layer Security (TLS) in locations that are distributed globally, so as to minimize latency between clients and the load balancer. If you require geographic control over where TLS is terminated, you should use Network Load Balancing instead, and terminate TLS on backends that are located in regions appropriate to your needs.

It is stated in the scenario that your application must handle SSL/TLS-encrypted traffic on port 443 from your global customers.

Hence, the correct answer is: Use an SSL Proxy Load Balancer in front of your application.

The option that says: Use an External HTTP(S) Load Balancer in front of your application is incorrect because this type of load balancer only handles HTTP and HTTPS traffic. It is not designed to offload SSL/TLS-encrypted traffic for non-HTTP protocols.

The option that says: Use a TCP Proxy in front of your application is incorrect. Although it handles TCP traffic, it is not designed to offload SSL connections.

The option that says: Use an Internal HTTP(S) Load Balancer in front of your application is incorrect because an internal load balancer does not expose your application on the public Internet. In addition, Internal HTTP(S) Load Balancers only handle HTTP and HTTPS traffic.
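
For reference, an SSL Proxy Load Balancer is assembled in much the same way as the HTTP(S) example earlier in this cheat sheet, except that the backend service uses the SSL protocol and a target SSL proxy carries the certificate. A minimal sketch with the google-cloud-compute Python client follows; it assumes the backend service and an SSL certificate already exist, and all names are placeholders.

```python
from google.cloud import compute_v1

PROJECT = "my-project"  # placeholder project ID
# Assumed to exist already: a global backend service (protocol SSL) and an SSL
# certificate resource uploaded to Compute Engine.
BACKEND_SERVICE = f"projects/{PROJECT}/global/backendServices/ssl-backend-service"
CERTIFICATE = f"projects/{PROJECT}/global/sslCertificates/www-ssl-cert"

# Target SSL proxy: terminates TLS at the edge and forwards to the backend service.
compute_v1.TargetSslProxiesClient().insert(
    project=PROJECT,
    target_ssl_proxy_resource=compute_v1.TargetSslProxy(
        name="ssl-lb-proxy",
        service=BACKEND_SERVICE,
        ssl_certificates=[CERTIFICATE],
    ),
)

# Global forwarding rule exposing the load balancer on TCP port 443.
compute_v1.GlobalForwardingRulesClient().insert(
    project=PROJECT,
    forwarding_rule_resource=compute_v1.ForwardingRule(
        name="ssl-content-rule",
        target=f"projects/{PROJECT}/global/targetSslProxies/ssl-lb-proxy",
        port_range="443",
        load_balancing_scheme="EXTERNAL",
    ),
)
```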

References:
https://cloud.google.com/load-balancing/docs/choosing-load-balancer
https://cloud.google.com/load-balancing

Note: This question was extracted from our Google Certified Associate Cloud Engineer Practice Exams.

For more Google Cloud practice exam questions with detailed explanations, check out the Tutorials Dojo Portal:

Google Certified Associate Cloud Engineer Practice Exams

Google Cloud Load Balancing Cheat Sheet References:

https://cloud.google.com/load-balancing/docs/concepts
https://cloud.google.com/load-balancing/docs/load-balancing-overview

Written by: Jon Bonso

Jon Bonso is the co-founder of Tutorials Dojo, an EdTech startup and an AWS Digital Training Partner that provides high-quality educational materials in the cloud computing space. He graduated from Mapúa Institute of Technology in 2007 with a bachelor's degree in Information Technology. Jon holds 10 AWS Certifications and has been an active AWS Community Builder since 2020.

