
Google Cloud Spanner

Last updated on March 26, 2023

Google Cloud Spanner Cheat Sheet

  • A fully managed relational database service that scales horizontally with strong consistency.

Features

  • Offers an SLA of up to 99.999% availability for multi-regional instances, which is 10x less downtime than four nines (99.99%).
  • Provides transparent, synchronous replication in both regional and multi-region configurations.
  • Optimizes performance by automatically sharding data based on request load and data size, so you can spend less time thinking about scaling your database and more time scaling your business.
  • Instances can run in a regional or a multi-region configuration; a multi-region instance can survive the failure of an entire region.
  • All tables must have a declared primary key (PK), which can be composed of multiple table columns.
  • Can make schema changes, such as adding a column or an index, while serving live traffic with zero downtime (see the client-library sketch after this list).
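
The last two points above can be illustrated with the google-cloud-spanner Python client. This is a minimal sketch for illustration only: the instance ID, database ID, table, and index names are assumptions, not values from this article.

```python
# Illustrative sketch: declare a table with a composite primary key and add an index
# online using the google-cloud-spanner client. Instance/database IDs are assumed.
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("test-instance")   # assumed, pre-existing instance
database = instance.database("orders-db")     # assumed, pre-existing database

# Every Cloud Spanner table must declare a primary key; here it spans two columns.
create_table = """
CREATE TABLE Orders (
    CustomerId INT64 NOT NULL,
    OrderId    INT64 NOT NULL,
    OrderTotal FLOAT64
) PRIMARY KEY (CustomerId, OrderId)
"""

# Schema changes such as adding an index run online, while the database serves traffic.
add_index = "CREATE INDEX OrdersByTotal ON Orders(OrderTotal)"

operation = database.update_ddl([create_table, add_index])
operation.result()  # block until the long-running schema change completes
```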

Pricing

  • Pricing for Cloud Spanner is simple and predictable. You are only charged for:
    • number of nodes in your instance
    • amount of storage that your tables and secondary indexes use (not pre-provisioned)
    • amount of network bandwidth (egress) used
  • Note that there is no additional charge for replication. (A rough worked cost example follows below.)
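
As a rough worked example of this model, the snippet below estimates a monthly bill from the three billed components. The rates are placeholders, not actual Cloud Spanner prices, which vary by region and over time (see https://cloud.google.com/spanner/pricing).

```python
# Rough cost estimate using PLACEHOLDER rates -- for illustrating the pricing model only.
NODE_HOURLY_RATE = 0.90       # assumed $/node/hour
STORAGE_RATE_GB_MONTH = 0.30  # assumed $/GB/month
EGRESS_RATE_GB = 0.10         # assumed $/GB of network egress

nodes = 3
storage_gb = 500              # storage actually used by tables and secondary indexes
egress_gb = 50
hours_in_month = 730

monthly_cost = (
    nodes * NODE_HOURLY_RATE * hours_in_month   # compute: charged per node-hour
    + storage_gb * STORAGE_RATE_GB_MONTH        # storage used, not pre-provisioned
    + egress_gb * EGRESS_RATE_GB                # egress; replication itself is free
)
print(f"Estimated monthly cost: ${monthly_cost:,.2f}")
```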

Validate Your Knowledge

Question 1

A company has an application that uses Cloud Spanner as its backend database. After a few months of monitoring the Cloud Spanner resource, you noticed that the application's incoming traffic follows a predictable pattern. You need to set up automatic scaling that will scale your Spanner nodes up or down based on the incoming traffic. As much as possible, you don't want to use an open-source tool.

What should you do?

  1. Set up an Autoscaler infrastructure in the same project where the Cloud Spanner is deployed to automatically scale the Cloud Spanner resources according to its CPU metric.
  2. Set up an alerting policy on Cloud Monitoring that sends an email alert to on-call Site Reliability Engineers (SRE) when the Cloud Spanner CPU metric exceeds the desired threshold. The SREs shall scale the resources up or down appropriately.
  3. Set up an alerting policy on Cloud Monitoring that sends an alert to a webhook when the Cloud Spanner CPU metric is over or under your desired threshold. Create a Cloud Function that listens to this HTTP webhook and resizes Spanner resources appropriately.
  4. Set up an alerting policy on Cloud Monitoring that sends an email alert to Google Cloud Support email when the Cloud Spanner CPU metric exceeds the desired threshold. The Google Support team shall scale the resources up or down appropriately.

Correct Answer: 3

When you create a Cloud Spanner instance, you choose the number of nodes that provide compute resources for the instance. As the instance’s workload changes, Cloud Spanner does not automatically adjust the number of nodes in the instance. As a result, you need to set up several alerts or use an Autoscaler tool to ensure that the instance stays within the recommended maximums for CPU utilization and the recommended limit for storage per node.
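
For the alerting half of that setup, an alerting policy on the Spanner CPU metric could be created with the google-cloud-monitoring Python client along the lines below. This is a hedged sketch: the project ID, the 65%/5-minute threshold, the display names, and the webhook notification-channel resource name are assumptions, not values from this article.

```python
# Hypothetical sketch: create a Cloud Monitoring alerting policy on the Spanner CPU
# utilization metric that notifies a pre-created webhook channel. IDs are assumed.
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"                                     # assumed
WEBHOOK_CHANNEL = (                                           # assumed, pre-created channel
    f"projects/{PROJECT_ID}/notificationChannels/1234567890"
)

client = monitoring_v3.AlertPolicyServiceClient()
policy = monitoring_v3.AlertPolicy(
    display_name="Spanner high CPU",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[{
        "display_name": "CPU utilization above 65% for 5 minutes",
        "condition_threshold": {
            "filter": (
                'metric.type = "spanner.googleapis.com/instance/cpu/utilization" '
                'AND resource.type = "spanner_instance"'
            ),
            "comparison": monitoring_v3.ComparisonType.COMPARISON_GT,
            "threshold_value": 0.65,
            "duration": {"seconds": 300},
        },
    }],
    notification_channels=[WEBHOOK_CHANNEL],
)
created = client.create_alert_policy(name=f"projects/{PROJECT_ID}", alert_policy=policy)
print(f"Created alerting policy: {created.name}")
```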

You can invoke Cloud Functions with an HTTP request using the POST, PUT, GET, DELETE, and OPTIONS HTTP methods. To create an HTTP endpoint for your function, specify --trigger-http as the trigger type when deploying your function. From the caller's perspective, HTTP invocations are synchronous, meaning the result of the function execution will be returned in the response to the HTTP request.
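
Putting the two together, an HTTP-triggered Cloud Function along the following lines could receive the Monitoring webhook and resize the instance. This is a minimal sketch under stated assumptions: the instance ID, the fixed target node counts, and the simple open/closed handling of the webhook payload are illustrative, not a production-ready autoscaler.

```python
# Illustrative sketch of the approach in the correct answer: an HTTP Cloud Function
# (deployed with --trigger-http) that reacts to a Cloud Monitoring webhook and
# resizes a Spanner instance. Instance ID and node counts are assumed values.
from google.cloud import spanner

INSTANCE_ID = "my-spanner-instance"  # assumed instance name
SCALE_UP_NODES = 5                   # assumed target size when CPU is high
SCALE_DOWN_NODES = 2                 # assumed target size when CPU falls back

def resize_spanner(request):
    """HTTP Cloud Function entry point."""
    incident = (request.get_json(silent=True) or {}).get("incident", {})
    # Monitoring webhook payloads carry an "incident" object; here we only check
    # whether the alert opened (CPU over threshold) or closed (back under it).
    scale_up = incident.get("state") == "open"

    client = spanner.Client()
    instance = client.instance(INSTANCE_ID)
    instance.reload()
    instance.node_count = SCALE_UP_NODES if scale_up else SCALE_DOWN_NODES
    operation = instance.update()   # long-running operation to resize the instance
    operation.result()
    return f"Resized {INSTANCE_ID} to {instance.node_count} node(s)", 200
```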

Cloud Spanner (Autoscaler)

The Autoscaler tool for Cloud Spanner (Autoscaler) is an open-source tool that you can use as a companion to Cloud Spanner. It lets you automatically increase or reduce the number of nodes or processing units in one or more Spanner instances based on how their capacity is being used.

Autoscaler monitors your instances and automatically adds or removes nodes or processing units to help ensure that they stay within the following parameters:

– The recommended maximums for CPU utilization.

– The recommended limit for storage per node, plus or minus a configurable margin.

To deploy Autoscaler, decide which of the following topologies is best to fulfill your technical and operational needs:

Per-project topology: The Autoscaler infrastructure is deployed in the same project as Cloud Spanner that needs to be autoscaled.

Centralized topology: Autoscaler is deployed in one project and manages one or more Cloud Spanner instances in different projects.

Distributed topology: Most of the Autoscaler infrastructure is deployed in one project, but some infrastructure components are deployed with the Cloud Spanner instances being autoscaled in different projects.

In the scenario, you have to find a method where you can automatically scale your Cloud Spanner resources based on a traffic pattern. As much as possible, you also don’t want to use an open-source tool. Since Cloud Spanner does not scale automatically, you have to check for CPU usage of your Spanner instances and find a way to trigger your Cloud Spanner database to scale its resources accordingly. Moreover, you have to ensure that these steps are done automatically.

Hence the correct answer is: Set up an alerting policy on Cloud Monitoring that sends an alert to a webhook when the Cloud Spanner CPU metric is over or under your desired threshold. Create a Cloud Function that listens to this HTTP webhook and resizes Spanner resources appropriately.

The option that says: Set up an Autoscaler infrastructure in the same project where the Cloud Spanner is deployed to automatically scale the Cloud Spanner resources according to its CPU metric is incorrect because the Autoscaler tool for Cloud Spanner (Autoscaler) is an open-source tool.

The option that says: Set up an alerting policy on Cloud Monitoring that sends an email alert to on-call Site Reliability Engineers (SRE) when the Cloud Spanner CPU metric exceeds the desired threshold. The SREs shall scale the resources up or down appropriately is incorrect because this method requires an on-call SRE every time there is an alert which means that the scaling will be done manually rather than automatically.

The option that says: Set up an alerting policy on Cloud Monitoring that sends an email alert to Google Cloud Support email when the Cloud Spanner CPU metric exceeds the desired threshold. The Google Support team shall scale the resources up or down appropriately is incorrect because this does not satisfy the requirement to scale the Cloud Spanner resources automatically. In this method, you would be delegating the task of scaling the Spanner resources to the Google Cloud Support team every time an alert is triggered.

References:

https://cloud.google.com/spanner/docs/monitoring-cloud
https://cloud.google.com/functions/docs/writing/http
https://cloud.google.com/architecture/autoscaling-cloud-spanner
https://github.com/cloudspannerecosystem/autoscaler

Note: This question was extracted from our Google Certified Associate Cloud Engineer Practice Exams.

For more Google Cloud practice exam questions with detailed explanations, check out the Tutorials Dojo Portal:

Google Certified Associate Cloud Engineer Practice Exams

Google Cloud Spanner Cheat Sheet Reference:

https://cloud.google.com/spanner

Written by: Jon Bonso

Jon Bonso is the co-founder of Tutorials Dojo, an EdTech startup and an AWS Digital Training Partner that provides high-quality educational materials in the cloud computing space. He graduated from Mapúa Institute of Technology in 2007 with a bachelor's degree in Information Technology. Jon holds 10 AWS Certifications and is also an active AWS Community Builder since 2020.

