

Google Container Registry

Last updated on November 6, 2024

Google Container Registry Cheat Sheet

  • Container Registry is a container image repository to manage Docker images, perform vulnerability analysis, and define fine-grained access control.

Features

  • Automatically build and push images to a private registry when you commit code to Cloud Source Repositories, GitHub, or Bitbucket.
  • You can push and pull Docker images to and from your private Container Registry using the standard Docker command-line interface (see the sketch after this list).
  • The system creates a Cloud Storage bucket to store all of your images the first time you push an image to Container Registry.
  • You control who can access, view, or download images.
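
The push/pull workflow above is normally done with the docker CLI, but it can also be scripted. The following is a minimal sketch using the Docker SDK for Python; it is not part of the original cheat sheet, and the project ID (my-project) and image name (my-app) are hypothetical placeholders. It assumes Docker is already authenticated to the registry (for example, via gcloud auth configure-docker).

```python
# Minimal sketch: tag a local image and push it to Container Registry with the
# Docker SDK for Python (pip install docker). "my-project" and "my-app" are
# hypothetical placeholders; authentication to gcr.io is assumed to be set up.
import docker

client = docker.from_env()

# Tag the locally built image with its gcr.io registry path.
image = client.images.get("my-app:latest")
image.tag("gcr.io/my-project/my-app", tag="v1")

# Push the tagged image; the first push creates the backing Cloud Storage bucket.
for status in client.images.push("gcr.io/my-project/my-app", tag="v1",
                                 stream=True, decode=True):
    print(status)
```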

Pricing

  • Container Registry charges for the following:
    • Storing images on Cloud Storage.
    • Network egress for containers stored in the registry.
  • Network ingress is free.
  • If the Container Scanning API is enabled in either Container Registry or Artifact Registry, vulnerability scanning is turned on and billed for both products.

Validate Your Knowledge

Question 1

Your company stores all of its container images on Google Artifact Registry in a project called td-devops. The development team created a Google Kubernetes Engine (GKE) cluster on a separate project and needs to download container images from the td-devops project.

What should you do to ensure that Kubernetes can download the images from Artifact Registry securely?

  1. In the td-devops project, assign the Storage Object Viewer IAM role to the service account used by the GKE nodes.
  2. Upon creating the GKE cluster, set the Access Scopes setting under Node Security to Allow Full Access to all Cloud APIs.
  3. Generate a P12 key for a new service account. Use the generated key as an imagePullSecrets in Kubernetes to access the private registry.
  4. In the Google Cloud Storage, configure the ACLs on each container image stored and provide read-write access to the service account used by the GKE nodes.

Correct Answer: 1

Google Artifact Registry manages container images directly without relying on individual Cloud Storage buckets for each image. Instead, Artifact Registry is a unified platform that provides repository-level permissions, allowing more granular access control over specific repositories and images. To enable a Kubernetes Engine (GKE) cluster to access images stored in the Artifact Registry from another project, the appropriate IAM permissions must be set for the service account used by the GKE nodes.

In this case, assigning the Artifact Registry Reader or Storage Object Viewer role (depending on the level of access needed) at the repository level in the td-devops project to the GKE service account will allow the nodes to securely pull images as needed. This approach eliminates the need for bucket-specific or object-level permissions associated with Cloud Storage, providing a simpler and more secure method for controlling access to container images.
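
As a rough illustration of that repository-level grant, the sketch below uses the google-cloud-artifact-registry Python client to add the Artifact Registry Reader role for the GKE node service account. This is an assumption-laden example rather than the article's own code; the location, repository name, and service account address are hypothetical.

```python
# Sketch (hypothetical names): grant roles/artifactregistry.reader on one
# repository in td-devops to the GKE node service account of another project.
# Requires: pip install google-cloud-artifact-registry
from google.cloud import artifactregistry_v1

client = artifactregistry_v1.ArtifactRegistryClient()
repo = "projects/td-devops/locations/us-central1/repositories/td-images"
member = "serviceAccount:gke-nodes@td-gke-project.iam.gserviceaccount.com"

# Read-modify-write the repository's IAM policy.
policy = client.get_iam_policy(request={"resource": repo})
policy.bindings.add(role="roles/artifactregistry.reader", members=[member])
client.set_iam_policy(request={"resource": repo, "policy": policy})
```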

https://media.tutorialsdojo.com/google_cloud_storage_object_viewer_service_account.PNG

Hence, the correct answer is: In the td-devops project, assign the Storage Object Viewer IAM role to the service account used by the GKE nodes. We can further tighten this by limiting access to the specific bucket that Container Registry uses to store the container images.

The option that says: Upon creating the GKE cluster, set the Access Scopes setting under Node Security to Allow Full Access to all Cloud APIs is incorrect because this does not grant permission to access Container Registry images in a different project. By default, GKE nodes are configured with the permissions needed to pull images from the registry, but only within the same project.

The option that says: Generate a P12 key for a new service account. Use the generated key as an imagePullSecrets in Kubernetes to access the private registry is incorrect because it is not possible to authenticate to Artifact Registry using P12 keys. Instead, you can use JSON key files, access tokens, or the gcloud and standalone Docker credential helpers. These authentication methods are best suited for accessing Artifact Registry images from a non-GKE environment.
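
For context on the JSON-key method mentioned above, a non-GKE client could log Docker in to the registry with the key file contents as the password. This is only a hedged sketch; the key file path is a placeholder.

```python
# Sketch: authenticate Docker to gcr.io from a non-GKE environment using a
# service account JSON key (the key file path below is a placeholder).
import docker

client = docker.from_env()
with open("service-account-key.json") as key_file:
    client.login(
        username="_json_key",      # fixed username for JSON-key auth on gcr.io
        password=key_file.read(),  # the key file contents act as the password
        registry="https://gcr.io",
    )
```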

The option that says: In the Google Cloud Storage, configure the ACLs on each container image stored and provide read-write access to the service account used by the GKE nodes is incorrect because Artifact Registry does not use Cloud Storage ACLs for access control. Instead, it relies on IAM roles set at the repository level to control permissions for container images. Assigning the Storage Object Viewer or Artifact Registry Reader IAM role to the service account used by the GKE nodes is the proper way to provide secure access to the images stored in the Artifact Registry.

References:

https://cloud.google.com/container-registry/docs/access-control
https://cloud.google.com/container-registry/docs/advanced-authentication#methods

Check out our Google Container Registry Cheat Sheets:

https://tutorialsdojo.com/google-container-registry/

Note: This question was extracted from our Google Certified Associate Cloud Engineer Practice Exams.

For more Google Cloud practice exam questions with detailed explanations, check out the Tutorials Dojo Portal:

Google Certified Associate Cloud Engineer Practice Exams

Google Container Registry Cheat Sheet Reference:

https://cloud.google.com/container-registry



Written by: Jon Bonso

Jon Bonso is the co-founder of Tutorials Dojo, an EdTech startup and an AWS Digital Training Partner that provides high-quality educational materials in the cloud computing space. He graduated from Mapúa Institute of Technology in 2007 with a bachelor's degree in Information Technology. Jon holds 10 AWS Certifications and is also an active AWS Community Builder since 2020.
