AWS Certified Machine Learning Engineer Associate MLA-C01 BETA Exam Guide

The AWS Training and Certification team recently released the new AWS Certified Machine Learning Engineer Associate MLA-C01 exam along with the AWS Certified AI Practitioner AIF-C01 test in June 2024. This new role-based, Associate-level AWS certification exam will allow you to validate your machine learning skills to your current or future employer, as well as improve your AI know-how in relation to AWS. You can be among the first batch of IT professionals worldwide to earn this new certification when registration opens on August 13, 2024.

AWS Certified Machine Learning Engineer Associate MLA-C01 Beta Exam Overview

The AWS Certified Machine Learning Engineer Associate MLA-C01 exam is suitable for individuals who have at least 1 year of experience in machine learning (ML) engineering or a related field. Having a year of hands-on experience with various ML-related AWS services is also recommended. If you do not have prior machine learning experience, you can take the recommended courses available in the Exam Prep section of this article to help you kick off your training.
 
The beta version of the AWS Certified Machine Learning Engineer Associate MLA-C01 exam lasts 170 minutes and consists of 85 questions. The exam costs 75 USD or 10,000 JPY, though pricing may vary in other countries. The MLA-C01 certification exam is ideal for professionals in IT roles such as backend software developers, DevOps engineers, data engineers, MLOps engineers, and data scientists. You can take this new exam either at any Pearson VUE testing center or as an online proctored exam using the OnVue app. The MLA-C01 exam is currently available in English and Japanese, with other languages to follow.
 
 
Acquiring this certification validates your technical ability to implement ML workloads in production and to operationalize or improve your ML pipelines. Earning it early also strengthens your career profile and credibility, positioning you for in-demand machine learning roles ahead of the crowd.

 

AWS Certified Machine Learning Engineer Associate MLA-C01 Beta Exam Topics

The items below are the relevant AWS Certified Machine Learning Engineer Associate MLA-C01 exam topics that you should know before taking the exam; a short, illustrative code sketch covering the training and evaluation topics follows the list:

  •  Data preparation for ML models
  • Feature engineering
  • Model training
  • Model performance and metrics
  • Model integration and deployment
  • Performance and cost optimization
  • Security
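To make the model training and evaluation topics above a bit more concrete, here is a minimal, hypothetical scikit-learn sketch; the synthetic dataset, model choice, and metrics are illustrative placeholders rather than anything prescribed by the exam guide:

```python
# Minimal sketch only: a synthetic dataset stands in for real, prepared features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

# "Data preparation" and "feature engineering" are reduced to a synthetic dataset here.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Model training
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Model performance and metrics
preds = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))
print("f1:", f1_score(y_test, preds))
```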

The exam covers four domains. Data Preparation for Machine Learning (ML) carries the highest weighting at 28% of the scored content, followed by ML Model Development at 26%, ML Solution Monitoring, Maintenance, and Security at 24%, and Deployment and Orchestration of ML Workflows at 22%.

Is the AWS Certified Machine Learning Engineer Associate MLA-C01 Beta Exam Worth it?

There are many IT professionals, and even students or career shifters, who claim to be knowledgeable about Artificial Intelligence concepts yet cannot build even the most basic Machine Learning solution. Many people know how to use AI tools, especially ChatGPT, but that does not mean they are capable of building applications that actually benefit companies.

This certification can serve as a tool to easily filter out dodgy candidates and job seekers who merely sprinkle “AI,” “Machine Learning,” and other buzzwords across their CVs yet don’t possess the relevant knowledge or skills. Together with the new AWS Certified AI Practitioner AIF-C01 exam, the MLA-C01 also provides an entry point for interested career shifters and experienced IT folks alike to truly learn about AI and how to build ML solutions by leveraging the power of cloud computing.

 

Exam Prep Materials for the AWS Certified Machine Learning Engineer Associate MLA-C01 Exam

You are in luck, as there are plenty of free resources that you can use to prepare for this exam. Interested IT professionals can enroll in various free and premium digital courses to fill gaps in their knowledge and skills. Our team has compiled a list of recommended courses that you can check out, which we will update regularly.

Additionally, visit the official AWS Certification page for the AWS Certified Machine Learning Engineer Associate MLA-C01 beta exam. This page provides the most up-to-date information, including the link to schedule your beta exam, as well as access to the official Exam Guide and Sample Questions.

There are digital courses for Machine Learning available in the Tutorials Dojo portal (in collaboration with AWS): https://portal.tutorialsdojo.com/product-tag/machine-learning/

 

Courses from the AWS Skill Builder site:

  • MLA-C01 Standard Exam Prep Plan – Includes only free resources.
  • MLA-C01 Enhanced Exam Prep Plan – (Paid) Includes free resources and additional content for AWS Skill Builder subscribers, such as AWS Builder Labs, game-based learning, Official Pretests, and more exam-style questions. Available starting August 13, 2024.

Heads Up! AWS has recently announced that AWS Certification is introducing three new question types: ordering, matching, and case study. These will complement the existing multiple-choice and multiple-response questions, helping to reduce your reading time and assess additional critical concepts.

For more details, read the blog post: AWS Certification: New Exam Question Types.

How Different is the Existing MLS-C01 AWS Specialty Certification from the New MLA-C01 Associate Exam?

The new AWS Certified Machine Learning Engineer – Associate MLA-C01 exam is an ML role-based certification designed for IT Professionals such as MLOps engineers with at least a year of experience. On the other hand, the AWS Certified Machine Learning – Specialty MLS-C01 is a specialty certification covering much more advanced ML topics across data engineering, data analysis, modeling, and ML implementation and ops. The latter is more suitable for individuals with more than 2 years of experience developing, architecting, and running ML workloads on AWS.

 

How Will the AWS Certified Machine Learning Engineer Associate MLA-C01 Help My Career?

Based on the recent World Economic Forum Future of Jobs Report in 2023:

  • The demand for AI and Machine Learning Specialists is likely to grow by 40% in the next couple of years.
  • 70% of IT leaders in North America have expressed difficulty filling AI/ML specialist roles in their respective organizations.

In related research conducted by AWS in November 2023, companies reported being willing to pay:

  • 43% more for ML-skilled workers in sales and marketing,
  • 42% more for those in the finance and banking industry,
  • 41% more for those in business and enterprise operations, and
  • 47% more for IT professionals in general.

The MLA-C01 certification can really position you for in-demand machine learning jobs, especially for opportunities that require extensive experience in the AWS Cloud.

 

What other AWS Certifications Should I Earn Next?

Achieve greater heights for your career with an AWS Certified Machine Learning Engineer Associate MLA-C01 certification!

 


New AWS Certified AI Practitioner AIF-C01 BETA Exam Guide

Do you always sharpen your competitive edge in the highly competitive IT industry? Are you planning to position yourself for career growth and greater remuneration? Generative AI has been making waves in almost every aspect of the economy and job market, to the point that the skills you have today could soon be worth less, or even rendered obsolete, by nascent AI-powered tools such as OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini, and by the suites of AI services from major cloud providers such as Azure and AWS.

In June 2024, the AWS Training and Certification team announced yet another addition to its existing AWS certification lineup. This new one is called the AWS Certified AI Practitioner, with the exam code AIF-C01, and it validates a candidate’s knowledge of artificial intelligence (AI), machine learning (ML), and generative AI concepts and use cases. You can be among the very first to earn this new certification when registration opens on August 13, 2024.

AWS Certified AI Practitioner AIF-C01 Beta Exam Overview

The beta version of the AWS Certified AI Practitioner AIF-C01 exam has an exam duration of 120 minutes, consisting of 85 questions. The exam costs 75 USD. It is ideal for individuals in roles such as business analysts, IT support, marketing professionals, product or project managers, line-of-business or IT managers, and sales professionals. Candidates can take the exam either at a Pearson VUE testing center or as an online proctored exam. The certification is available in both English and Japanese.


 

AWS Certified AI Practitioner AIF-C01 Beta Exam Topics

The AWS Certified AI Practitioner exam covers essential topics, including the following (a small prompt-engineering sketch follows the list):

  • Fundamental AI Concepts and Terminologies: Understand the foundational concepts and basics of AI, ML, and generative AI.
  • Use Cases: Explore practical applications of AI, ML, and generative AI in various industries.
  • Design Considerations for Foundation Models: Learn the principles of designing robust AI models.
  • Model Training and Fine-Tuning: Gain insights into the processes of training and refining AI models.
  • Prompt Engineering: Master the techniques of creating effective AI prompts.
  • Foundation Model Evaluation Criteria: Evaluate AI models based on key performance indicators.
  • Responsible AI: Ensure ethical AI practices and fairness in AI systems.
  • Security and Compliance: Maintain the security and compliance of AI solutions.
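To give the prompt engineering topic above a concrete shape, here is a small, hypothetical plain-Python sketch that assembles a structured prompt (role, context, task, constraints); it does not call any AWS service, and the wording is purely illustrative:

```python
# Hypothetical helper illustrating basic prompt-engineering structure
# (role, context, task, constraints). The wording is illustrative only.
def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Constraints:\n{constraint_text}"
    )

prompt = build_prompt(
    role="a customer-support assistant for a retail company",
    context="The customer ordered item #1234 two weeks ago and it has not arrived.",
    task="Draft a short, empathetic reply with the next steps.",
    constraints=["Keep it under 100 words", "Do not promise a specific delivery date"],
)
print(prompt)
```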

The exam covers five domains. Applications of Foundation Models carries the highest weighting at 28% of the scored content, followed by Fundamentals of Generative AI at 24% and Fundamentals of AI and ML at 20%, while the remaining domains, Guidelines for Responsible AI and Security, Compliance, and Governance for AI Solutions, each cover 14%.

Is the AWS Certified AI Practitioner AIF-C01 Beta Exam Worth it?

There are many IT professionals, and even students or career shifters, who claim to be knowledgeable about Artificial Intelligence concepts yet cannot build even the most basic Machine Learning solution. Many people know how to use AI tools, especially ChatGPT, but that does not mean they are capable of building applications that actually benefit companies.

This certification can serve as a tool to easily filter out dodgy candidates and job seekers who merely sprinkle “AI,” “Machine Learning,” and other buzzwords across their CVs yet don’t possess the relevant knowledge or skills. The AWS Certified AI Practitioner AIF-C01 exam also provides an entry point for interested career shifters and experienced IT folks alike to truly learn about AI and how to build ML solutions by leveraging the power of cloud computing.

 

Exam Prep Materials for the new AIF-C01 AWS Certification

You can enroll in various free and premium digital courses to fill gaps in your knowledge and skills. We have compiled a short list of recommended courses that you can check out.

Additionally, visit the official AWS Certification page for the AWS Certified AI Practitioner (AIF-C01). This page provides the most up-to-date information, including the link to schedule your AIF-C01 exam and access to the official Exam Guide and Sample Questions.

There are digital courses for Machine Learning available in the Tutorials Dojo portal (in collaboration with AWS): https://portal.tutorialsdojo.com/product-tag/machine-learning/

 

Courses from the AWS Skill Builder site:

  • Standard Exam Prep Plan – Includes only free resources.
  • Enhanced Exam Prep Plan – (Paid) Includes free resources and additional content for AWS Skill Builder subscribers, such as AWS Builder Labs, game-based learning, Official Pretests, and more exam-style questions. Available starting August 13, 2024.

Heads Up! AWS has recently announced that AWS Certification is introducing three new question types: ordering, matching, and case study. These will complement the existing multiple-choice and multiple-response questions, helping to reduce your reading time and assess additional critical concepts.

For more details, read the blog post: AWS Certification: New Exam Question Types.

Who Should Take the AWS Certified AI Practitioner AIF-C01 Exam?

This certification is ideal for individuals familiar with AI/ML technologies on AWS who use but do not necessarily build AI/ML solutions.

  • New to IT and AWS Cloud: Start with foundational cloud courses like AWS Cloud Essentials and AWS Technical Essentials included in the Exam Prep Plans.
  • Certified AWS Professionals: If you already hold the AWS Certified Cloud Practitioner CLF-C01 or any of the four Associate-level AWS certifications, you can start with the AI foundational training included in the exam prep plans above.

How Will the AWS Certified AI Practitioner AIF-C01 Credential Help My Career?

Professionals in various roles, such as sales, marketing, and product management, will benefit significantly from this certification. Building skills through training and validating knowledge through certifications like AWS Certified AI Practitioner can lead to better job performance and career advancement.

According to a November 2023 study conducted by AWS, employers are willing to pay:

  • 43% more for AI-skilled workers in sales and marketing,
  • 42% more for those in finance,
  • 41% more for business operations,
  • 47% more for IT professionals.

 

What AWS Certification Should I Earn Next?

 

Don’t miss this opportunity to advance your career with the AWS Certified AI Practitioner AIF-C01!

 

 


My AWS Certified Data Engineer Associate DEA-C01 Exam Experience 2024

I recently took the actual exam of the AWS Certified Data Engineer – Associate DEA-C01 online. Interestingly, it has some resemblances with the exam content of the AWS Certified Data Analytics Specialty test, albeit not entirely the same in terms of depth. This new Associate-level exam aims to validate the skills and knowledge of IT Professionals in core data-related AWS services, such as the ability to implement data pipelines, perform cost optimization, troubleshoot data workflow issues, and apply data engineering best practices in AWS.

 

AWS Certified Data Engineer – Associate Exam Details

The AWS Certified Data Engineer – Associate exam has an exam code of DEA-C01 and costs 150 USD. It has 65 questions in either multiple-choice or multiple-response format that you should complete within 3 hours, or 180 minutes. You can take the exam at a Pearson VUE testing center or through an online proctored exam. I took the exam using the latter option on a Sunday morning, and my online test went smoothly without any issues on Pearson’s OnVue app. The exam has a passing score of 720 out of 1000 based on a scaled scoring model.

Your test results for the DEA-C01 exam will be available after a few days, and if you pass, the AWS Certified Data Engineer Associate certification will be credited to your account immediately. This new Associate-level exam is somewhat of a replacement for the AWS Certified Data Analytics – Specialty test, which will be decommissioned on April 8, 2024. The AWS Certified Data Engineer Associate test was initially slated to go live in April but was moved a month earlier (March 2024), several weeks before the Data Analytics – Specialty (DAS-C01), Database – Specialty (DBS-C01), and SAP on AWS – Specialty (PAS-C01) exams were completely phased out.

In comparison with the AWS Certified Data Analytics – Specialty exam, this new Data Engineer – Associate certification test only covers the high-level overview of data engineering. Data ingestion, data transformation, data store management, data security, and data governance are in scope, but the DEA-C01 exam does not delve deep into the advanced concepts of Observability, Data Collection, Visualization, and the like. There are also quite a few questions related to Machine Learning. Despite this difference, I would say (in my personal estimation) that these two exams are about 70% to 80% similar to each other because the list of related AWS services between the two is almost the same.


Exam Resources for AWS Certified Data Engineer – Associate DEA-C01 Exam 

Here’s a list of exam resources that I used to prepare for the AWS Certified Data Engineer – Associate exam:

 

DEA-C01 AWS Certified Data Engineer – Associate Exam Topics

Most of the questions that I encountered are mentioned in the official Exam Guide; the DEA-C01 exam topics mostly revolve around AWS Glue, Amazon Athena, and AWS Lake Formation, with a little bit of Amazon SageMaker. Here’s the list of relevant AWS services that you should focus on:

Compute:

  • AWS Batch
  • Amazon EC2
  • AWS Lambda
  • AWS Serverless Application Model (AWS SAM)

Analytics:

  • Amazon Athena
  • Amazon EMR
  • AWS Glue
  • AWS Glue DataBrew
  • AWS Lake Formation
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Data Streams
  • Amazon Managed Streaming for Apache Kafka (Amazon MSK)
  • Amazon OpenSearch Service
  • Amazon QuickSight

 

Application Integration:

  • Amazon AppFlow
  • Amazon EventBridge
  • Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
  • Amazon Simple Notification Service (Amazon SNS)
  • Amazon Simple Queue Service (Amazon SQS)
  • AWS Step Functions

Cloud Financial Management:

  • AWS Budgets
  • AWS Cost Explorer

Containers:

  • Amazon Elastic Container Registry (Amazon ECR)
  • Amazon Elastic Container Service (Amazon ECS)
  • Amazon Elastic Kubernetes Service (Amazon EKS)

Database:

  • Amazon DocumentDB (with MongoDB compatibility)
  • Amazon DynamoDB
  • Amazon Keyspaces (for Apache Cassandra)
  • Amazon MemoryDB for Redis
  • Amazon Neptune
  • Amazon RDS
  • Amazon Redshift

 

Developer Tools:

  • AWS CLI
  • AWS Cloud9
  • AWS Cloud Development Kit (AWS CDK)
  • AWS CodeBuild
  • AWS CodeCommit
  • AWS CodeDeploy
  • AWS CodePipeline

 

Frontend Web and Mobile:

  • Amazon API Gateway

Machine Learning:

  • Amazon SageMaker

 

DEA-C01 Exam Domains 

The official exam guide of the AWS Certified Data Engineer Associate test includes the exam weightings, content domains, and task statements for this certification. Each exam domain has corresponding task statements that cover the relevant knowledge and skills. However, it is not an exhaustive list, and you should use it only as a general guide.


The exam has the following content domains and weightings:

  • Domain 1: Data Ingestion and Transformation (34% of scored content)
  • Domain 2: Data Store Management (26% of scored content)
  • Domain 3: Data Operations and Support (22% of scored content)
  • Domain 4: Data Security and Governance (18% of scored content)

 

AWS Services to Focus On for the DEA-C01 Exam

The AWS Data Engineer Associate exam covers a wide range of concepts and AWS services, which is why you have to know the specific items to focus on so you won’t waste time studying things that are unlikely to be asked in the actual exam. Spend more time learning the following AWS services:

  • Amazon Athena
  • Amazon Redshift
  • Amazon QuickSight
  • Amazon EMR (Amazon Elastic MapReduce)
  • AWS Lake Formation
  • Amazon EventBridge
  • AWS Glue
    • AWS Glue DataBrew
    • All AWS Glue features
  • Amazon Kinesis
    • Amazon Kinesis Data Firehose
    • Amazon Kinesis Data Streams
  • Amazon Managed Service for Apache Flink
  • Amazon Managed Streaming for Apache Kafka (Amazon MSK)
  • Amazon OpenSearch Service

 

Exam Tips for the AWS Certified Data Engineer – Associate DEA-C01 Beta Exam

If you are fairly new to AWS and data engineering, you might find the DEA-C01 exam topics challenging, as the exam mixes open-source Apache technologies with AWS services. The Apache technologies enumerated in the official exam guide, which you can also expect to see in the actual exam, are all related to data engineering: Apache Flink, Apache Kafka, Apache Ranger, Apache Pig, Apache Hive, Apache Airflow, Apache Spark, Apache Cassandra, et cetera. Different data formats are also included, such as Apache ORC (Optimized Row Columnar), Apache Parquet, and Apache Avro. Therefore, you have to brush up on all of the relevant open-source programs, data formats, and tools in the area of data engineering.
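If these columnar formats are new to you, a quick local experiment helps; the minimal sketch below (hypothetical file name and columns) converts a CSV extract into Parquet using pandas and pyarrow:

```python
# Minimal sketch: converting a CSV extract into Parquet, one of the columnar
# formats mentioned above. File names and columns are hypothetical.
import pandas as pd

df = pd.read_csv("sales_2024.csv")          # hypothetical source file
df["order_date"] = pd.to_datetime(df["order_date"])

# Columnar formats like Parquet compress well and speed up analytical scans.
df.to_parquet("sales_2024.parquet", engine="pyarrow", compression="snappy")
```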

You should also allocate more time to learning and doing hands-on labs for the relevant AWS services, such as Amazon Athena, Amazon Redshift, Amazon QuickSight, Amazon EMR (Amazon Elastic MapReduce), AWS Lake Formation, Amazon EventBridge, AWS Glue, AWS Glue DataBrew, Amazon Kinesis Data Firehose, Amazon Kinesis Data Streams, Amazon Managed Service for Apache Flink, Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon OpenSearch Service, and others.
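Even small scripts make the hands-on portion stick; for example, this minimal boto3 sketch (hypothetical database, table, and results bucket) runs an ad-hoc Amazon Athena query:

```python
# Minimal sketch: running an ad-hoc Athena query with boto3. The database,
# table, and S3 output location are hypothetical placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print("QueryExecutionId:", response["QueryExecutionId"])
```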

It’s also recommended to read the entire exam guide from cover to cover and take note of the relevant data engineering concepts and other AWS services that you have to focus on. You can check out this AWS Certified Data Engineer DEA-C01 Study Guide for more information, as well as our DEA-C01 practice exams reviewer, which comes with detailed explanations, flashcards, and several test modes (Timed/Review/Section-based).

 



3 Ways to Fast Track your Cloud Career Journey this 2024

 

I usually have a countdown to the end of the year (starting as early as the first quarter) that acts as a timebox for me to track my goal progression, or to catch up on old aspirations before the earth completes yet another revolution around the sun. We may not accomplish 100% of our targets, but if we keep a positive disposition in life and persistently work on our objectives every day, chances are we’ll eventually arrive at 80% or 90% of our desired outcome. That’s a whole lot better than having 0% or no progress at all!

In this edition, we would like to share three ways to supercharge your career and earning potential and make 2024 a year chock-full of personal achievements! You have to be constantly inspired, be in the know, and be of service to fast-track your tech career.

Be Inspired

“Instruction does much, but encouragement everything,” says Johann Wolfgang von Goethe, widely regarded as the most influential writer in German literature. Change always starts from within, and you must be continuously encouraged and inspired to help you take the leap of faith.

Inspiration can come in many forms, such as a simple conversation, a noteworthy story, a memorable movie, a personal experience, or an inspiring book. You can read the stories of remarkable individuals in the technology space and learn from their experiences, which you can then apply to your own career journey. You can check out the recently released Cloud Career Journeys book, which is a compilation of personal stories of notable cloud influencers in the industry.
 
I shared my own exciting adventure in this book too – from my humble beginnings in the Philippines, to my career in tech working in Singapore and Australia, to how I started and scaled up Tutorials Dojo to what it is today. The book is authored by Ashish Prajapati and Prasad Rao, who both work at AWS. Following these exceptional individuals on LinkedIn can also help you get your daily dose of inspiration.

 

Be In the Know

One of the ways to supercharge your career and earning potential is to always be on the lookout for trends in the job market. Check the jobs posted on the popular job sites in your country and take note of the trending skills and in-demand technologies sought after by employers.
You can also read the collection of high-quality articles on the Tutorials Dojo Blog to keep up to date with the latest releases in AWS and beyond. Check out our new articles here:
 
With over 100 FREE digital courses, you can also expand and upgrade your skills on our learning portal at absolutely no cost.
 
You can learn Machine Learning, Advanced Networking, Security, Data Analytics, Database Migration, and so much more at the Tutorials Dojo Portal.

 

Be of Service

Being cognizant of your current and prospective employer’s specific must-haves can future-proof your career, even more so if you can provide the much-needed service they require. The critical thing to note here is that you can only meet the needs of your company and its end customers if you possess the knowledge and skills necessary to complete the task.
 
You have to be highly knowledgeable and capable in certain technologies to provide excellent service to your company by lowering its recurring monthly operating costs, scaling up its systems, and launching operationally performant solutions. This is where your cloud training, hands-on labs, and certifications come into play.
 
You will be able to cater to the needs of your current company if you have undergone proper cloud training. Many people have gained new skills and expertise by studying for and attaining several industry certifications. Such a credential validates that you are indeed an expert in your chosen field, and it can be earned by studying the recommended materials and doing hands-on labs using your own account or via Tutorials Dojo’s PlayCloud Sandbox/Guided Labs.
 
Having extensive experience in the IT industry is great, but it does have its limitations. As technology advances at breakneck speed, the skills we have right now are at risk of being replaced by Artificial Intelligence. You need to upskill, especially if you have stayed in the same company over the years working on obsolete tools with a limited tech stack.
 
That’s a wrap! May you always be inspired, be in the know, and be of service in order for you to fast-track your cloud career journey this 2024!
 

Don’t give up on your dreams! You’ll eventually reap both the professional and financial rewards sooner than you thought!

 


My AWS Certified Data Engineer Associate DEA-C01 BETA Exam Experience

I recently took the beta exam of the AWS Certified Data Engineer – Associate DEA-C01 online, and from the get-go, I can see its resemblance to the AWS Certified Data Analytics Specialty test, albeit not entirely the same in terms of depth. This new Associate-level exam aims to validate the skills and knowledge of IT Professionals in core data-related AWS services, such as the ability to implement data pipelines, perform cost optimization, troubleshoot data workflow issues, and apply data engineering best practices in AWS.

 

AWS Certified Data Engineer – Associate BETA Exam Details

The beta exams in the AWS Certification program are primarily used to test the initial set of “exam items,” or exam questions, in terms of performance before they are used in the official exam release. Take note that a beta exam may or may not be offered when an official AWS exam is updated; for instance, the new iteration of the AWS Certified Cloud Practitioner test, from CLF-C01 to CLF-C02 in September 2023, had no beta offering. If you successfully pass the beta exam, you will be among the first to hold the new certification. Beta exams are usually 50% cheaper as well, although the number of questions is a bit higher than in the upcoming live version.


The AWS Certified Data Engineer – Associate BETA exam has an exam code of DEA-C01 and costs 75 USD. It has 85 questions in either multiple-choice or multiple-response format that you should complete within 3 hours, or 180 minutes. You can take the beta exam at a Pearson VUE testing center or through an online proctored exam. I took the exam using the latter option on a Sunday morning, and my online test went smoothly without any issues on Pearson’s OnVue app. The beta has a passing score of 720, exactly the same as the upcoming live version. Keep in mind that the BETA exam will only run from November 27 to January 12 of next year, so make sure that you book your exam as early as you can to avoid missing out on available slots.

Your test results for the DEA-C01 beta exam will be available after 90 days, and if you pass, the AWS Certified Data Engineer Associate certification will be credited to your account immediately. This new Associate-level exam is somewhat of a replacement for the AWS Certified Data Analytics – Specialty test, which will be decommissioned on April 8, 2024. It is expected that the AWS Certified Data Engineer Associate test will go live in April, right after the Data Analytics – Specialty DAS-C01 exam has been completely phased out.

In comparison with the AWS Certified Data Analytics – Specialty exam, this new Data Engineer – Associate certification test only covers a high-level overview of data engineering. Data ingestion, data transformation, data store management, data security, and data governance are in scope, but the DEA-C01 exam does not delve deep into the advanced concepts of Observability, Data Collection, Visualization, and the like. There are also quite a few questions related to Machine Learning. Despite this difference, I would say (in my personal estimation) that these two exams are about 70% to 80% similar to each other because the list of related AWS services between the two is almost the same.

 

Exam Resources for AWS Certified Data Engineer – Associate DEA-C01 Exam 

Here’s a list of exam resources that I used to prepare for the BETA version of the AWS Certified Data Engineer – Associate exam:

 

DEA-C01 AWS Certified Data Engineer – Associate Exam Topics

Most of the questions that I encountered are mentioned in the official Exam Guide; the DEA-C01 exam topics mostly revolve around AWS Glue, Amazon Athena, and AWS Lake Formation, with a little bit of Amazon SageMaker. Here’s the list of relevant AWS services that you should focus on:

Compute:

  • AWS Batch
  • Amazon EC2
  • AWS Lambda
  • AWS Serverless Application Model (AWS SAM)

Analytics:

  • Amazon Athena
  • Amazon EMR
  • AWS Glue
  • AWS Glue DataBrew
  • AWS Lake Formation
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Data Streams
  • Amazon Managed Streaming for Apache Kafka (Amazon MSK)
  • Amazon OpenSearch Service
  • Amazon QuickSight

 

Application Integration:

  • Amazon AppFlow
  • Amazon EventBridge
  • Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
  • Amazon Simple Notification Service (Amazon SNS)
  • Amazon Simple Queue Service (Amazon SQS)
  • AWS Step Functions

Cloud Financial Management:

  • AWS Budgets
  • AWS Cost Explorer

Containers:

  • Amazon Elastic Container Registry (Amazon ECR)
  • Amazon Elastic Container Service (Amazon ECS)
  • Amazon Elastic Kubernetes Service (Amazon EKS)

Database:

  • Amazon DocumentDB (with MongoDB compatibility)
  • Amazon DynamoDB
  • Amazon Keyspaces (for Apache Cassandra)
  • Amazon MemoryDB for Redis
  • Amazon Neptune
  • Amazon RDS
  • Amazon Redshift

 

Developer Tools:

  • AWS CLI
  • AWS Cloud9
  • AWS Cloud Development Kit (AWS CDK)
  • AWS CodeBuild
  • AWS CodeCommit
  • AWS CodeDeploy
  • AWS CodePipeline

 

Frontend Web and Mobile:

  • Amazon API Gateway

Machine Learning:

  • Amazon SageMaker

 

DEA-C01 Exam Domains 

The official exam guide of the AWS Certified Data Engineer Associate test includes the exam weightings, content domains, and task statements for this certification. Each exam domain has corresponding task statements that cover the relevant knowledge and skills. However, it is not an exhaustive list, and you should use it only as a general guide.


The exam has the following content domains and weightings:

  • Domain 1: Data Ingestion and Transformation (34% of scored content)
  • Domain 2: Data Store Management (26% of scored content)
  • Domain 3: Data Operations and Support (22% of scored content)
  • Domain 4: Data Security and Governance (18% of scored content)

 

AWS Services to Focus On for the DEA-C01 Exam

The AWS Data Engineer Associate exam covers a wide range of concepts and AWS services, which is why you have to know the specific items to focus on so you won’t waste time studying things that are unlikely to be asked in the actual exam. Spend more time learning the following AWS services:

  • Amazon Athena
  • Amazon Redshift
  • Amazon QuickSight
  • Amazon EMR (Amazon Elastic MapReduce)
  • AWS Lake Formation
  • Amazon EventBridge
  • AWS Glue
    • AWS Glue DataBrew
    • All AWS Glue features
  • Amazon Kinesis
    • Amazon Kinesis Data Firehose
    • Amazon Kinesis Data Streams
  • Amazon Managed Service for Apache Flink
  • Amazon Managed Streaming for Apache Kafka (Amazon MSK)
  • Amazon OpenSearch Service

 

Exam Tips for the AWS Certified Data Engineer – Associate DEA-C01 Beta Exam

If you are fairly new to AWS and data engineering, you might find the DEA-C01 exam topics challenging, as the exam mixes open-source Apache technologies with AWS services. The Apache technologies enumerated in the official exam guide, which you can also expect to see in the actual exam, are all related to data engineering: Apache Flink, Apache Kafka, Apache Ranger, Apache Pig, Apache Hive, Apache Airflow, Apache Spark, Apache Cassandra, et cetera. Different data formats are also included, such as Apache ORC (Optimized Row Columnar), Apache Parquet, and Apache Avro. Therefore, you have to brush up on all of the relevant open-source programs, data formats, and tools in the area of data engineering.
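A practical way to internalize these tools and formats is to write a small Apache Spark job yourself; the PySpark sketch below (hypothetical bucket names, columns, and partition key) converts raw CSV data into partitioned Parquet:

```python
# Minimal PySpark sketch: reading a CSV extract and writing it back out as
# partitioned Parquet. Paths, columns, and the partition key are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

df = spark.read.option("header", True).csv("s3://my-raw-bucket/sales/")
df = df.withColumn("year", F.year(F.to_date("order_date")))

# Partitioned, columnar output is a common target layout for Athena/Glue queries.
df.write.mode("overwrite").partitionBy("year").parquet("s3://my-curated-bucket/sales/")
```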

You should also allocate more time to learning and doing hands-on labs for the relevant AWS services, such as Amazon Athena, Amazon Redshift, Amazon QuickSight, Amazon EMR (Amazon Elastic MapReduce), AWS Lake Formation, Amazon EventBridge, AWS Glue, AWS Glue DataBrew, Amazon Kinesis Data Firehose, Amazon Kinesis Data Streams, Amazon Managed Service for Apache Flink, Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon OpenSearch Service, and others.
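As one small hands-on example, this hedged boto3 sketch (hypothetical stream name and payload) writes a record to an Amazon Kinesis data stream:

```python
# Minimal sketch: pushing a record into a Kinesis data stream with boto3.
# The stream name and payload are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"user_id": "u-123", "action": "checkout", "amount": 42.50}
kinesis.put_record(
    StreamName="clickstream-events",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)
```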

It’s also recommended to read the entire exam guide from cover to cover and take note of the relevant data engineering concepts and other AWS services that you have to focus on. You can check out this AWS Certified Data Engineer DEA-C01 Study Guide for more information.


AWS Certified Cloud Practitioner Exam Guide Study Path CLF-C02

 

The AWS Certified Cloud Practitioner CLF-C02 exam, or AWS CCP, is the easiest to achieve among all the AWS certification exams. This certification covers most, if not all, of the fundamental knowledge that one should have when venturing into the Cloud. The AWS Cloud Practitioner course intends to provide practitioners with a fundamental understanding of the AWS Cloud without having to dive deep into the technicalities. This includes the AWS Global Infrastructure, best practices in using the AWS Cloud, pricing models, technical support options, and many more. You can view the complete details and guidelines for the certification exam here.

What to Review For the CLF-C02 AWS Cloud Practitioner Exam?

1.  The AWS Cloud Services

Currently, AWS offers more than 200 services and products to its customers, and the list grows longer every year. You don’t have to memorize every single service and function to pass the exam (although that would be amazing if you did!). What’s important is that you familiarize yourself with the more commonly used services, such as those under compute, storage, databases, security, networking and content delivery, management and governance, and a few others. Aside from questions on the different services, questions about Regions and Availability Zones commonly pop up in the exam as well.

2. Best Practices when Architecting for the Cloud

This section is highly important and might comprise the bulk of your AWS Certified Cloud Practitioner CLF-C02 exam. Focus on reading the content of the AWS Well-Architected Framework. The best practices are essentially the ways you can take advantage of the AWS Cloud’s strengths. You can visit this site to gather more information and view additional content for your review of this section.

3. Security in the Cloud

Security in the AWS Cloud is another major part of your AWS Cloud Practitioner CLF-C02 Exam. AWS has defined the security controls that they manage and the security controls that you manage through the Shared Responsibility Model below.


4. AWS Pricing Model

One of the advantages of using the Cloud is on-demand capacity provisioning. Therefore, it is also crucial for you to understand the AWS pricing model. AWS charges you in multiple ways, and there is no single model that applies to every service, since different AWS services have their own cost plans. However, AWS has three fundamental drivers of cost that usually apply to any kind of service (a rough illustration of how they combine follows the list below). They are:

  1. Compute cost
  2. Storage cost
  3. Outbound data transfer cost
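To see how these three drivers combine, here is a rough, back-of-the-envelope sketch in Python; every unit price and quantity below is a made-up placeholder, so use the AWS Pricing Calculator for real numbers:

```python
# Rough, illustrative monthly estimate combining the three cost drivers above.
# All unit prices and quantities are hypothetical placeholders, not actual AWS rates.
HOURS_PER_MONTH = 730

compute_cost = 2 * 0.05 * HOURS_PER_MONTH      # 2 instances x $/hour x hours
storage_cost = 500 * 0.023                      # 500 GB x $/GB-month
data_transfer_cost = 200 * 0.09                 # 200 GB out x $/GB

total = compute_cost + storage_cost + data_transfer_cost
print(f"Estimated monthly cost: ${total:,.2f}")
```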

5. AWS Support Plans

AWS offers five support plans: Basic, Developer, Business, Enterprise On-Ramp, and Enterprise. It is important to know how each support plan differs from the others.

What AWS services are included in the CLF-C02 Exam?

The official Exam Guide for the AWS Certified Cloud Practitioner CLF-C02 exam doesn’t just share the list of exam domains and a detailed description of each domain; it also lists the relevant tools, technologies, and concepts covered in the CLF-C02 exam. The following is a non-exhaustive list of AWS services and features that appear on the Cloud Practitioner exam, based on the information provided in the exam guide. Take note that this list could change at any time, but the information is still quite helpful in determining the relevant AWS services that you should focus on:

Analytics:

Application Integration:

Business Productivity:

Compute:

Containers:

Cost Management:

  • AWS Billing Conductor
  • AWS Budgets
  • AWS Cost and Usage Report
  • AWS Cost Explorer
  • AWS Marketplace

Customer Engagement:

  • AWS Activate for Startups
  • AWS IQ
  • AWS Managed Services (AMS)
  • AWS Support

Database:

Developer Tools:

Frontend Web and Mobile:

Storage:

Internet of Things (IoT):

Machine Learning:

Management and Governance:

Migration and Transfer:

  • AWS Application Discovery Service
  • AWS Application Migration Service
  • AWS Database Migration Service (AWS DMS)
  • AWS Migration Hub
  • AWS Schema Conversion Tool (AWS SCT)
  • AWS Snow Family
  • AWS Transfer Family

Networking and Content Delivery:

  • Amazon API Gateway
  • Amazon CloudFront
  • AWS Direct Connect
  • AWS Global Accelerator
  • Amazon Route 53
  • Amazon VPC
  • AWS VPN

Security, Identity, and Compliance:

Serverless:

End-User Computing:

  • Amazon AppStream 2.0
  • Amazon WorkSpaces
  • Amazon WorkSpaces Web

Review Process for the CLF-C02 AWS Cloud Practitioner Exam

As with any exam, the very first step is always the same – KNOWING WHAT TO STUDY. Although we have already enumerated the topics in the previous section, I highly suggest you go over the AWS Certified Cloud Practitioner (CLF-C02) Exam Guide again and review the exam content.

AWS already has a vast number of free CLF-C02 resources available for you to prepare for the exam. With that being said, here is a suggested step-by-step review process to help you pass and even ace your AWS Certified Cloud Practitioner CLF-C02 exam.

Step 1 – Read the Overview of Amazon Web Services Whitepaper

I suggest going through the Overview of Amazon Web Services whitepaper to gain a good understanding of the different AWS concepts and services. Again, you don’t need to memorize every single AWS service and function there. Rather, focus on the services that are more commonly used by the industry.

» Tip: If you want a more concise version of the whitepaper, you can check out the Tutorials Dojo AWS cheat sheets. These cheat sheets provide an easy-to-read summary of the most important concepts about each service.

Step 2 – Study AWS Pricing

Next, I recommend studying AWS pricing. The AWS Certified Cloud Practitioner (CLF-C02) exam frequently throws out tricky questions about pricing, TCO, and cost optimization. Be extra careful in answering questions that ask for the most cost-effective solution. Always prioritize utility over price, since a choice might be the cheapest solution yet still be inappropriate for the scenario’s needs.

Aside from on-demand capacity provisioning, AWS also offers you multiple ways to lower your total cost, such as the option to reserve capacity or create a savings plan.

The purpose of studying cost and pricing models is to help you optimize your costs in AWS. AWS provides a great tool to calculate expected monthly costs, known as the AWS Pricing Calculator. Note that the AWS Cloud Practitioner CLF-C02 exam frequently asks for scenarios where you’d have to optimize your costs.

Step 3 – Study the Shared Responsibility Model

The Best Practices for Security, Identity, & Compliance webpage discusses what you’ll need to know for AWS security. Also, familiarize yourself with the Shared Responsibility Model, which frequently comes up in the AWS Certified Cloud Practitioner (CLF-C02) exam. With security, you should know the following (a small IAM example follows the list):

  • Protect your data in and going out of AWS. Different services (EBS, S3, EC2, RDS, etc.) have different encryption methods and protocols.
  • Network-level security and subnet-level security. There are many ways you can secure your VPC and the services inside it, such as NACLs and security groups.
  • Be comfortable with IAM. Focus on the concepts of IAM users, groups, policies, and roles.
  • Understand AWS monitoring and logging features such as CloudWatch, CloudWatch Logs, VPC Flow Logs, and CloudTrail.
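To tie the IAM concepts above to something concrete, here is a hedged boto3 sketch that creates a narrowly scoped, read-only policy; the policy name, bucket, and prefix are hypothetical placeholders:

```python
# Minimal sketch: creating a narrowly scoped IAM policy with boto3.
# The policy name, bucket, and prefix are hypothetical placeholders.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/reports/*",
        }
    ],
}

response = iam.create_policy(
    PolicyName="ReadOnlyReportsAccess",
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])
```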

Step 4 – Read the AWS Well-Architected Framework

The last whitepaper you need to review is the AWS Well-Architected Framework whitepaper. It is very important to understand what the best practices are since scenario questions in the exam always revolve around these topics. Once logged in, you can open up your AWS Management Console to help you visualize what is being discussed in this paper.

Step 5 – Study the AWS Support Plans

For the AWS Support Plans, this webpage will serve as your primary study material. It is a quick browse and shouldn’t take you long to study. Take note of which support plans are available and how they differ from each other. There might be questions in the exam that ask which support plan offers a specific service, and you might miss the subtle details if you don’t read each support plan properly, so be sure to take note of them.

In tandem with learning the AWS Support Plans is studying AWS Trusted Advisor. AWS Trusted Advisor is a tool that offers best practice checks and recommendations across these categories: cost optimization, security, fault tolerance, performance, and service limits. 

Step 6 – Enroll in a Video Course

To prepare for this exam, you can take our AWS Certified Cloud Practitioner Video Course. This will assist you in gaining the necessary AWS Cloud knowledge by covering the different AWS Cloud concepts, AWS services, security, architecture, pricing, and support. 

 

Common Exam Scenarios for the CLF-C02 AWS Cloud Practitioner Exam

The list below pairs common exam scenarios with their corresponding answers, grouped by exam domain.

CLF-C02 Exam Domain 1: Cloud Concepts

  • A key financial benefit of migrating systems hosted on your on-premises data center to AWS → Replaces upfront capital expenses (CAPEX) with low, variable operational expenses (OPEX); reduces the Total Cost of Ownership (TCO)
  • Four cloud architecture design principles in AWS → Design for failure; decouple your components; implement elasticity; think parallel
  • A Global Infrastructure component made up of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities → Availability Zones
  • A cloud best practice that reinforces the use of the Service-Oriented Architecture (SOA) design principle → Decouple your components
  • You need to enable your Amazon EC2 instances in the public subnet to connect to the public Internet → Internet Gateway
  • You can use it to resolve the connection between your on-premises VPN and your Amazon VPC → Virtual Private Gateway; Amazon Route 53

CLF-C02 Exam Domain 2: Security and Compliance

  • It provides the event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services → AWS CloudTrail
  • A company needs to download the compliance-related documents in AWS, such as Service Organization Controls (SOC) reports → AWS Artifact
  • Improve the security of IAM users → Enable Multi-Factor Authentication (MFA); configure a strong password policy
  • An IAM identity that uses access keys to manage cloud resources via the AWS CLI → IAM User
  • Grant temporary access to your AWS resources → IAM Role
  • Apply and easily manage the common access permissions to a large number of IAM users in AWS → IAM Group
  • Grant the required permissions to access your Amazon S3 resources → Bucket Policy; User Policy
  • It scales up to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0 → Amazon Cognito
  • A startup needs to evaluate the newly created IAM policies → IAM Policy Simulator
  • A service that discovers, classifies, and protects sensitive data such as personally identifiable information (PII) or intellectual property → Amazon Macie
  • A threat detection service that continuously monitors for malicious activity to protect your AWS account → Amazon GuardDuty
  • Prevent unauthorized deletion of Amazon S3 objects → Enable Multi-Factor Authentication (MFA)
  • A company needs to control the traffic going in and out of its VPC subnets → Network Access Control List (NACL)
  • What acts as a virtual firewall in AWS that controls the traffic at the EC2 instance level? → Security Group
  • Its responsibility is to patch the host operating system of an Amazon EC2 instance → AWS

CLF-C02 Exam Domain 3: Cloud Technology and Services

  • A customer can assume the responsibility and management of the guest operating system, including updates and security patches → Amazon EC2
  • You need to securely transfer hundreds of petabytes of data and exabyte-scale datasets into and out of the AWS Cloud → AWS Snowmobile
  • A type of EC2 instance that allows you to use your existing server-bound software licenses → Dedicated Host
  • A Developer can use these to interact with their AWS services → AWS Command Line Interface; AWS SDKs
  • A highly available and scalable cloud DNS web service in AWS → Amazon Route 53
  • Store the results of I/O-intensive SQL database queries to improve the application performance → Amazon ElastiCache
  • A combination of AWS services that allows you to serve the static files with the lowest possible latency → Amazon S3 with Amazon CloudFront
  • Automatically scale the capacity of an AWS cloud resource based on the incoming traffic to improve availability and reduce failures → AWS Auto Scaling
  • A company needs to migrate the on-premises MySQL database to Amazon RDS → AWS Database Migration Service (AWS DMS)
  • Automatically transfer your infrequently accessed data in your S3 bucket to a more cost-effective storage class → S3 Lifecycle Policy (see the sketch after this list)
  • You need to upload a single object as a set of parts to improve throughput and have a quicker recovery from any network issues → Use the Multipart Upload API
  • A company needs to establish a dedicated connection between its on-premises network and its AWS VPC → AWS Direct Connect
  • A Machine Learning service that allows you to add visual analysis features to your applications → Amazon Rekognition
  • A source control service that allows you to host Git-based repositories → AWS CodeCommit
  • A service that can trace user requests in your application → AWS X-Ray
  • Inspects your AWS environment and makes recommendations for saving money, improving system performance, or closing security gaps → AWS Trusted Advisor
  • You need to speed up the content delivery of static assets to your customers around the globe → Amazon CloudFront
  • Create and deploy infrastructure-as-code templates → AWS CloudFormation
  • You have to encrypt the log data that is stored and managed by AWS CloudTrail → AWS Key Management Service (AWS KMS)
  • A database service that can be used to store JSON documents → Amazon DynamoDB

CLF-C02 Exam Domain 4: Billing, Pricing, and Support

A designated technical point of contact that will maintain an operationally healthy AWS environment.

Technical Account Manager (TAM)

It allows the customer to view his Reserved Instance usage for the past month.

AWS Billing Console

A startup needs to estimate the costs of moving its application to AWS.

AWS Total Cost of Ownership (TCO) Calculator

Allows you to set coverage targets and receive alerts when your utilization drops below the threshold you define.

AWS Budgets

A type of Reserved Instance that allows you to change its instance family, instance type, platform, scope, or tenancy.

Convertible RI

Take advantage of unused EC2 capacity in the AWS Cloud and provides up to 90% discount.

Spot Instance

You need to centrally manage policies and consolidate billing across multiple AWS accounts.

AWS Organizations

The most cost-efficient storage option for retaining database backups that allows occasional data retrieval in minutes.

Amazon Glacier

Forecast future costs and usage of your AWS resources based on your past consumption.

AWS Cost Explorer

Categorize and track AWS costs on a detailed level.

Cost allocation tags

The lowest support plan that allows an unlimited number of technical support cases to be opened.

Developer Support Plan

The most cost-effective option when you purchase a Reserved Instance for a 1-year term.

All Upfront

You have to combine usage volume discounts of your multiple AWS accounts.

Consolidated Billing

Sell your catalog of custom AMIs in AWS

AWS Marketplace

Final Step: Validate Your AWS Cloud Practitioner CLF-C02 Knowledge

When you are feeling confident with your review, it is best to validate your knowledge through sample exams. Tutorials Dojo offers a very useful and well-reviewed set of practice tests for Cloud Practitioner exam takers here. Each test contains many unique questions that will help you verify whether you have missed anything important that might appear on your exam. You can also pair our practice exams with our AWS Certified Cloud Practitioner CLF-C02 Exam Study Guide and Cheat Sheets eBook.

AWS Cloud Practitioner Practice Exams

 

Sample CLF-C02 Practice Test Questions:

Question 1

Which of the following channels shares a collection of offerings to help you achieve specific business outcomes related to enterprise cloud adoption through paid engagements in several specialty practice areas?

  1. AWS Enterprise Support
  2. Concierge Support
  3. AWS Professional Services
  4. AWS Technical Account Manager

Correct Answer: 3

AWS Professional Services shares a collection of offerings to help you achieve specific outcomes related to enterprise cloud adoption. Each offering delivers a set of activities, best practices, and documentation reflecting our experience supporting hundreds of customers in their journey to the AWS Cloud. AWS Professional Services’ offerings use a unique methodology based on Amazon’s internal best practices to help you complete projects faster and more reliably while accounting for evolving expectations and dynamic team structures along the way.

 AWS Professional Services

AWS Professional Services created the AWS Cloud Adoption Framework (AWS CAF) to help organizations design and travel an accelerated path to successful cloud adoption. The guidance and best practices provided by the framework help you build a comprehensive approach to cloud computing across your organization and throughout your IT lifecycle. Using the AWS CAF helps you realize measurable business benefits from cloud adoption faster and with less risk.

Hence, the correct answer in this scenario is: AWS Professional Services.

AWS Enterprise Support is incorrect because this plan provides 24×7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts.

Concierge Support is incorrect because this is a team composed of AWS billing and account experts that specialize in working with enterprise accounts. They will quickly and efficiently assist you with your billing and account inquiries and work with you to implement billing and account best practices so that you can focus on running your business.

AWS Technical Account Manager is incorrect because this is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.

References:
https://aws.amazon.com/professional-services/
https://aws.amazon.com/professional-services/CAF/

Check out these AWS Overview Cheat Sheets:
https://tutorialsdojo.com/aws-cheat-sheets-overview/

Tutorials Dojo’s AWS Certified Cloud Practitioner Exam Study Guide:
https://tutorialsdojo.com/aws-certified-cloud-practitioner/

Question 2

A company is planning to launch a new system in AWS but they do not have an employee who has AWS-related expertise. Which of the following AWS channels can instead help the company design, architect, build, migrate, and manage their workloads and applications on AWS?

  1. AWS Partner Network Technology Partners
  2. AWS Marketplace
  3. AWS Partner Network Consulting Partners
  4. Technical Account Management

Correct Answer: 3

The AWS Partner Network (APN) is focused on helping partners build successful AWS-based businesses to drive superb customer experiences. This is accomplished by developing a global ecosystem of Partners with specialties unique to each customer’s needs.

There are two types of APN Partners:

    1. APN Consulting Partners

    2. APN Technology Partners

AWS Partner Network (APN) Badges

APN Consulting Partners are professional services firms that help customers of all sizes design, architect, migrate, or build new applications on AWS. Consulting Partners include System Integrators (SIs), Strategic Consultancies, Resellers, Digital Agencies, Managed Service Providers (MSPs), and Value-Added Resellers (VARs).

APN Technology Partners provide software solutions that are either hosted on or integrated with the AWS platform. Technology Partners include Independent Software Vendors (ISVs), SaaS, PaaS, developer tools, management, and security vendors.

Hence, the correct answer in this scenario is APN Consulting Partners. 

APN Technology Partners is incorrect because this only provides software solutions that are either hosted on or integrated with the AWS platform. You should use APN Consulting Partners instead, as this program helps customers to design, architect, migrate, or build new applications on AWS, which is what is needed in the scenario.

AWS Marketplace is incorrect because this just provides a new sales channel for independent software vendors (ISVs) and Consulting Partners to sell their solutions to AWS customers. This makes it easy for customers to find, buy, deploy, and manage software solutions, including SaaS, in a matter of minutes.

Technical Account Management is incorrect because this is just a part of AWS Enterprise Support which provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.  

References:
https://aws.amazon.com/partners/
https://aws.amazon.com/partners/consulting/journey/
https://aws.amazon.com/partners/technology/journey/

Tutorials Dojo’s AWS Certified Cloud Practitioner Exam Study Guide:
https://tutorialsdojo.com/aws-certified-cloud-practitioner/

Click here for more AWS Certified Cloud Practitioner practice exam questions.

Check out our other AWS practice test courses here:

AWS Certification

 

What to Expect on the CLF-C02 AWS Cloud Practitioner Exam?

There are two types of questions on the examination:

  • Multiple-choice: Has one correct response and three incorrect responses (distractors).
  • Multiple-response: Has two or more correct responses out of five or more options.

Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective.

Unanswered questions are scored as incorrect; there is no penalty for guessing. 

The majority of questions are scenario-based. Some will ask you to identify a specific service or concept, while others will ask you to select multiple responses that fit the given requirements. No matter the style of the question, as long as you understand what is being asked, you will do fine.

Your examination may include unscored items that are placed on the test by AWS to gather statistical information. These items are not identified on the form and do not affect your score.

The AWS Certified Cloud Practitioner (CLF-C02) examination is a pass or fail exam. Your results for the examination are reported as a scaled score from 100 through 1000, with a minimum passing score of 700. Right after the exam, you will immediately know whether you passed or failed. In the succeeding business days, you should receive your complete results with the score breakdown (and hopefully the certificate, too).

Final Exam Tips for the CLF-C02 AWS Certified Cloud Practitioner Exam

  1. Be sure to get proper sleep the night before. If you feel that you aren’t ready enough, you can just reschedule your exam.
  2. Come early to the exam venue so that you have time to handle mishaps if there are any.
  3. Read the exam questions properly, but don’t spend too much time on a question you don’t know the answer to. You can always go back to it after you answer the rest. 
  4. Keep your reviewer if you plan on taking other AWS certifications in the future. It will be handy for sure. 
  5. And be sure to visit the Tutorials Dojo website to see our latest AWS reviewers, cheat sheets, and other guides.

Note: The new AWS Certified Cloud Practitioner CLF-C02 exam version will be available starting September 19, 2023. Read this article to learn more.


The post AWS Certified Cloud Practitioner Exam Guide Study Path CLF-C02 appeared first on Tutorials Dojo.

AWS Cloud Adoption Framework – AWS CAF https://tutorialsdojo.com/aws-cloud-adoption-framework-aws-caf/ Thu, 24 Aug 2023 10:36:25 +0000 https://tutorialsdojo.com/?p=23241 Bookmarks What is the Cloud Adoption Framework? The Perspectives of the AWS Cloud Adoption Framework Capabilities of AWS CAF AWS CAF Use Cases Benefits of Using AWS CAF What is the AWS Cloud Adoption Framework? The AWS Cloud Adoption Framework, or AWS [...]

The post AWS Cloud Adoption Framework – AWS CAF appeared first on Tutorials Dojo.


What is the AWS Cloud Adoption Framework?

The AWS Cloud Adoption Framework, or AWS CAF for short, is a framework provided by AWS to assist you in adopting cloud computing for your enterprise infrastructure. It contains various perspectives that are based on years of extensive experience and best practices in AWS, and it can help you accelerate your digital transformation and business outcomes through the innovative use of the AWS Cloud.

AWS CAF zeroes in on specific organizational capabilities that are vital to successful cloud transformations. The capabilities and perspectives of this framework provide best-practice guidance that helps companies improve their overall cloud readiness.

 

What are the different Perspectives of the AWS Cloud Adoption Framework?

The AWS Cloud Adoption Framework groups its many capabilities into six different perspectives, namely:

  • Business
  • People
  • Governance
  • Platform
  • Security
  • Operations

Each of these perspectives consists of a set of capabilities that particular stakeholders own or manage in the company’s cloud transformation journey. These perspectives help you identify and prioritize transformation opportunities, evaluate and improve your company’s cloud readiness, and iteratively evolve your transformation roadmap.

Capabilities of AWS CAF

  • Business: This perspective ensures that your investments in the cloud propel your digital transformation goals and business results.
  • People: This perspective acts as a link between technology and business, speeding up the cloud journey to help organizations quickly evolve into a culture of continuous growth and learning, where change is the norm. It focuses on culture, organizational structure, leadership, and workforce.
  • Governance: This perspective helps coordinate cloud initiatives while maximizing organizational benefits and minimizing risks associated with transformation.
  • Platform: This perspective helps construct an enterprise-grade, scalable, hybrid cloud platform, modernize existing workloads, and implement new cloud-native solutions.
  • Security: This perspective helps achieve the confidentiality, integrity, and availability of data and cloud workloads.
  • Operations: This perspective helps ensure that cloud services are delivered at a level that meets the business needs.

Using AWS CAF, businesses can identify and prioritize transformation opportunities, evaluate and improve cloud readiness, and iteratively evolve their transformation roadmap.

Benefits of Using AWS CAF

  • Risk Reduction: It reduces the risk profile through improved reliability, increased performance, and enhanced security.
  • Improved Environmental, Social, and Governance Performance: It uses insights to improve sustainability and corporate transparency.
  • Revenue Growth: Businesses can create new products and services, reach new customers, and enter new market segments.
  • Increased Operational Efficiency: It reduces operating costs, increases productivity, and improves the employee and customer experience.

These benefits make AWS CAF a valuable tool for organizations looking to adopt cloud practices.

Cloud Transformation Phases in AWS CAF

  1. Envision: This phase involves identifying and prioritizing transformation opportunities that align with strategic objectives. Transformation initiatives are associated with key stakeholders and measurable business outcomes to demonstrate value as the business progresses through the transformation journey.

  2. Align: In this phase, capability gaps and cross-organizational dependencies are identified. This helps in creating strategies for improving cloud readiness, ensuring stakeholder alignment, and facilitating relevant organizational change management activities.

  3. Launch: This phase involves delivering pilots in production and demonstrating incremental business value. Pilots should be highly impactful, and when successful, they influence future direction. Learning from pilots helps businesses adjust their approach before scaling to full production.

  4. Scale: In this phase, pilots and business value are expanded to the desired scale. This ensures that the business benefits associated with cloud investments are realized and sustained.

AWS CAF Use Cases

  • Technology: This involves migrating and modernizing legacy infrastructure, applications, and data and analytics platforms.

  • Process: This involves digitizing, automating, and optimizing business operations. This may include leveraging new data and analytics platforms to create actionable insights or using machine learning (ML) to improve your customer service experience, employee productivity and decision-making, business forecasting, fraud detection and prevention, and industrial operations.

  • Organization: This involves reimagining how business and technology teams create customer value and meet strategic intent. Organizing teams around products and value streams while leveraging agile methods to rapidly iterate and evolve will help businesses become more responsive and customer-centric.

  • Product: This involves reimagining the business model by creating new value propositions and revenue models.

Related AWS Certified Cloud Practitioner CLF-C02 Resources:

Are you preparing for your AWS Certified Cloud Practitioner CLF-C02 Exam?

Get actual AWS Hands-On Labs, a full 65-question timed practice test, flashcards, and more with our highly visual AWS Certified Cloud Practitioner CLF-C02 Video course, all for the price of lunch!

 

References:

https://docs.aws.amazon.com/whitepapers/latest/overview-aws-cloud-adoption-framework/introduction.html

https://aws.amazon.com/cloud-adoption-framework/

 

 


The post AWS Cloud Adoption Framework – AWS CAF appeared first on Tutorials Dojo.

AWS Certified Advanced Networking Specialty ANS-C01 Sample Exam Questions https://tutorialsdojo.com/aws-certified-advanced-networking-specialty-ans-c01-sample-exam-questions/ https://tutorialsdojo.com/aws-certified-advanced-networking-specialty-ans-c01-sample-exam-questions/#respond Wed, 28 Jun 2023 09:58:32 +0000 https://tutorialsdojo.com/?p=22238 Here are 10 AWS Certified Advanced Networking Specialty ANS-C01 practice exam questions to help you gauge your readiness for the actual exam. Question 1 A company is building its customer web portal in multiple EC2 instances behind an Application Load Balancer. The portal must be accessible on www.tutorialsdojo.com as well as on its tutorialsdojo.com [...]

The post AWS Certified Advanced Networking Specialty ANS-C01 Sample Exam Questions appeared first on Tutorials Dojo.


Here are 10 AWS Certified Advanced Networking Specialty ANS-C01 practice exam questions to help you gauge your readiness for the actual exam.

Question 1

A company is building its customer web portal in multiple EC2 instances behind an Application Load Balancer. The portal must be accessible on www.tutorialsdojo.com as well as on its tutorialsdojo.com root domain.

How should the Network Engineer set up Amazon Route 53 to satisfy this requirement?

  1. Set up an Alias A Record for tutorialsdojo.com with the ALB as the target. For the www.tutorialsdojo.com subdomain, create a CNAME record that points to the ALB.
  2. Set up a CNAME Record for tutorialsdojo.com with the ALB as the target. For the www.tutorialsdojo.com subdomain, create a CNAME record that points to the ALB.
  3. Set up a CNAME Record for tutorialsdojo.com with the ALB as the target. For the www.tutorialsdojo.com subdomain, create an Alias A record that points to the ALB.
  4. Set up a non-alias A Record for tutorialsdojo.com with the ALB as the target. For the www.tutorialsdojo.com subdomain, create a CNAME record that points to the ALB.

Correct Answer: 1

Amazon Route 53 alias records provide a Route 53–specific extension to DNS functionality. Alias records let you route traffic to selected AWS resources, such as CloudFront distributions and Amazon S3 buckets. They also let you route traffic from one record in a hosted zone to another record.

Unlike a CNAME record, you can create an alias record at the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name tutorialsdojo.com, the zone apex is tutorialsdojo.com. You can’t create a CNAME record for tutorialsdojo.com, but you can create an alias record for tutorialsdojo.com that routes traffic to www.tutorialsdojo.com.

When Route 53 receives a DNS query for an alias record, Route 53 responds with the applicable value for that resource:

A CloudFront distribution – Route 53 responds with one or more IP addresses for CloudFront edge servers that can serve your content.

An Elastic Beanstalk environment – Route 53 responds with one or more IP addresses for the environment.

An ELB load balancer – Route 53 responds with one or more IP addresses for the load balancer.

An Amazon S3 bucket that is configured as a static website – Route 53 responds with one IP address for the Amazon S3 bucket.

Another Route 53 record in the same hosted zone – Route 53 responds as if the query is for the record that is referenced by the alias record.

Hence, the correct answer is: Set up an Alias A Record for tutorialsdojo.com with the ALB as the target. For the www.tutorialsdojo.com subdomain, create a CNAME record that points to the ALB.

The option that says: Set up a CNAME Record for tutorialsdojo.com with the ALB as the target. For the www.tutorialsdojo.com subdomain, create a CNAME record that points to the ALB is incorrect. Although the configuration for the subdomain is correct, you still can’t create a CNAME record for the root domain or zone apex. You have to set up an Alias A record instead.

The option that says: Set up a CNAME Record for tutorialsdojo.com with the ALB as the target. For the www.tutorialsdojo.com subdomain, create an Alias A record that points to the ALB is incorrect because you can’t create a CNAME record for the root domain or zone apex. The subdomain configuration is technically correct because you can set up an Alias A record for your subdomain in Route 53. The issue here is the use of a CNAME record in the zone apex.

The option that says: Set up a non-alias A Record for tutorialsdojo.com with the ALB as the target. For the www.tutorialsdojo.com subdomain, create a CNAME record that points to the ALB is incorrect because a non-alias A Record can only accept IP addresses and not the DNS name of the ALB.
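
To make the distinction concrete, here is a minimal boto3 sketch (the hosted zone ID, the ALB DNS name, and the ALB’s canonical hosted zone ID are hypothetical placeholders) that creates an Alias A record at the zone apex and a CNAME record for the www subdomain, both pointing to the load balancer:

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z3EXAMPLE"  # hypothetical hosted zone ID for tutorialsdojo.com
ALB_DNS_NAME = "my-alb-123456.us-east-1.elb.amazonaws.com"  # hypothetical ALB DNS name
ALB_CANONICAL_ZONE_ID = "ZEXAMPLEALB"  # region-specific canonical hosted zone ID of the ALB

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {   # Alias A record at the zone apex pointing to the ALB
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "tutorialsdojo.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": ALB_CANONICAL_ZONE_ID,
                        "DNSName": ALB_DNS_NAME,
                        "EvaluateTargetHealth": False,
                    },
                },
            },
            {   # CNAME record for the www subdomain pointing to the ALB
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.tutorialsdojo.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": ALB_DNS_NAME}],
                },
            },
        ]
    },
)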

References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html

Question 2

A company has a hybrid cloud architecture that connects its on-premises Microsoft Active Directory to its Amazon VPC. The company is launching an application that uses Amazon Elastic MapReduce with a fleet of On-Demand EC2 instances. Two AWS Managed Microsoft AD domain controllers as well as the DHCP options set of the VPC have been provisioned and properly configured. The Network Engineer must ensure that the requests destined for the Route 53 private hosted zone are sent to the VPC-provided DNS.

What should the Engineer implement in order to satisfy this requirement?

  1. Set up a new conditional forwarder to the Amazon-provided DNS server.
  2. Configure a seamless EC2 Domain-Join in the AWS Managed Microsoft AD.
  3. Create a new PTR record in the Route 53 private hosted zone that points to the on-premises Microsoft Active Directory.
  4. Set up an Amazon Connect omnichannel connection to ensure that the requests destined for the Route 53 private hosted zone are sent to the VPC-provided DNS.

Correct Answer: 1

AWS Directory Service lets you run Microsoft Active Directory (AD) as a managed service. AWS Directory Service for Microsoft Active Directory, also referred to as AWS Managed Microsoft AD, is powered by Windows Server 2012 R2. When you select and launch this directory type, it is created as a highly available pair of domain controllers connected to your virtual private cloud (VPC). The domain controllers run in different Availability Zones in a region of your choice. Host monitoring and recovery, data replication, snapshots, and software updates are automatically configured and managed for you.

With AWS Managed Microsoft AD, you can run directory-aware workloads in the AWS Cloud, including Microsoft SharePoint and custom .NET and SQL Server-based applications. You can also configure a trust relationship between AWS Managed Microsoft AD in the AWS Cloud and your existing on-premises Microsoft Active Directory, providing users and groups with access to resources in either domain using single sign-on (SSO).

You can follow the steps below to integrate your on-premises Microsoft Active Directory and your AWS resources: 

  1. Connect your on-premises network to the VPC using AWS Direct Connect or a VPN connection, and verify that the new Windows Server instances can resolve the domain’s DNS name.
  2. Promote the new Windows Server instances in your VPC to domain controllers in your Active Directory domain.
  3. Configure your on-premises Active Directory Sites and Services to include sites and subnets that represent the Availability Zones within your VPC, and place the newly promoted domain controllers in their associated sites.
  4. Promote the Windows Server instances in the private subnets to domain controllers in your Active Directory domain.
  5. Ensure that instances can resolve names via AD DNS by statically assigning AD DNS servers on Windows instances or by setting the domain-name-servers field in a new DHCP options set in your VPC to include your AWS-based domain controllers hosting Active Directory DNS.

By default, the Microsoft Active Directory-provided DNS doesn’t automatically forward requests to the VPC-provided DNS. You have to configure a DNS forwarder so that requests destined for the Route 53 private hosted zone are sent to the VPC-provided DNS. You can use the Windows DNS Server Tools feature to configure a DNS forwarder.

Hence, the correct answer is: Set up a new conditional forwarder to the Amazon-provided DNS server.

The option that says: Configure a seamless EC2 Domain-Join in the AWS Managed Microsoft AD is incorrect because the seamless EC2 domain join feature simply joins your EC2 instances to the AWS Managed Microsoft AD domain at launch; it does not forward DNS requests destined for the Route 53 private hosted zone to the VPC-provided DNS.

The option that says: Create a new PTR record in the Route 53 private hosted zone that points to the on-premises Microsoft Active Directory is incorrect because a PTR record in Route 53 simply maps an IP address to the corresponding domain name. It is not capable of connecting your Route 53 private hosted zone to the on-premises Microsoft Active Directory.

The option that says: Set up an Amazon Connect omnichannel connection to ensure that the requests destined for the Route 53 private hosted zone are sent to the VPC-provided DNS is incorrect because Amazon Connect is just an easy-to-use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost. This service is not suitable for integrating your on-premises Active Directory and AWS VPC.

References:
https://aws.amazon.com/blogs/security/how-to-set-up-dns-resolution-between-on-premises-networks-and-aws-using-aws-directory-service-and-microsoft-active-directory/ 
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/launching_instance.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html

Check out this Amazon Route 53 Cheat Sheet:
https://tutorialsdojo.com/amazon-route-53/

Resolve Route 53 Private Hosted Zones from an On-premises Network:
https://tutorialsdojo.com/resolve-route-53-private-hosted-zones-from-an-on-premises-network/

Question 3

A company has a suite of publicly accessible web applications that are hosted in several Amazon EC2 instances. To improve the infrastructure security, the Network Engineer must automate the network configuration analysis of all EC2 instances that regularly checks for ports that are reachable from outside the VPC. This will protect the architecture from malicious activities and port scans by external systems. The solution should also highlight network configurations that allow for potentially malicious access, such as mismanaged security groups, ACLs, IGWs, and other vulnerabilities.

Which of the following should the Engineer use to satisfy this requirement?

  1. Amazon Inspector
  2. Bidirectional Forwarding Detection (BFD)
  3. AWS Security Hub
  4. Amazon Macie

Correct Answer: 1

You can use Amazon Inspector to assess your assessment targets (collections of AWS resources) for potential security issues and vulnerabilities. Amazon Inspector compares the behavior and the security configuration of the assessment targets to selected security rule packages. In the context of Amazon Inspector, a rule is a security check that Amazon Inspector performs during the assessment run.

An Amazon Inspector assessment can use any combination of the following rules packages:

Network assessments:

-Network Reachability

Host assessments:

-Common vulnerabilities and exposures

-Center for Internet Security (CIS) Benchmarks

-Security best practices for Amazon Inspector

The rules in the Network Reachability package analyze your network configurations to find security vulnerabilities of your EC2 instances. The findings that Amazon Inspector generates also provide guidance about restricting access that is not secure. The findings generated by these rules show whether your ports are reachable from the Internet through an Internet gateway (including instances behind Application Load Balancers or Classic Load Balancers), a VPC peering connection, or a VPN through a virtual gateway.

These findings also highlight network configurations that allow for potentially malicious access, such as mismanaged security groups, ACLs, IGWs, and so on. These rules help automate the monitoring of your AWS networks and identify where network access to your EC2 instances might be misconfigured. By including this package in your assessment run, you can implement detailed network security checks without having to install scanners and send packets, which are complex and expensive to maintain, especially across VPC peering connections and VPNs.

Hence, the correct answer is: Amazon Inspector.

Bidirectional Forwarding Detection (BFD) is incorrect because this is just a detection protocol to provide fast forwarding path failure detection times, which allows for a faster routing re-convergence time. This is primarily used in an AWS Direct Connect (DX) connection and not for analyzing your network configurations to find security vulnerabilities in your EC2 instances.

AWS Security Hub is incorrect because it only gives you a comprehensive view of your high-priority security alerts and security posture across your AWS accounts.

Amazon Macie is incorrect because this is simply a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie is primarily used in Amazon S3 to identify and alert you for sensitive data in your S3 buckets, such as personally identifiable information (PII).
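
As a rough illustration only (the tag, names, and the rules package ARN below are hypothetical placeholders), an Amazon Inspector Classic assessment that runs the Network Reachability rules package could be set up along these lines with boto3:

import boto3

inspector = boto3.client("inspector")  # Amazon Inspector Classic API

# Target all EC2 instances carrying a hypothetical tag.
resource_group = inspector.create_resource_group(
    resourceGroupTags=[{"key": "Environment", "value": "production"}]
)

target = inspector.create_assessment_target(
    assessmentTargetName="public-web-servers",
    resourceGroupArn=resource_group["resourceGroupArn"],
)

# Placeholder ARN: look up the Network Reachability rules package ARN for your Region.
NETWORK_REACHABILITY_RULES_ARN = "arn:aws:inspector:us-east-1:123456789012:rulespackage/0-EXAMPLE"

template = inspector.create_assessment_template(
    assessmentTargetArn=target["assessmentTargetArn"],
    assessmentTemplateName="network-reachability-check",
    durationInSeconds=3600,
    rulesPackageArns=[NETWORK_REACHABILITY_RULES_ARN],
)

inspector.start_assessment_run(
    assessmentTemplateArn=template["assessmentTemplateArn"],
    assessmentRunName="network-reachability-run",
)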

References:
https://aws.amazon.com/blogs/security/amazon-inspector-assess-network-exposure-ec2-instances-aws-network-reachability-assessments/
https://docs.aws.amazon.com/inspector/latest/userguide/inspector_network-reachability.html

Check out this Amazon Inspector Cheat Sheet:
https://tutorialsdojo.com/amazon-inspector/

Question 4

The company’s on-premises network has an established AWS Direct Connect connection to its VPC in AWS. A Network Engineer is designing the network infrastructure of a multitier application hosted in an Auto Scaling group of EC2 instances. The application will be accessed by the employees from the on-premises network as well as from the public Internet. The network configuration must automatically update routes in the VPC route table based on the dynamic BGP route advertisements from the on-premises network.

What should the Engineer do to implement this network setup?

  1. Enable route propagation in the route table of the VPC and specify the virtual private gateway as the target.
  2. Set up two different route tables in the VPC. The first route table must have a default route to the Internet Gateway and the second table has a route to the virtual private gateway.
  3. Disable the default route propagation option in the route table of the VPC and add a specific route to the on-premises network. Choose the virtual private gateway as the target. Enable the route propagation option in the customer gateway.
  4. Modify the main route table of the VPC to have two default routes. The first route goes to the public Internet via the Internet Gateway while the second route goes to the on-premises network via the virtual private gateway.

Correct Answer: 1

Route tables determine where network traffic is directed. In your VPC route table, you must add a route for your remote network and specify the virtual private gateway as the target. This enables traffic from your VPC that’s destined for your remote network to route via the virtual private gateway and over one of the VPN tunnels. You can enable route propagation for your route table to automatically propagate your network routes to the table for you.

AWS uses the most specific route in your route table that matches the traffic to determine how to route the traffic (longest prefix match). If your route table has overlapping or matching routes, the following rules apply:

-If propagated routes from a Site-to-Site VPN connection or AWS Direct Connect connection overlap with the local route for your VPC, the local route is most preferred, even if the propagated routes are more specific.

-If propagated routes from a Site-to-Site VPN connection or AWS Direct Connect connection have the same destination CIDR block as other existing static routes (longest prefix match cannot be applied), AWS prioritizes the static routes whose targets are an internet gateway, a virtual private gateway, a network interface, an instance ID, a VPC peering connection, a NAT gateway, a transit gateway, or a gateway VPC endpoint. 

Hence, the correct answer is: Enable route propagation in the route table of the VPC and specify the virtual private gateway as the target.

The option that says: Set up two different route tables in the VPC. The first route table must have a default route to the Internet Gateway and the second table has a route to the virtual private gateway is incorrect because using two route tables is not required in this scenario. You can use a single route table with a specific route to the on-premises network and enable route propagation.

The option that says: Disable the default route propagation option in the route table of the VPC and add a specific route to the on-premises network. Choose the virtual private gateway as the target. Enable the route propagation option in the customer gateway is incorrect. You have to enable route propagation on the VPC route table so that the routes to the on-premises network are automatically propagated into it. You have to enable this in the Amazon VPC route table and not in the customer gateway. Moreover, this option is not enabled by default.

The option that says: Modify the main route table of the VPC to have two default routes. The first route goes to the public Internet via the Internet Gateway while the second route goes to the on-premises network via the virtual private gateway is incorrect because a route table cannot have two default routes. Route propagation should also be enabled in order to satisfy the requirements.
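
For reference, enabling route propagation on a VPC route table for a virtual private gateway is a single API call; the route table and gateway IDs below are hypothetical:

import boto3

ec2 = boto3.client("ec2")

# Propagate the routes learned by the virtual private gateway (via BGP) into the route table.
ec2.enable_vgw_route_propagation(
    GatewayId="vgw-0123456789abcdef0",     # hypothetical virtual private gateway ID
    RouteTableId="rtb-0123456789abcdef0",  # hypothetical VPC route table ID
)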

References:
https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNRoutingTypes.html
https://docs.aws.amazon.com/directconnect/latest/UserGuide/Troubleshooting.html
https://docs.aws.amazon.com/vpn/latest/s2svpn/SetUpVPNConnections.html

Check out these Cheat Sheets: 
https://tutorialsdojo.com/aws-direct-connect/
https://tutorialsdojo.com/amazon-vpc/
https://tutorialsdojo.com/vpc-peering/

Longest Prefix Match: Understanding Advanced Concepts in VPC Peering:
https://tutorialsdojo.com/longest-prefix-match-understanding-advanced-concepts-in-vpc-peering/

Question 5

An enterprise is extending its on-premises data storage systems using AWS. A Network Engineer established an AWS Direct Connect connection with a Public Virtual Interface (VIF) to the on-premises network to allow low latency access to Amazon S3. The Engineer must ensure that the network connection is properly secured.

Which of the following is a valid security concern about this network architecture?

  1. AWS Direct Connect advertises all public prefixes with the well-known NO_EXPORT BGP community tag to help control the scope (regional or global) and route preference of traffic. However, the NO_EXPORT BGP community tag is only supported for private virtual interfaces and transit virtual interfaces.
  2. The prefixes are always advertised to all public AWS Regions so all Direct Connect customers in the same or different region can access your router as long as they also have a Public VIF. You cannot apply BGP community tags on the public prefixes.
  3. It’s not possible to directly access an S3 bucket through a public virtual interface (VIF) using Direct Connect. You must have a pre-configured VPC endpoint for Amazon S3.
  4. Prefixes are advertised to all public AWS Regions (global) by default. The Network Engineer must add a BGP community tag to control the scope and route preference of the traffic on public virtual interfaces.

Correct Answer: 4

AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, and the other to an AWS Direct Connect router. With this connection, you can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing internet service providers in your network path. An AWS Direct Connect location provides access to AWS in the Region with which it is associated. You can use a single connection in a public Region or AWS GovCloud (US) to access public AWS services in all other public Regions.

AWS Direct Connect applies inbound (to your on-premises data center) and outbound (from your AWS Region) routing policies for a public AWS Direct Connect connection. You can also use Border Gateway Protocol (BGP) community tags on advertised Amazon routes and apply BGP community tags on the routes you advertise to Amazon. You can use the NO_EXPORT BGP community tag to help control the scope (Regional or global) and route preference of traffic on public virtual interfaces. If you do not apply any community tags, prefixes are advertised to all public AWS Regions (global) by default.

In Direct Connect, it’s not possible to directly access an S3 bucket through a private virtual interface (VIF) using a Gateway VPC endpoint. Take note that the on-premises traffic can’t traverse the Gateway VPC endpoint. You have to use an Interface VPC endpoint instead if you have a private virtual interface in place.

Hence, the correct answer is: Prefixes are advertised to all public AWS Regions (global) by default. The Network Engineer must add a BGP community tag to control the scope and route preference of the traffic on public virtual interfaces.

The option that says: AWS Direct Connect advertises all public prefixes with the well-known NO_EXPORT BGP community tag to help control the scope (regional or global) and route preference of traffic. However, the NO_EXPORT BGP community tag is only supported for private virtual interfaces and transit virtual interfaces is incorrect because the NO_EXPORT BGP community tag is also supported for public virtual interfaces. 

The option that says: The prefixes are always advertised to all public AWS Regions so all Direct Connect customers in the same, or different, region can access your router as long as they also have a Public VIF. You cannot apply BGP community tags on the public prefixes is incorrect because you can actually use Border Gateway Protocol (BGP) community tags on advertised Amazon routes as well as the routes you advertise to Amazon. You can use the NO_EXPORT BGP community tag to help control the scope (Regional or global) and route preference of traffic on public virtual interfaces. 

The option that says: It’s not possible to directly access an S3 bucket through a public virtual interface (VIF) using Direct Connect. You must have a pre-configured VPC endpoint for Amazon S3 is incorrect because it is actually possible to access an S3 bucket through a public virtual interface. This configuration doesn’t require an Amazon Virtual Private Cloud (Amazon VPC) endpoint for Amazon S3 because the on-premises traffic can’t traverse the Gateway VPC endpoint.

References:
https://aws.amazon.com/premiumsupport/knowledge-center/public-private-interface-dx/
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-direct-connect
https://docs.aws.amazon.com/directconnect/latest/UserGuide/routing-and-bgp.html

Check out this AWS Direct Connect Cheat Sheet: 
https://tutorialsdojo.com/aws-direct-connect/

Tutorials Dojo’s AWS Certified Advanced Networking – Specialty Exam Study Guide:
https://tutorialsdojo.com/aws-certified-advanced-networking-specialty-exam-study-path

Question 6

A Network Administrator is instructed to support high-throughput processing workloads between the company’s on-premises Storage Gateway appliance and AWS Storage Gateway. She must establish a dedicated network connection to reduce the company’s network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

What steps should the Administrator take to properly implement this integration? (Select THREE.)

  1. Create and establish an AWS Direct Connect connection between the on-premises data center and the Storage Gateway public endpoints.
  2. Establish an AWS Managed VPN connection between the on-premises data center and the Storage Gateway private endpoint.
  3. Connect the on-premises Storage Gateway appliance to the AWS Direct Connect router.
  4. Connect the on-premises Storage Gateway appliance to the VPC via a Virtual Public Gateway.
  5. Create a public virtual interface, and configure your on-premises router accordingly.
  6. Set up a private virtual interface and configure your on-premises router accordingly.

Correct Answer: 1,3,5

AWS Direct Connect links your internal network to the AWS Cloud. By using AWS Direct Connect with AWS Storage Gateway, you can create a connection for high-throughput workload needs, providing a dedicated network connection between your on-premises gateway and AWS.

Storage Gateway uses public endpoints. With an AWS Direct Connect connection in place, you can create a public virtual interface to allow traffic to be routed to the Storage Gateway endpoints. The public virtual interface bypasses Internet service providers in your network path. The Storage Gateway service public endpoint can be in the same AWS Region as the AWS Direct Connect location, or it can be in a different AWS Region.

To use AWS Direct Connect with Storage Gateway: 

  1. Create and establish an AWS Direct Connect connection between your on-premises data center and your Storage Gateway endpoint.
  2. Connect your on-premises Storage Gateway appliance to the AWS Direct Connect router.
  3. Create a public virtual interface, and configure your on-premises router accordingly.

Hence, the correct answers are:

– Create and establish an AWS Direct Connect connection between the on-premises data center and the Storage Gateway public endpoints

– Connect the on-premises Storage Gateway appliance to the AWS Direct Connect router.

– Create a public virtual interface, and configure your on-premises router accordingly

The option that says: Establish an AWS Managed VPN connection between the on-premises data center and the Storage Gateway private endpoint is incorrect because a VPN is an Internet-based connection and not a dedicated network connection that can increase bandwidth throughput. In addition, you have to use the public endpoints of AWS Storage Gateway and not VPC (private) endpoints.

The option that says: Connect the on-premises Storage Gateway appliance to the VPC via a Virtual Public Gateway is incorrect because you have to connect the on-premises Storage Gateway appliance to the AWS Direct Connect router or via a Virtual Private Gateway. Take note that there is no Virtual Public Gateway in AWS.

The option that says: Set up a private virtual interface and configure your on-premises router accordingly is incorrect because you have to use a public virtual interface instead.
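
As a minimal sketch of the last step (all identifiers, addresses, and the ASN below are hypothetical placeholders), a public virtual interface on an existing Direct Connect connection could be created with boto3 like this:

import boto3

dx = boto3.client("directconnect")

# Create a public virtual interface on an existing Direct Connect connection.
dx.create_public_virtual_interface(
    connectionId="dxcon-fexample",  # hypothetical Direct Connect connection ID
    newPublicVirtualInterface={
        "virtualInterfaceName": "storage-gateway-public-vif",
        "vlan": 101,
        "asn": 65000,                        # hypothetical on-premises BGP ASN
        "amazonAddress": "203.0.113.1/30",   # hypothetical BGP peer addresses
        "customerAddress": "203.0.113.2/30",
        "addressFamily": "ipv4",
        "routeFilterPrefixes": [{"cidr": "198.51.100.0/24"}],  # public prefixes to advertise
    },
)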

References:
https://docs.aws.amazon.com/storagegateway/latest/userguide/using-dx.html
https://aws.amazon.com/storagegateway/faqs/ 

Check out this AWS Direct Connect and Storage Gateway Cheat Sheet: 
https://tutorialsdojo.com/aws-direct-connect/
https://tutorialsdojo.com/aws-storage-gateway/

Question 7

A multinational company has several public websites whose domains were registered using a third-party DNS registrar. The DNS service used by these websites is from an external service provider and includes the Domain Name System Security Extensions (DNSSEC) feature.

The company needs to transfer the domain registration and the DNS services to Amazon Route 53. The migration should have little to no downtime as the websites are already running production workloads.

Which of the following is the most operationally efficient solution with the LEAST amount of downtime?

  1. Create a new hosted zone and DNS records in Amazon Route 53. Lower the TTL (time to live) setting of the NS (name server) record to 300 and remove the Delegation Signer (DS) record from the parent zone in the current DNS service provider. Lower the TTL setting of the NS record in the Route 53 hosted zone. Once the TTL expires, update the NS records to use Route 53 name servers and monitor the traffic. Increase the TTL to 172800 seconds for the NS record once the migration is complete. Transfer domain registration to Amazon Route 53 and re-enable DNSSEC signing.
  2. Create a new hosted zone and DNS records in Amazon Route 53 with the DNSSEC feature enabled. Lower the TTL (time to live) setting of the NS (name server) record to 300. Increase the TTL setting of the NS record in the Route 53 hosted zone. Once the TTL expires, update the NS records to use Route 53 name servers and monitor the traffic. Lower the TTL to 600 seconds for the NS record once the migration is complete. Transfer domain registration to Amazon Route 53 and re-enable DNSSEC signing.
  3. Create a new hosted zone and DNS records in Amazon Route 53. Lower the TTL (time to live) setting of the NS (name server) record to 300 and remove the Delegation Signer (DS) record from the parent zone in the current DNS service provider. Transfer domain registration to Amazon Route 53, where the DNSSEC will be automatically re-enabled. Lower the TTL setting of the NS record in the Route 53 hosted zone. Once the TTL expires, update the NS records to use Route 53 name servers and monitor the traffic. Set the TTL to 600 seconds for the NS record once the migration is complete.
  4. Create a new hosted zone and DNS records in Amazon Route 53. Remove the Delegation Signer (DS) record from the parent zone in the current DNS service provider. Transfer domain registration to Amazon Route 53, where the DNSSEC will be automatically re-enabled. Update the NS records to use Route 53 name servers and monitor the traffic. Configure the TTL to 172800 seconds for the NS record once the migration is complete.

Correct Answer: 1

If you’re transferring one or more domain registrations to Route 53, and you’re currently using a domain registrar that doesn’t provide paid DNS service, you need to migrate the DNS service before you migrate the domain. Otherwise, the registrar will stop providing DNS service when you transfer your domains, and the associated websites and web applications will become unavailable on the internet. (You can also migrate the DNS service from the current registrar to another DNS service provider. We don’t require you to use Route 53 as the DNS service provider for domains that are registered with Route 53.)

The process depends on whether you’re currently using the domain:

-If the domain is currently getting traffic—for example, if your users are using the domain name to browse a website or access a web application.

-If the domain isn’t getting any traffic (or is getting very little traffic)

For both options, your domain should remain available during the entire migration process. However, in the unlikely event that there are issues, the first option lets you roll back the migration quickly. With the second option, your domain could be unavailable for a few days.

If you want to migrate DNS service to Amazon Route 53 for a domain that is currently getting traffic—for example, if your users are using the domain name to browse to a website or access a web application — perform the procedures below:

-Step 1: Get your current DNS configuration from the current DNS service provider (optional but recommended)

-Step 2: Create a hosted zone

-Step 3: Create records

-Step 4: Lower TTL settings

-Step 5: (If you have DNSSEC configured) Remove the DS record from the parent zone

-Step 6: Wait for the old TTL to expire

-Step 7: Update the NS records to use Route 53 name servers

-Step 8: Monitor traffic for the domain

-Step 9: Change the TTL for the NS record back to a higher value

-Step 10: Transfer domain registration to Amazon Route 53

-Step 11: Re-enable DNSSEC signing (if required)

Ensure that you properly set the TTL of your current DNS service and Amazon Route 53 before you update your DNS records. By default, the typical TTL setting for the NS record is 172800 seconds, which is equivalent to two days. This means that it would take 2 days for your DNS change to be propagated. You have to lower the TTL settings when you conduct the migration and then set it back to its default value after the domain was successfully moved to Amazon Route 53.
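
To illustrate the TTL adjustments on the Route 53 side (the hosted zone ID and domain below are hypothetical, and the same change also has to be made at the current DNS service provider), the NS record can be re-published with a lower TTL before the cutover and raised back to 172800 seconds afterwards:

import boto3

route53 = boto3.client("route53")

ZONE_ID = "Z3EXAMPLE"         # hypothetical Route 53 hosted zone ID
DOMAIN = "tutorialsdojo.com."

# Fetch the current NS record so its name server values can be reused as-is.
rrsets = route53.list_resource_record_sets(
    HostedZoneId=ZONE_ID, StartRecordName=DOMAIN, StartRecordType="NS", MaxItems="1"
)
ns_record = rrsets["ResourceRecordSets"][0]

# Re-publish the same NS record with a lower TTL (300 seconds) before the cutover;
# run the same call with TTL=172800 once the migration is complete.
route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": DOMAIN,
            "Type": "NS",
            "TTL": 300,
            "ResourceRecords": ns_record["ResourceRecords"],
        },
    }]},
)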

Updating the Name Server (NS) records should happen before you transfer the domain registration of your web domain to Amazon Route 53. If you’ve configured DNSSEC for your domain, you should remove the Delegation Signer (DS) record from the parent zone before you migrate your domain to Route 53. If the parent zone is hosted through Route 53 or another registrar, contact them to remove the DS record. Because it isn’t currently possible to have DNSSEC signing enabled across two providers, you must remove any DS or DNSKEYs to deactivate DNSSEC. This temporarily signals to DNS resolvers to disable DNSSEC validation.

Hence, the correct answer is: Create a new hosted zone and DNS records in Amazon Route 53. Lower the TTL (time to live) setting of the NS (name server) record to 300 and remove the Delegation Signer (DS) record from the parent zone in the current DNS service provider. Lower the TTL setting of the NS record in the Route 53 hosted zone. Once the TTL expires, update the NS records to use Route 53 name servers and monitor the traffic. Increase the TTL to 172800 seconds for the NS record once the migration is complete. Transfer domain registration to Amazon Route 53 and re-enable DNSSEC signing.

The option that says: Create a new hosted zone and DNS records in Amazon Route 53 with the DNSSEC feature enabled. Lower the TTL (time to live) setting of the NS (name server) record to 300. Increase the TTL setting of the NS record in the Route 53 hosted zone. Once the TTL expires, update the NS records to use Route 53 name servers and monitor the traffic. Lower the TTL to 600 seconds for the NS record once the migration is complete. Transfer domain registration to Amazon Route 53 and re-enable DNSSEC signing is incorrect because you should remove the Delegation Signer (DS) record first on the current DNS service. Not doing this first step will cause DNSSEC issues during migration. Keep in mind that you should set the TTL to a higher value, like 172800 seconds, and not lower as this affects the DNS performance of your website.

The option that says: Create a new hosted zone and DNS records in Amazon Route 53. Lower the TTL (time to live) setting of the NS (name server) record to 300 and remove the Delegation Signer (DS) record from the parent zone in the current DNS service provider. Transfer domain registration to Amazon Route 53, where the DNSSEC will be automatically re-enabled. Lower the TTL setting of the NS record in the Route 53 hosted zone. Once the TTL expires, update the NS records to use Route 53 name servers and monitor the traffic. Set the TTL to 600 seconds for the NS record once the migration is complete is incorrect. First off, the DNSSEC feature is not automatically re-enabled in Route 53 after migration. The process of transferring the domain registration to Route 53 should happen after the DNS Service was successfully ported and not before. This may cause some issues during the web domain migration. You should also set the TTL to 172800 seconds for the NS record once the migration is complete, not 600 seconds (5 minutes), to optimize DNS calls.

The option that says: Create a new hosted zone and DNS records in Amazon Route 53. Remove the Delegation Signer (DS) record from the parent zone in the current DNS service provider. Transfer domain registration to Amazon Route 53, where the DNSSEC will be automatically re-enabled. Update the NS records to use Route 53 name servers and monitor the traffic. Configure the TTL to 172800 seconds for the NS record once the migration is complete is incorrect because the TTL setting must be lowered on the current DNS service provider before porting the NS records to Route 53. This solution will cause downtime to the websites as the change will only be reflected once the TTL setting (which is usually set to 172800 seconds or 2 days) of the current DNS service provider has elapsed. In addition, the DNSSEC feature is not automatically re-enabled in Amazon Route 53. This feature must be manually configured.

References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring-dnssec.html

Check out this Amazon Route 53 Cheat Sheet:
https://tutorialsdojo.com/amazon-route-53/

Question 8

A multinational organization plans to adopt a hybrid cloud infrastructure that requires a dedicated connection between its on-premises data center and virtual private cloud (VPC) in AWS. The connection must allow the cloud-based applications hosted in EC2 instances to fetch data from the organization’s on-premises file servers with a more consistent network experience than Internet-based connections.

Which of the following options should the Network team implement to satisfy this requirement?

  1. Set up a VPC Peering connection between the VPC and the on-premises data center.
  2. Set up an AWS Direct Connect connection between the VPC and the on-premises data center.
  3. Set up an Amazon Connect omnichannel connection between the VPC and the on-premises data center.
  4. Set up an AWS VPN CloudHub between the VPC and the on-premises data center.

Correct Answer: 2

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment, which, in many cases, can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

Hence, the correct answer is: Set up an AWS Direct Connect connection between the VPC and the on-premises data center

The option that says: Set up a VPC Peering connection between the VPC and the on-premises data center is incorrect because VPC Peering is primarily used to connect two or more VPCs, not a VPC and an on-premises network. You can’t set up a connection between your VPC and your on-premises data center using VPC Peering.

The option that says: Set up an AWS VPN CloudHub between the VPC and the on-premises data center is incorrect because a VPN is an Internet-based connection, unlike Direct Connect, which provides a dedicated connection. An Internet-based connection means that the traffic between the VPC and the on-premises network traverses the public Internet, which results in a less consistent network experience. You should use Direct Connect instead.

The option that says: Set up an Amazon Connect omnichannel connection between the VPC and the on-premises data center is incorrect because Amazon Connect is just an easy-to-use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost. This service is not suitable for integrating your VPC and on-premises network.

References:
https://aws.amazon.com/directconnect/
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-network-to-amazon.html

Check out this AWS Direct Connect Cheat Sheet:
https://tutorialsdojo.com/aws-direct-connect/

Question 9

A company is launching a web application that will be hosted in an Amazon ECS cluster with an EC2 launch type. The Network Engineer configured the associated security group and network ACL of the instances to allow inbound traffic on ports 80 and 443. After the deployment, the QA team noticed that the application is unreachable over the public Internet.

What should the Engineer do to rectify this issue?

  1. Ensure that the security group has a rule that allows outbound traffic on port 80 and port 443.
  2. Verify that the network ACL has a rule that allows Inbound traffic on the ephemeral ports 1024 – 65535.
  3. Set the network mode to bridge to ensure that every task that is launched from the task definition gets its own elastic network interface (ENI) and a primary private IP address.
  4. Ensure that the network ACL has a rule that allows Outbound traffic on the ephemeral ports 1024 – 65535.

Correct Answer: 4

To enable the connection to a service running on an instance, the associated network ACL must allow both inbound traffic on the port that the service is listening on as well as allow outbound traffic from ephemeral ports. When a client connects to a server, a random port from the ephemeral port range (1024-65535) becomes the client’s source port.

The designated ephemeral port then becomes the destination port for return traffic from the service, so outbound traffic to the ephemeral port must be allowed in the network ACL. By default, network ACLs allow all inbound and outbound traffic. If your network ACL is more restrictive, then you need to explicitly allow traffic to the ephemeral port range.

You might want to use a different range for your network ACLs depending on the type of client that you’re using or with which you’re communicating. The client that initiates the request chooses the ephemeral port range. The range varies depending on the client’s operating system.

-Many Linux kernels (including the Amazon Linux kernel) use ports 32768-61000.

-Requests originating from Elastic Load Balancing use ports 1024-65535.

-Windows operating systems through Windows Server 2003 use ports 1025-5000.

-Windows Server 2008 and later versions use ports 49152-65535.

-A NAT gateway uses ports 1024-65535.

-AWS Lambda functions use ports 1024-65535.

For example, if a request comes into a web server in your VPC from a Windows XP client on the internet, your network ACL must have an outbound rule to enable traffic destined for ports 1025-5000.

If an instance in your VPC is the client initiating a request, your network ACL must have an inbound rule to enable traffic destined for the ephemeral ports specific to the type of instance (Amazon Linux, Windows Server 2008, and so on).

In practice, to cover the different types of clients that might initiate traffic to public-facing instances in your VPC, you can open ephemeral ports 1024-65535. However, you can also add rules to the ACL to deny traffic on any malicious ports within that range. Ensure that you place the deny rules earlier in the table than the allow rules that open the wide range of ephemeral ports.
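As a quick illustration, an outbound network ACL rule covering the ephemeral port range can be added with the AWS SDK for Python (Boto3). The network ACL ID and rule number below are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Outbound (Egress=True) rule that allows return traffic to the ephemeral
# port range. The network ACL ID and rule number are placeholders.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=120,
    Protocol="6",          # TCP
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)

Any deny rules for specific malicious ports would use lower rule numbers so that they are evaluated before this broad allow rule.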

Hence, the correct answer is: Ensure that the network ACL has a rule that allows Outbound traffic on the ephemeral ports 1024 – 65535.

The option that says: Ensure that the security group has a rule that allows outbound traffic on port 80 and port 443 is incorrect because security groups are stateful, which means that if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Likewise, traffic that is allowed in is also permitted to flow back out, regardless of outbound rules.

The option that says: Verify that the network ACL has a rule that allows Inbound traffic on the ephemeral ports 1024 – 65535 is incorrect because you should allow the Outbound traffic to the ephemeral port range (1024-65535) in the network ACL and not the Inbound traffic.

The option that says: Set the network mode to bridge to ensure that every task that is launched from the task definition gets its own elastic network interface (ENI) and a primary private IP address is incorrect. If the network mode is set to bridge, the task utilizes Docker’s built-in virtual network, which runs inside each container instance. A better solution is to use the awsvpc network mode. When you use the awsvpc network mode in your task definitions, every task that is launched from that task definition gets its own elastic network interface (ENI) and a primary private IP address.

References: 
https://aws.amazon.com/premiumsupport/knowledge-center/resolve-connection-sg-acl-inbound/
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports
https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/

Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/

Question 10

A Network Administrator is migrating an on-premises application to AWS Cloud to improve its scalability and availability. The application will be hosted in Amazon EC2 Instances that are deployed on a private subnet with an Application Load Balancer in front to distribute IPv4 traffic. The users of the application are internal employees only.

As part of its processing, the application will also pull massive amounts of data from an external API service over the Internet. The Administrator must allow the EC2 instances to fetch data from the Internet but prevent external hosts over the Internet from initiating a connection with the instances. When the application downloads data from the Internet, the connection must be highly available, and the bandwidth should scale up to 45 Gbps.

The solution must also support long-running queries and downloads initiated by the EC2 instances. Some requests may take a total of 10 minutes to complete.

What is the MOST suitable solution that the Administrator should implement?

  1. Launch a NAT Instance in a public subnet. Modify the route table to block any incoming traffic from the Internet. Configure TCP keepalive on the EC2 instances with a value of more than 600 seconds.
  2. Launch a NAT Gateway in a private subnet. Configure the route table to direct the outgoing Internet traffic from the private subnet to the NAT gateway. Enable TCP keepalive on the EC2 instances with a value of more than 600 seconds.
  3. Set up a Direct Connect Gateway with five 10 Gbps AWS Direct Connect connections. Associate Direct Connect Gateway to the Internet gateway of the VPC. Configure TCP keepalive on the EC2 instances with a value of less than 600 seconds.
  4. Launch a NAT Gateway in a public subnet. Update the route table to direct the outgoing Internet traffic from the private subnet to the NAT gateway. Enable TCP keepalive on the EC2 instances with a value of less than 600 seconds.

Correct Answer: 4

You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services but prevent the Internet from initiating a connection with those instances.

To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. After you’ve created a NAT gateway, you must update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet.

In this scenario, it is better to use a NAT gateway since it provides better availability and higher bandwidth, and it requires less administrative effort than a NAT instance. You also have to deploy the NAT gateway in a public subnet so that it can communicate with the Internet.

If a connection that’s using a NAT gateway is idle for 600 seconds or more, the connection times out by default. When a connection times out, a NAT gateway returns an RST packet to any resources behind the NAT gateway that attempt to continue the connection (it does not send a FIN packet).

To prevent the connection from being dropped, you can initiate more traffic over the connection. Alternatively, you can enable TCP keepalive on the instance with a value of less than 600 seconds. Take note that the value should be less, and not more, than the idle timeout value to ensure that keepalive packets are sent before the timeout takes effect and the connection stays open.
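The steps above can be sketched with the AWS SDK for Python (Boto3); the subnet and route table IDs are placeholders, and the keepalive setting is applied on the instances themselves:

import boto3

ec2 = boto3.client("ec2")

# Create the NAT gateway in a public subnet using a new Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb22222",      # public subnet (placeholder)
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Point Internet-bound traffic from the private subnet at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0ccc3333dddd44444",     # private subnet route table (placeholder)
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)

# On the EC2 instances (Linux), keepalive probes must start before the
# 600-second NAT gateway idle timeout, for example:
#   sudo sysctl -w net.ipv4.tcp_keepalive_time=300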

Hence, the correct answer is: Launch a NAT Gateway in a public subnet. Update the route table to direct the outgoing Internet traffic from the private subnet to the NAT gateway. Enable TCP keepalive on the EC2 instances with a value of less than 600 seconds.

The option that says: Launch a NAT Instance in a public subnet. Modify the route table to block any incoming traffic from the Internet. Configure TCP keepalive on the EC2 instances with a value of more than 600 seconds is incorrect because it is not appropriate to use the route table to block the incoming Internet traffic to your VPC. Moreover, a NAT instance is not highly available, and its bandwidth can’t scale up to 45 Gbps, unlike a NAT gateway. You should also enable TCP keepalive on the EC2 instances with a value of less than 600 seconds, not more than the idle timeout.

The option that says: Launch a NAT Gateway in a private subnet. Configure the route table to direct the outgoing Internet traffic from the private subnet to the NAT gateway. Enable TCP keepalive on the EC2 instances with a value of more than 600 seconds is incorrect because the NAT Gateway must be launched in the public subnet. In addition, the value for the TCP keepalive should be less, and not more, than the idle timeout value. Setting it to more than 600 seconds would still allow idle connections to be dropped after 10 minutes.

The option that says: Set up a Direct Connect Gateway with five 10 Gbps AWS Direct Connect connections. Associate Direct Connect Gateway to the Internet gateway of the VPC. Configure TCP keepalive on the EC2 instances with a value of less than 600 seconds is incorrect because AWS Direct Connect is primarily used to link your internal on-premises network to your AWS VPC. You can’t use it to let instances in a private subnet connect to the Internet or other AWS services while preventing the Internet from initiating a connection with those instances.

References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html
https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-troubleshooting.html#nat-gateway-troubleshooting-timeout

Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/

For more practice questions like these and to further prepare you for the actual AWS Certified Advanced Networking Specialty ANS-C01 exam, we recommend that you take our top-notch AWS Certified Advanced Networking Specialty Practice Exams, which have been regarded as the best in the market. 

Also, check out our AWS Certified Advanced Networking Specialty ANS-C01 exam study guide here.

AZ-104 Microsoft Azure Administrator Sample Exam Questions
https://tutorialsdojo.com/az-104-microsoft-azure-administrator-sample-exam-questions/

Here are 10 AZ-104 Microsoft Azure Administrator practice exam questions to help you gauge your readiness for the actual exam.

Question 1

Your company has an Azure Storage account named TutorialsDojo1.

You have to copy your files hosted on your on-premises network to TutorialsDojo1 using AzCopy.

What Azure Storage services will you be able to copy your data into?

  1. Table and Queue only
  2. Blob, Table, and File only
  3. Blob, File, Table, and Queue
  4. Blob and File only

Correct Answer: 4

The Azure Storage platform is Microsoft’s cloud storage solution for modern data storage scenarios. Core storage services offer a massively scalable object store for data objects, disk storage for Azure virtual machines (VMs), a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store.

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account.
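For example, a local folder could be copied to a blob container with AzCopy v10 invoked from Python; the storage account, container, and SAS token are placeholders, and azcopy is assumed to be installed and on the PATH:

import subprocess

# Recursively upload a local folder to a Blob container. For Azure Files, the
# destination would use the file.core.windows.net endpoint instead.
source = r"C:\data\reports"
destination = "https://tutorialsdojo1.blob.core.windows.net/backups?<SAS-token>"

subprocess.run(["azcopy", "copy", source, destination, "--recursive"], check=True)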


Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data. 

Blob storage is designed for:

– Serving images or documents directly to a browser.

– Storing files for distributed access.

– Streaming video and audio.

– Writing to log files.

– Storing data for backup and restore disaster recovery, and archiving.

– Storing data for analysis by an on-premises or Azure-hosted service. 

Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token. You can generate SAS tokens; they allow specific access to a private asset for a specific amount of time.

File shares can be used for many common scenarios:

– Many on-premises applications use file shares. This feature makes it easier to migrate those applications that share data to Azure. If you mount the file share to the same drive letter that the on-premises application uses, the part of your application that accesses the file share should work with minimal, if any, changes.

– Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used by multiple developers in a group can be stored on a file share, ensuring that everybody can find them and that they use the same version.

– Diagnostic logs, metrics, and crash dumps are just three examples of data that can be written to a file share and processed or analyzed later.

Hence, the correct answer is: Blob and File only.

The option that says: Table and Queue only is incorrect because Table and Queue are not supported services by AzCopy. 

The option that says: Blob, Table, and File only is incorrect because Table is not a supported service by AzCopy. The AzCopy command-line utility can only copy blobs or files to or from a storage account.

The option that says: Blob, File, Table, and Queue is incorrect. Although Blob and File types are supported by AzCopy, the Table and Queue services are not supported.

References:
https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10

Check out this Azure Storage Overview Cheat Sheet:
https://tutorialsdojo.com/azure-storage-overview/

Azure Blob vs. Disk vs. File Storage:
https://tutorialsdojo.com/azure-blob-vs-disk-vs-file-storage/

Question 2

Your organization has deployed multiple Azure virtual machines configured to run as web servers and an Azure public load balancer named TD1.

There is a requirement that TD1 must consistently route your user’s request to the same web server every time they access it.

What should you configure?

  1. Hash based
  2. Session persistence: None
  3. Session persistence: Client IP
  4. Health probe

Correct Answer: 3

A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance Internet traffic to your VMs.


Session persistence is also known as session affinity, source IP affinity, or client IP affinity. This distribution mode uses a two-tuple (source IP and destination IP) or three-tuple (source IP, destination IP, and protocol type) hash to route to backend instances.

When using session persistence, connections from the same client will go to the same backend instance within the backend pool.

Session persistence mode has two configuration types:

– Client IP (2-tuple) – Specifies that successive requests from the same client IP address will be handled by the same backend instance.

– Client IP and protocol (3-tuple) – Specifies that successive requests from the same client IP address and protocol combination will be handled by the same backend instance.
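As a rough sketch, an existing load-balancing rule on TD1 could be switched to Client IP affinity with the Azure CLI invoked from Python; the resource group and rule names are placeholders, and the Azure CLI is assumed to be installed and signed in:

import subprocess

# Use SourceIP for 2-tuple (Client IP) affinity, or SourceIPProtocol for
# 3-tuple (Client IP and protocol) affinity.
subprocess.run(
    [
        "az", "network", "lb", "rule", "update",
        "--resource-group", "td-rg",
        "--lb-name", "TD1",
        "--name", "http-rule",
        "--load-distribution", "SourceIP",
    ],
    check=True,
)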

Hence, the correct answer is: Session persistence: Client IP.

Hash based is incorrect because this simply allows traffic from the same client IP to be routed to any healthy instance in the backend pool. You would need session persistence if you need users to connect to the same virtual machine for each request.

Session persistence: None is incorrect because this will route the user request to any healthy instance in the backend pool.

Health probe is incorrect because this is only used to determine the health status of the instances in the backend pool. During load balancer creation, configure a health probe for the load balancer to use. This health probe will determine if an instance is healthy and can receive traffic.

References:
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
https://learn.microsoft.com/en-us/azure/load-balancer/distribution-mode-concepts

Check out this Azure Load Balancer Cheat Sheet:
https://tutorialsdojo.com/azure-load-balancer/

Question 3

Your company has a Microsoft Entra ID tenant named tutorialsdojo.onmicrosoft.com and a public DNS zone for tutorialsdojo.com.

You added the custom domain name tutorialsdojo.com to Microsoft Entra ID. You need to verify that Azure can verify the domain name.

What DNS record type should you use?

  1. A
  2. RRSIG
  3. SOA
  4. MX

Correct Answer: 4

Microsoft Entra ID is a cloud-based identity and access management service that enables your employees to access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.

Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.

Every new Microsoft Entra ID tenant comes with an initial domain name, <domainname>.onmicrosoft.com. You can’t change or delete the initial domain name, but you can add your organization’s names. Adding custom domain names helps you to create user names that are familiar to your users, such as azure@tutorialsdojo.com.

You can verify your custom domain name by using TXT or MX record types.
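If the public zone for tutorialsdojo.com were hosted in Azure DNS, the verification record could be added with the Azure CLI invoked from Python. The exchange value and preference below are placeholders for the values that Microsoft Entra ID displays when you add the custom domain; the resource group name is also a placeholder:

import subprocess

# Add the MX verification record at the zone apex (@). The exchange and
# preference values shown are placeholders copied from the Entra ID portal.
subprocess.run(
    [
        "az", "network", "dns", "record-set", "mx", "add-record",
        "--resource-group", "td-rg",
        "--zone-name", "tutorialsdojo.com",
        "--record-set-name", "@",
        "--exchange", "ms12345678.msv1.invalid",
        "--preference", "32767",
    ],
    check=True,
)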

Hence, the correct answer is: MX.

A is incorrect. A records are used to map domain names to IP addresses and are unrelated to domain verification in Microsoft Entra ID.

RRSIG is incorrect. RRSIG records are used in DNSSEC (Domain Name System Security Extensions) to provide cryptographic signatures for DNS records. These signatures simply validate the authenticity of DNS data and are not used for domain ownership verification.

SOA is incorrect. SOA records provide administrative details about the domain. This record is not relevant for domain verification.

References:

https://learn.microsoft.com/en-us/entra/fundamentals/whatis
https://learn.microsoft.com/en-us/entra/fundamentals/add-custom-domain

Check out this Azure Active Directory Cheat Sheet:
https://tutorialsdojo.com/microsoft-entra-id/

Question 4

You have an existing Azure subscription that has the following Azure Storage accounts.

[Image: table of the subscription's storage accounts (tdaccount1 to tdaccount4) and their configurations]

There is a requirement to identify the storage accounts that can be converted to zone-redundant storage (ZRS) replication. This must be done only through a live migration from Azure Support.

Which of the following accounts can you convert to ZRS?

  1. tdaccount1
  2. tdaccount2
  3. tdaccount3
  4. tdaccount4

Correct Answer: 1

Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.

When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs and higher availability. The factors that help determine which redundancy option you should choose include:

– How your data is replicated in the primary region.

– Whether your data is replicated to a second region that is geographically distant to the primary region, to protect against regional disasters.

– Whether your application requires read access to the replicated data in the secondary region if the primary region becomes unavailable for any reason.

Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:

  1. Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
  2. Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region. For applications requiring high availability.
  3. Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
  4. Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.

You can switch a storage account from one type of replication to any other type, but some scenarios are more straightforward than others. If you want to add or remove geo-replication or read access to the secondary region, you can use the Azure portal, PowerShell, or Azure CLI to update the replication setting. However, if you want to change how data is replicated in the primary region, by moving from LRS to ZRS or vice versa, then you must perform a manual migration.

The following table provides an overview of how to switch from each type of replication to another:

[Image: table summarizing how to switch from each replication type to another]

You can request a live migration from LRS to ZRS in the primary region with no application downtime. To migrate from LRS to GZRS or RA-GZRS, first switch to GRS or RA-GRS and then request a live migration. Similarly, you can request a live migration from GRS or RA-GRS to GZRS or RA-GZRS. To migrate from GRS or RA-GRS to ZRS, first switch to LRS, then request a live migration.

Live migration is supported only for storage accounts that use LRS or GRS replication. If your account uses RA-GRS then you need to first change your account’s replication type to either LRS or GRS before proceeding. This intermediary step removes the secondary read-only endpoint provided by RA-GRS before migration.

Hence, the correct answer is: tdaccount1.

tdaccount2 is incorrect because you need to first change your account’s replication type to either LRS or GRS before you change to zone-redundant storage (ZRS). The requirement states that you must only request live migration.
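As a rough sketch (the resource group name is a placeholder, and the Azure CLI is assumed to be installed and signed in), this manual replication change on tdaccount2 would look like the following before any live migration could be requested:

import subprocess

# Change the replication type of tdaccount2 from RA-GRS to GRS in place.
# Only after this intermediary step can a live migration to ZRS be requested.
subprocess.run(
    [
        "az", "storage", "account", "update",
        "--name", "tdaccount2",
        "--resource-group", "td-rg",
        "--sku", "Standard_GRS",
    ],
    check=True,
)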

tdaccount3 is incorrect because a general-purpose V1 storage account type does not support zone-redundant storage (ZRS) as its replication option. Only General-purpose V2, FileStorage, and BlockBlobStorage support ZRS.

tdaccount4 is incorrect because a BlobStorage account type does not support zone-redundant storage (ZRS) as its replication option. Only General-purpose V2, FileStorage, and BlockBlobStorage support ZRS.

References:
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy
https://docs.microsoft.com/en-us/azure/storage/common/redundancy-migration

Check out these Azure Cheat Sheets:
https://tutorialsdojo.com/azure-storage-overview/
https://tutorialsdojo.com/locally-redundant-storage-lrs-vs-zone-redundant-storage-zrs/

Question 5

Your company has two virtual networks named TDVnet1 and TDVnet2. A site-to-site VPN, using a VPN Gateway (TDGW1) with static routing, connects your on-premises network to TDVnet1. On your Windows 10 computer, TD1, you’ve set up a point-to-site VPN connection to TDVnet1.

You’ve recently established a virtual network peering between TDVnet1 and TDVnet2. Tests confirm connectivity to TDVnet2 from your on-premises network and to TDVnet1 from TD1. However, TD1 is currently unable to access TDVnet2.

What steps are necessary to enable a connection from TD1 to TDVnet2?

  1. Enable transit gateway for TDVnet1.
  2. Restart TDGW1 to re-establish the connection.
  3. Download the VPN client configuration file and re-install it on TD1.
  4. Enable transit gateway for TDVnet2.

Correct Answer: 3

Point-to-Site (P2S) VPN connection allows you to create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client’s computer. This solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from home or a conference. P2S VPN is also a helpful solution to utilize instead of S2S VPN when you have only a few clients that need to connect to a VNet. 


As part of the Point-to-Site configuration, you install a certificate and a VPN client configuration package, which are contained in a zip file. Configuration files provide the settings required for native Windows, Mac IKEv2 VPN, or Linux clients to connect to a virtual network over Point-to-Site connections that use native Azure certificate authentication, and they are specific to the VPN configuration for the virtual network.

Take note that after the point-to-site connection between TD1 and TDVnet1 was established, the network topology changed when you created the virtual network peering between TDVnet1 and TDVnet2. Whenever the topology of your network changes, you need to download and re-install the VPN client configuration file.
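A rough sketch of regenerating the client package with the Azure CLI invoked from Python (the resource group name is a placeholder); the command returns a URL from which the new configuration package can be downloaded and then re-installed on TD1:

import subprocess

# Regenerate the point-to-site VPN client configuration package after the
# topology change introduced by the new virtual network peering.
subprocess.run(
    [
        "az", "network", "vnet-gateway", "vpn-client", "generate",
        "--resource-group", "td-rg",
        "--name", "TDGW1",
    ],
    check=True,
)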

Hence, the correct answer is: Download the VPN client configuration file and re-install it on TD1.

The option that says: Restart TDGW1 to re-establish the connection is incorrect because restarting the VPN gateway is only done when you lose cross-premises VPN connectivity on one or more Site-to-Site VPN tunnels. In this scenario, TD1 can connect to TDVnet1 which implies that TDGW1 is working and running.

The options that say: Enable transit gateway for TDVnet1 and Enable transit gateway for TDVnet2 are incorrect. Transit gateway is a peering property that lets one virtual network use the VPN gateway in the peered virtual network for cross-premises or VNet-to-VNet connectivity. Since TDVnet2 can connect to the on-premises network, it means that the transit gateway is already enabled and as such, enabling the transit gateway is not necessary.

References:
https://azure.microsoft.com/en-us/services/vpn-gateway/
https://docs.microsoft.com/en-us/azure/vpn-gateway/point-to-site-about

Check out this Azure VPN Gateway Cheat Sheet:
https://tutorialsdojo.com/azure-vpn-gateway/

Question 6

You have a file share in your Azure subscription named Manila-Subscription-01.

You plan to synchronize files from your on-premises file server named TDFileServer1 to Azure.

You created an Azure file share and a storage sync service.

Which four actions should you perform in sequence to synchronize files from TDFileServer1 to Azure?

Instructions: To answer, drag the appropriate item from the column on the left to its description on the right. Each correct match is worth one point.

[Image: list of available actions to arrange in the correct sequence]

Correct Answer: 

Deploy the Azure File Sync agent to TDFileServer1 

Register TDFileServer1 with Storage Sync Service 

Create a sync group and a cloud endpoint 

Create a server endpoint 

Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token. You can generate SAS tokens; they allow specific access to a private asset for a specific amount of time.

File shares can be used for many common scenarios:

1. Many on-premises applications use file shares. This feature makes it easier to migrate those applications that share data to Azure. If you mount the file share to the same drive letter that the on-premises application uses, the part of your application that accesses the file share should work with minimal, if any, changes.

2. Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used by multiple developers in a group can be stored on a file share, ensuring that everybody can find them and that they use the same version.

3. Resource logs, metrics, and crash dumps are just three examples of data that can be written to a file share and processed or analyzed later.


You can use Azure File Sync to centralize your organization’s file shares in Azure Files while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms Windows Server into a quick cache of your Azure file share. You can use any protocol that’s available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.

You can sync TDFileServer1 to Azure using the following steps in order:

1. Prepare Windows Server to use with Azure File Sync

– You need to disable Internet Explorer Enhanced Security Configuration in your server. This is required only for initial server registration. You can re-enable it after the server has been registered.

2. Deploy the Storage Sync Service

– Allows you to create sync groups that contain Azure file shares across multiple storage accounts and multiple registered Windows Servers.

3. Deploy the Azure File Sync agent to TDFileServer1

– The Azure File Sync agent is a downloadable package that enables Windows Server to be synced with an Azure file share.

4. Register TDFileServer1 with Storage Sync Service

– This establishes a trust relationship between your server (or cluster) and the Storage Sync Service. A server can only be registered to one Storage Sync Service and can sync with other servers and Azure file shares associated with the same Storage Sync Service.

5. Create a sync group and a cloud endpoint

– A sync group defines the sync topology for a set of files. Endpoints within a sync group are kept in sync with each other.

6. Create a server endpoint

– A server endpoint represents a specific location on a registered server, such as a folder on a server volume.

Hence, the correct order of deployment is:

1. Deploy the Azure File Sync agent to TDFileServer1

2. Register TDFileServer1 with Storage Sync Service

3. Create a sync group and a cloud endpoint

4. Create a server endpoint

References:
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction
https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide

Check out this Azure Files Cheat Sheet:
https://tutorialsdojo.com/azure-file-storage/

Question 7

You have an Azure subscription named Davao-Subscription1.

You will be deploying a three-tier application as shown below:

[Diagram: three-tier application architecture]

Due to compliance requirements, you need to find a solution for the following:

  • Traffic between the web tier and application tier must be spread equally across all the virtual machines.

  • The web tier must be protected from SQL injection attacks.

Which Azure solution would you recommend for each requirement?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

[Image: drop-down answer options for each requirement]

Correct Answer: 

Traffic between the web tier and application tier must be spread equally across all the virtual machines.: Internal Load Balancer

The web tier must be protected from SQL injection attacks.: Application Gateway WAF tier

Private (or Internal) Load balancer provides a higher level of availability and scale by spreading incoming requests across virtual machines (VMs). Private load balancer distributes traffic to resources that are inside a virtual network.

Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. For example, you can route traffic based on the incoming URL. So if /images are in the incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images. If /video is in the URL, that traffic is routed to another pool that’s optimized for videos.


Application Gateway web application firewall (WAF) protects web applications from common vulnerabilities and exploits. This is done through rules that are defined based on the OWASP core rule sets 3.1, 3.0, or 2.2.9. These rules can be disabled on a rule-by-rule basis.

The WAF protects against the following web vulnerabilities:

– SQL injection attacks

– Cross-site scripting attacks

– Other common attacks, such as command injection, HTTP request smuggling, HTTP response splitting, and remote file inclusion

– HTTP protocol violations

– HTTP protocol anomalies, such as missing host user-agent and accept headers

– Bots, crawlers, and scanners

– Common application misconfigurations (for example, Apache and IIS)

Take note that Internal load balancers distribute traffic within a VNET while public load balancers balance traffic to and from an internet-connected endpoint.

Therefore, you have to use the Internal Load Balancer to equally spread traffic between your web tier and application tier virtual machines.

Meanwhile, to protect your web tier from SQL injection attacks, you need to deploy the Application Gateway WAF tier.

Public Load Balancer is incorrect because you only use this if you want to load balance Internet traffic to your virtual machines. Public Load Balancer also does not support WAF protection for your web tier.

Traffic Manager is incorrect because Traffic Manager does not protect your application from SQL injection attacks. This service is mainly used for DNS-based traffic load balancing.

Application Gateway Standard tier is incorrect because the standard tier cannot protect your web tier from SQL Injection attacks. You have to use the Application Gateway WAF tier instead.

References:
https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
https://docs.microsoft.com/en-us/azure/web-application-firewall/ag/ag-overview
https://docs.microsoft.com/en-us/azure/application-gateway/understanding-pricing

Check out these Azure Networking Services Cheat Sheets:
https://tutorialsdojo.com/azure-load-balancer/
https://tutorialsdojo.com/azure-application-gateway/

Question 8

You have the following resources deployed in Azure:

[Image: table of the deployed Azure resources]

There is a requirement to connect TDVnet1 and TDVnet2.

What should you do first?

  1. Create virtual network peering.
  2. Change the address space of TDVnet2.
  3. Transfer TDVnet1 to TD2.
  4. Transfer VM1 to TD2.

Correct Answer: 1

Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own datacenter but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.

There are two ways to connect two virtual networks, based on your specific scenario and needs, you might want to pick one over the other.

VNet Peering provides a low-latency, high-bandwidth connection useful in scenarios such as cross-region data replication and database failover. Since traffic is completely private and remains on the Microsoft backbone, customers with strict data policies prefer to use VNet Peering as the public Internet is not involved. Since there is no gateway in the path, there are no extra hops, ensuring low-latency connections.

VPN Gateways provide a limited-bandwidth connection and are useful in scenarios where encryption is needed but bandwidth restrictions are tolerable. In these scenarios, customers are also not latency-sensitive.
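A rough sketch of creating one side of the peering with the Azure CLI invoked from Python (the resource group name and remote VNet resource ID are placeholders; a matching peering must also be created from TDVnet2 back to TDVnet1):

import subprocess

# Peer TDVnet1 to TDVnet2. A corresponding peering from TDVnet2 to TDVnet1
# is also required for traffic to flow in both directions.
subprocess.run(
    [
        "az", "network", "vnet", "peering", "create",
        "--resource-group", "td-rg",
        "--vnet-name", "TDVnet1",
        "--name", "TDVnet1-to-TDVnet2",
        "--remote-vnet", "/subscriptions/<sub-id>/resourceGroups/td-rg/providers/Microsoft.Network/virtualNetworks/TDVnet2",
        "--allow-vnet-access",
    ],
    check=True,
)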

Hence, the correct answer is: Create virtual network peering.

The option that says: Change the address space of TDVnet2 is incorrect because the address spaces of TDVnet1 (10.1.0.0/16) and TDVnet2 (10.10.0.0/18) do not overlap. Therefore, you can directly connect the two virtual networks without changing their IP address ranges.

The options that say: Transfer TDVnet1 to TD2 and Transfer VM1 to TD2 are incorrect because VNet-to-VNet connections that use VPN gateways work across Microsoft Entra tenants. You can also connect two virtual networks that are in different subscriptions.

References: 
https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview
https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal

Check out this Azure Virtual Network Cheat Sheet:
https://tutorialsdojo.com/azure-virtual-network-vnet/

Question 9

You have an Azure subscription that contains an Azure virtual network named TDVnet1 with an address space of 10.1.0.0/18 and a subnet named TDSub1 with an address space of 10.1.0.0/22.

Your on-premises network has multiple branch offices, and you plan to connect them to Azure using a site-to-site VPN. You need to ensure that routing between your branch offices and Azure is dynamic and can adapt automatically to network changes.

Which four actions should you perform in sequence?

Instructions: To answer, drag the appropriate item from the column on the left to its description on the right. Each correct match is worth one point.

Correct Answer: 

Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own datacenter but brings with it additional benefits of Azure’s infrastructure, such as scale, availability, and isolation.

A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it.

You can create a site-to-site VPN connection by deploying the following in order:

1. Deploy a virtual network

2. Deploy a gateway subnet

– You need to create a gateway subnet for your VNet in order to configure a virtual network gateway. All gateway subnets must be named ‘GatewaySubnet’ to work properly. Don’t name your gateway subnet something else. It is recommended that your gateway subnet be /27 or bigger (/26, /25, etc.).

3. Deploy a VPN gateway

– A VPN gateway is a specialized virtual network gateway used to establish encrypted communication between an Azure virtual network and an on-premises environment over the public Internet. Optionally, you can enable Border Gateway Protocol (BGP) on a VPN gateway to allow Azure and your on-premises network to exchange routing information automatically. This eliminates the need for manual updates to route tables and ensures the network adapts dynamically to topology changes.

4. Deploy a local network gateway

– The local network gateway is a specific object that represents your on-premises location (the site) for routing purposes.

5. Deploy a VPN connection

– A VPN connection creates the link for the VPN gateway and local network gateway. It also gives you the status of your site-to-site connection.

Since you have deployed TDVnet1, the next step is to deploy a gateway subnet.
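Two of these steps can be sketched with the Azure CLI invoked from Python: creating the gateway subnet (which must be named exactly GatewaySubnet) and a route-based, BGP-enabled VPN gateway. The resource group, address prefix, public IP name, and ASN are placeholders:

import subprocess

# 1. Gateway subnet inside TDVnet1 (10.1.0.0/18); the prefix must not overlap
#    with TDSub1 (10.1.0.0/22).
subprocess.run(
    [
        "az", "network", "vnet", "subnet", "create",
        "--resource-group", "td-rg",
        "--vnet-name", "TDVnet1",
        "--name", "GatewaySubnet",
        "--address-prefixes", "10.1.63.224/27",
    ],
    check=True,
)

# 2. Route-based VPN gateway with BGP enabled by specifying an ASN.
subprocess.run(
    [
        "az", "network", "vnet-gateway", "create",
        "--resource-group", "td-rg",
        "--name", "TDVnet1-GW",
        "--vnet", "TDVnet1",
        "--public-ip-address", "TDVnet1-GW-pip",
        "--gateway-type", "Vpn",
        "--vpn-type", "RouteBased",
        "--sku", "VpnGw1",
        "--asn", "65515",
    ],
    check=True,
)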

Hence, the correct order of deployment is:

1. Deploy a gateway subnet

2. Deploy a BGP-enabled VPN gateway

3. Deploy a local network gateway

4. Deploy a VPN connection

References:
https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview
https://docs.microsoft.com/en-us/azure/vpn-gateway/tutorial-site-to-site-portal

Check out this Azure VPN Gateway Cheat Sheet:
https://tutorialsdojo.com/azure-vpn-gateway/

Question 10

Your company has an Azure subscription that contains the following resources:

 
[Image: table of the deployed resources and their configurations]

You plan to create an internal load balancer with the following parameters:

  • Name: TDB1
  • SKU: Basic
  • Subnet: TDSub2
  • Virtual network: TDVnet1

For each of the following items, choose Yes if the statement is true or choose No if the statement is false. Take note that each correct item is worth one point.

[Image: Yes/No statements about which virtual machine pairs TDB1 can load balance]

Correct Answer: Yes, No, No

Private (or Internal) Load balancer provides a higher level of availability and scale by spreading incoming requests across virtual machines (VMs). A private load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the frontend IP addresses of a virtual network that is load balanced. Front-end IP addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources.

[Image: Azure Load Balancer SKU comparison]

Take note that in this scenario, you need to determine whether traffic between the virtual machines can be load balanced according to the parameters of TDB1. TD1 and TD2 are the only virtual machines that are associated with an availability set. As shown in the SKU comparison above, only virtual machines within a single availability set or virtual machine scale set can be used as backend pool endpoints for load balancers that use the Basic SKU.

The backend pool is a critical component of the load balancer. The backend pool defines the group of resources that will serve traffic for a given load-balancing rule.

Hence, this statement is correct: Traffic between TD1 and TD2 can be load balanced by TDB1

The following statements are incorrect because TDB1 uses the Basic SKU, and the virtual machines below are not part of a single availability set or virtual machine scale set, so TDB1 cannot load balance traffic between them.

– Traffic between TD3 and TD4 can be load balanced by TDB1

– Traffic between TD5 and TD6 can be load balanced by TDB1

References:
https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
https://docs.microsoft.com/en-us/azure/load-balancer/skus

Check out this Azure Load Balancer Cheat Sheet:
https://tutorialsdojo.com/azure-load-balancer/

For more practice questions like these and to further prepare you for the actual AZ-104 Microsoft Azure Administrator exam, we recommend that you take our top-notch AZ-104 Microsoft Azure Administrator Practice Exams, which simulate the real unique question types in the AZ-104 exam such as drag and drop, dropdown, and hotspot.

Also, check out our AZ-104 Microsoft Azure Administrator exam study guide here.

AWS Certified Security Specialty SCS-C02 Sample Exam Questions
https://tutorialsdojo.com/aws-certified-security-specialty-scs-c01-scs-c02-sample-exam-questions/

Here are 10 AWS Certified Security Specialty SCS-C02 practice exam questions to help you gauge your readiness for the actual exam.

Question 1

A leading hospital has a web application hosted in AWS that will store sensitive Personally Identifiable Information (PII) of its patients in an Amazon S3 bucket. Both the master keys and the unencrypted data should never be sent to AWS to comply with the strict compliance and regulatory requirements of the company.

Which S3 encryption technique should the Security Engineer implement?

  1. Implement an Amazon S3 client-side encryption with a KMS key.
  2. Implement an Amazon S3 client-side encryption with a client-side master key.
  3. Implement an Amazon S3 server-side encryption with a KMS managed key.
  4. Implement an Amazon S3 server-side encryption with customer provided key.

Correct Answer: 2

Client-side encryption is the act of encrypting data before sending it to Amazon S3. To enable client-side encryption, you have the following options:

    – Use an AWS KMS key.

    – Use a client-side master key.

When using an AWS KMS key to enable client-side data encryption, you provide an AWS KMS key ID (KeyId) to AWS. On the other hand, when you use client-side master key for client-side data encryption, your client-side master keys and your unencrypted data are never sent to AWS. It’s important that you safely manage your encryption keys because if you lose them, you can’t decrypt your data.

 

 

This is how client-side encryption using client-side master key works:

When uploading an object – You provide a client-side master key to the Amazon S3 encryption client. The client uses the master key only to encrypt the data encryption key that it generates randomly. The process works like this:

    1. The Amazon S3 encryption client generates a one-time-use symmetric key (also known as a data encryption key or data key) locally. It uses the data key to encrypt the data of a single Amazon S3 object. The client generates a separate data key for each object.

    2. The client encrypts the data encryption key using the master key that you provide. The client uploads the encrypted data key and its material description as part of the object metadata. The client uses the material description to determine which client-side master key to use for decryption.

    3. The client uploads the encrypted data to Amazon S3 and saves the encrypted data key as object metadata (x-amz-meta-x-amz-key) in Amazon S3.

When downloading an object – The client downloads the encrypted object from Amazon S3. Using the material description from the object’s metadata, the client determines which master key to use to decrypt the data key. The client uses that master key to decrypt the data key and then uses the data key to decrypt the object. 
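The envelope pattern above can be sketched in Python with Boto3 and the cryptography library. This is a simplified illustration rather than the official Amazon S3 encryption client: the bucket name and file are placeholders, and real key management (persisting and protecting the client-side master key) is omitted:

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

s3 = boto3.client("s3")

# The client-side master key is generated and kept locally; it never leaves
# the client and is never sent to AWS.
master_key = AESGCM.generate_key(bit_length=256)

# 1. Generate a one-time data key and encrypt the object locally.
data_key = AESGCM.generate_key(bit_length=256)
object_nonce = os.urandom(12)
with open("patient-record.json", "rb") as f:
    ciphertext = AESGCM(data_key).encrypt(object_nonce, f.read(), None)

# 2. Wrap (encrypt) the data key with the master key.
wrap_nonce = os.urandom(12)
wrapped_key = AESGCM(master_key).encrypt(wrap_nonce, data_key, None)

# 3. Upload only the ciphertext; the wrapped data key travels as object
#    metadata, similar to the x-amz-meta-x-amz-key metadata described above.
s3.put_object(
    Bucket="td-patient-records",        # placeholder bucket name
    Key="patient-record.json.enc",
    Body=ciphertext,
    Metadata={
        "x-amz-key": wrapped_key.hex(),
        "x-amz-obj-iv": object_nonce.hex(),
        "x-amz-key-iv": wrap_nonce.hex(),
    },
)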

Hence, the correct answer is: Implementing an Amazon S3 client-side encryption with a client-side master key.

Implementing an Amazon S3 client-side encryption with a KMS key is incorrect because in client-side encryption with a KMS key, you provide an AWS KMS key ID (KeyId) to AWS. The scenario clearly indicates that both the master keys and the unencrypted data should never be sent to AWS.

Implementing an Amazon S3 server-side encryption with a KMS key is incorrect because the scenario mentioned that the unencrypted data should never be sent to AWS, which means that you have to use client-side encryption in order to encrypt the data first before sending it to AWS. In this way, you can ensure that there is no unencrypted data being uploaded to AWS. In addition, the master key used by Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS) is stored and managed by AWS, which directly violates the requirement of never sending the master keys to AWS.

Implementing an Amazon S3 server-side encryption with customer provided key is incorrect because, just as mentioned above, you have to use client-side encryption in this scenario instead of server-side encryption. For the S3 server-side encryption with customer-provided key (SSE-C), you actually provide the encryption key as part of your request to upload the object to S3. Using this key, Amazon S3 manages both the encryption (as it writes to disks) and decryption (when you access your objects).

References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html

Check out this AWS Key Management Service Cheat Sheet:
https://tutorialsdojo.com/aws-key-management-service-aws-kms/

Tutorials Dojo’s AWS Certified Security – Specialty Exam Study Guide:
https://tutorialsdojo.com/aws-certified-security-specialty-exam-study-path/

Question 2

An enterprise monitoring application collects data and generates audit logs of all operational activities of the company’s AWS Cloud infrastructure. The IT Security team requires that the application retain the logs for 5 years before the data can be deleted.

How can the Security Engineer meet the above requirement?

  1. Use Amazon S3 Glacier to store the audit logs and apply a Vault Lock policy.
  2. Use Amazon EBS Volumes to store the audit logs and take automated EBS snapshots every month using Amazon Data Lifecycle Manager.
  3. Use Amazon S3 to store the audit logs and enable Multi-Factor Authentication Delete (MFA Delete) for additional protection.
  4. Use Amazon EFS to store the audit logs and enable Network File System version 4 (NFSv4) file-locking mechanism.

Correct Answer: 1

An Amazon S3 Glacier (Glacier) vault can have one resource-based vault access policy and one Vault Lock policy attached to it. A Vault Lock policy is a vault access policy that you can lock. Using a Vault Lock policy can help you enforce regulatory and compliance requirements. Amazon S3 Glacier provides a set of API operations for you to manage the Vault Lock policies.

As an example of a Vault Lock policy, suppose that you are required to retain archives for one year before you can delete them. To implement this requirement, you can create a Vault Lock policy that denies users permissions to delete an archive until the archive has existed for one year. You can test this policy before locking it down. After you lock the policy, it becomes immutable. For more information about the locking process, see Amazon S3 Glacier Vault Lock. If you want to manage other user permissions that can be changed, you can use the vault access policy.
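That one-year example could be sketched with Boto3 as follows (the vault name, account ID in the ARN, and region are placeholders; the scenario itself would use a retention of roughly 1,825 days for five years):

import json
import boto3

glacier = boto3.client("glacier")

# Deny archive deletion until an archive is at least 365 days old.
vault_lock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "deny-deletes-based-on-archive-age",
            "Principal": "*",
            "Effect": "Deny",
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:111122223333:vaults/audit-logs",
            "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
        }
    ],
}

# Initiate the lock (this starts the 24-hour in-progress window), then complete
# it with the returned lock ID to make the policy immutable.
response = glacier.initiate_vault_lock(
    accountId="-",                # "-" means the account that owns the credentials
    vaultName="audit-logs",
    policy={"Policy": json.dumps(vault_lock_policy)},
)
glacier.complete_vault_lock(
    accountId="-",
    vaultName="audit-logs",
    lockId=response["lockId"],
)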

Amazon S3 Glacier supports the following archive operations: Upload, Download, and Delete. Archives are immutable and cannot be modified. Hence, the correct answer is: Use Amazon S3 Glacier to store the audit logs and apply a Vault Lock policy.

The option that says: Use Amazon EBS Volumes to store the audit logs and take automated EBS snapshots every month using Amazon Data Lifecycle Manager is incorrect because this is not a suitable and secure solution. Anyone who has access to the EBS Volume can simply delete and modify the audit logs. Snapshots can be deleted too.

The option that says: Use Amazon S3 to store the audit logs and enable Multi-Factor Authentication Delete (MFA Delete) for additional protection is incorrect because this would still not meet the requirement. If someone has access to the S3 bucket and also has the proper MFA privileges then the audit logs can be edited.

The option that says: Use Amazon EFS to store the audit logs and enable Network File System version 4 (NFSv4) file-locking mechanism is incorrect because the data integrity of the audit logs can still be compromised if it is stored in an EFS volume with Network File System version 4 (NFSv4) file-locking mechanism and hence, not suitable as storage for the files. Although it will provide some sort of security, the file lock can still be overridden and the audit logs might be edited by someone else.

References:
https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-lock.html
https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-lock-policy.html
https://aws.amazon.com/blogs/aws/glacier-vault-lock/

Check out this Amazon S3 Glacier Cheat Sheet:
https://tutorialsdojo.com/amazon-glacier/

Question 3

For data privacy, a healthcare company has been asked to comply with the Health Insurance Portability and Accountability Act (HIPAA) in handling static user documents. They instructed their Security Engineer to ensure that all of the data being backed up or stored on Amazon S3 are durably stored and encrypted.

Which combination of actions should the Engineer implement to meet the above requirement? (Select TWO.)

  1. Encrypt the data locally first using your own encryption keys before sending the data to Amazon S3. Send the data over HTTPS.
  2. Instead of using an S3 bucket, move and store the data on Amazon EBS volumes in two AZs with encryption enabled.
  3. Instead of using an S3 bucket, migrate and securely store the data in an encrypted RDS database.
  4. Enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) on the S3 bucket with AES-256 encryption.
  5. Enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) on the S3 bucket with AES-128 encryption.

Correct Answer: 1,4

Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. For example, if you share your objects using a pre-signed URL, that URL works the same way for both encrypted and unencrypted objects.

You have three mutually exclusive options depending on how you choose to manage the encryption keys:

  1. Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
  2. Use Server-Side Encryption with AWS KMS Keys (SSE-KMS)
  3. Use Server-Side Encryption with Customer-Provided Keys (SSE-C)

Using client-side encryption and Amazon S3-Managed Keys (SSE-S3) can satisfy the requirement in the scenario. Client-side encryption is the act of encrypting data before sending it to Amazon S3 while SSE-S3 uses AES-256 encryption. Hence, the correct answers are:

– Encrypt the data locally first using your own encryption keys before sending the data to Amazon S3. Send the data over HTTPS.

– Enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) on the S3 bucket with AES-256 encryption.
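As a minimal sketch of the server-side half of this answer, an upload that requests SSE-S3 per object looks like the following in boto3; the bucket and key names are assumptions, and boto3 uses the HTTPS endpoint by default, which covers the requirement to send the data over HTTPS:

<pre>
import boto3

s3 = boto3.client("s3")  # boto3 talks to the HTTPS endpoint by default

# Hypothetical bucket and object key for illustration only.
s3.put_object(
    Bucket="hipaa-user-documents",
    Key="records/patient-12345.pdf",
    Body=b"...document bytes...",
    ServerSideEncryption="AES256",  # SSE-S3 (AES-256, S3-managed keys)
)
</pre>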

The option that says: Instead of using an S3 bucket, move and store the data on Amazon EBS volumes in two AZs with encryption enabled is incorrect because the scenario explicitly requires the data to be backed up or stored on Amazon S3, which is also more durable than EBS volumes. Objects in Amazon S3 are redundantly stored across multiple Availability Zones and can withstand the concurrent loss of data in two of them.

The option that says: Instead of using an S3 bucket, migrate and securely store the data in an encrypted RDS database is incorrect because an Amazon RDS database is not suitable storage for static documents, and the scenario requires the data to remain on Amazon S3.

The option that says: Enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) on the S3 bucket with AES-128 encryption is incorrect as Amazon S3 doesn’t provide AES-128 encryption, only AES-256.

References:

http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html

 

Check out this Amazon S3 Cheat Sheet:

https://tutorialsdojo.com/amazon-s3/

Question 4

A multinational company is developing a sophisticated web application that requires integration with multiple third-party APIs. The company’s unique keys for each API are hardcoded inside an AWS CloudFormation template.

The security team requires that the keys be passed into the template without exposing their values in plaintext. Moreover, the keys must be encrypted at rest and in transit.

Which of the following provides the HIGHEST level of security while meeting these requirements?

  1. Use AWS Systems Manager Parameter Store to store the API keys. Then, reference them in the AWS CloudFormation templates using !GetAtt AppKey.Value
  2. Use AWS Systems Manager Parameter Store to store the API keys as SecureString parameters. Then, reference them in the AWS CloudFormation templates using {{resolve:ssm:AppKey}}
  3. Utilize AWS Secrets Manager to store the API keys. Then, reference them in the AWS CloudFormation templates using {{resolve:secretsmanager:AppKey:SecretString:password}}
  4. Use an Amazon S3 bucket to store the API keys. Then, create a custom AWS Lambda function to read the keys from the S3 bucket. Reference the keys in the AWS CloudFormation templates using a custom resource that invokes the Lambda function.

Correct Answer: 3

AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface (CLI), or the Secrets Manager API and SDKs. Secrets Manager enables you to replace hardcoded credentials in your code, with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure that the secret will not be compromised by someone examining your code, because the secret simply isn’t there. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a schedule that you specify. This enables you to replace long-term secrets with short-term ones, which helps to significantly reduce the risk of compromise.

AWS CloudFormation Dynamic References is a feature that allows you to specify external values that are stored and managed in other services, such as the Systems Manager Parameter Store and AWS Secrets Manager, in your stack templates.

When you use a dynamic reference, CloudFormation retrieves the value of the specified reference when necessary during stack and change set operations. This provides a compact, powerful way for you to manage sensitive data like API keys, without exposing them in plaintext.

CloudFormation currently supports the following dynamic reference patterns:

ssm: for plaintext values stored in AWS Systems Manager Parameter Store.

ssm-secure: for secure strings stored in AWS Systems Manager Parameter Store.

secretsmanager: for entire secrets or secret values stored in AWS Secrets Manager.

Dynamic references adhere to the following pattern: {{resolve:<service>:<parameter-name>}}. Here, <service> specifies the service in which the value is stored and managed. The <parameter-name> is the name of the parameter stored in the specified service.
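To make the pattern concrete, here is a minimal sketch; the secret name AppKey and its value are assumptions, and the comment shows where the dynamic reference would go in the template instead of the plaintext key:

<pre>
import json
import boto3

secrets = boto3.client("secretsmanager")

# Hypothetical secret name and value for illustration only.
secrets.create_secret(
    Name="AppKey",
    SecretString=json.dumps({"password": "example-third-party-api-key"}),
)

# Anywhere a resource property in the CloudFormation template needs the key,
# reference it as:
#
#   {{resolve:secretsmanager:AppKey:SecretString:password}}
#
# CloudFormation resolves the value during stack operations, so the plaintext
# key never appears in the template; it stays encrypted at rest in Secrets
# Manager and in transit over the HTTPS API calls.
</pre>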

Hence, the correct answer is: Utilize AWS Secrets Manager to store the API keys. Then, reference them in the AWS CloudFormation templates using {{resolve:secretsmanager:AppKey:SecretString:password}}.

The option that says: Use AWS Systems Manager Parameter Store to store the API keys. Then, reference them in the AWS CloudFormation templates using !GetAtt AppKey.Value is incorrect. While AWS Systems Manager Parameter Store can store plaintext or encrypted strings, including API keys, the !GetAtt (Fn::GetAtt) intrinsic function only returns attributes of resources declared in the same template; it cannot retrieve values from the Parameter Store.

The option that says: Use AWS Systems Manager Parameter Store to store the API keys as SecureString parameters. Then, reference them in the AWS CloudFormation templates using {{resolve:ssm:AppKey}} is incorrect. The ssm dynamic reference pattern only retrieves plaintext parameters; SecureString parameters must be referenced with the ssm-secure pattern instead. Even then, Parameter Store does not offer features such as automatic secret rotation, which AWS Secrets Manager provides.

The option that says: Use an Amazon S3 bucket to store the API keys. Then, create a custom AWS Lambda function to read the keys from the S3 bucket. Reference the keys in the AWS CloudFormation templates using a custom resource that invokes the Lambda function is incorrect. Amazon S3 is designed for general-purpose object storage, not for managing sensitive values such as API keys: you would have to build and maintain your own access controls, encryption configuration, and auditing for the individual keys, and the custom Lambda-backed resource adds operational overhead and another component that could mishandle or expose the values. It is more secure and far simpler to use a service purpose-built for secrets, such as AWS Secrets Manager or AWS Systems Manager Parameter Store, both of which integrate directly with AWS CloudFormation through dynamic references.

References:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html

Check out this AWS Secrets Manager Cheat Sheet:
https://tutorialsdojo.com/aws-secrets-manager/

Question 5

A company is looking to store its confidential financial files in AWS, which are accessed every week. A Security Engineer was instructed to set up the storage system, which uses envelope encryption and automates key rotation. It should also provide an audit trail that shows when the encryption key was used and by whom, for security purposes.

Which of the following should the Engineer implement to satisfy the requirement with the LEAST amount of cost? (Select TWO.)

  1. Store the confidential financial files in Amazon S3.
  2. Store the confidential financial files in the Amazon S3 Glacier Deep Archive.
  3. Enable Server-Side Encryption with Customer-Provided Keys (SSE-C).
  4. Enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3).
  5. Enable Server-Side Encryption with AWS KMS Keys (SSE-KMS).

Correct Answer: 1,5

Server-side encryption is the encryption of data at its destination by the application or service that receives it. AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. Amazon S3 uses AWS KMS keys to encrypt your Amazon S3 objects.

A KMS key is a logical representation of a cryptographic key. The KMS keys include metadata, such as the key ID, creation date, description, and key state. The KMS keys also contain the key material used to encrypt and decrypt data. You can use a KMS key to encrypt and decrypt up to 4 KB (4096 bytes) of data. Typically, you use a KMS key to generate, encrypt, and decrypt the data keys that you use outside of AWS KMS to encrypt your data. This strategy is known as envelope encryption.

You have three mutually exclusive options depending on how you choose to manage the encryption keys:

Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) – Each object is encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data. 

Use Server-Side Encryption with KMS key Stored in AWS KMS keys (SSE-KMS) – Similar to SSE-S3, but with some additional benefits and charges for using this service. There are separate permissions for the use of a KMS key that provides added protection against unauthorized access of your objects in Amazon S3. SSE-KMS also provides you with an audit trail that shows when your KMS keys were used and by whom. Additionally, you can create and manage customer-managed keys or use AWS managed KMS keys that are unique to you, your service, and your Region.

Use Server-Side Encryption with Customer-Provided Keys (SSE-C) – You manage the encryption keys and Amazon S3 manages the encryption, as it writes to disks, and decryption when you access your objects.

In the scenario, the company needs to store financial files in AWS that are accessed every week, and the solution should use envelope encryption. This requirement can be fulfilled by using an Amazon S3 bucket configured with Server-Side Encryption with AWS KMS Keys (SSE-KMS). The image below shows how to enable server-side encryption with SSE-KMS in the Amazon S3 console. This option automatically encrypts new objects stored in the S3 bucket.

Amazon S3 Default encryption
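The same default encryption can be configured programmatically; here is a minimal boto3 sketch, assuming a hypothetical bucket name and KMS key alias:

<pre>
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name and KMS key alias for illustration only.
s3.put_bucket_encryption(
    Bucket="financial-files",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/financial-files-key",
                },
                "BucketKeyEnabled": True,  # reduces the number of KMS requests
            }
        ]
    },
)
</pre>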

Hence, the correct answers are:

– Store the confidential financial files in Amazon S3

– Enable Server-Side Encryption with AWS KMS Keys (SSE-KMS).

The option that says: Store the confidential financial files in the Amazon S3 Glacier Deep Archive is incorrect. Although this is the most cost-effective storage class, it is not appropriate for files that are accessed every week, since retrievals from Deep Archive can take many hours and incur additional retrieval costs.

The option that says: Enable Server-Side Encryption with Customer-Provided Keys (SSE-C) is incorrect because with SSE-C you must supply and manage the encryption keys yourself on every request. There is no automated key rotation and no AWS-managed audit trail of key usage, so it is far less suitable than SSE-KMS for this requirement.

The option that says: Enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) is incorrect because SSE-S3 provides server-side encryption, but it does not primarily offer advanced key management features, such as automated key rotation and detailed audit trails.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html

Check out this Amazon S3 Cheat Sheet:

https://tutorialsdojo.com/amazon-s3/

Question 6

A company is expanding its operations and setting up new teams in different regions around the world. The company is using AWS for its development environment. There’s a strict policy that only approved software can be used when launching EC2 instances.

In addition to enforcing the policy, the company also wants to ensure that the solution is cost-effective, does not significantly increase the launch time of the EC2 instances, and is easy to manage and maintain. The company also wants to ensure that the solution is scalable and can easily accommodate the addition of new software to the approved list or the removal of software from it.

Which of the following solutions would be the most effective considering all the requirements?

  1. Use a portfolio in the AWS Service Catalog that includes EC2 products with the right AMIs, each containing only the approved software. Ensure that developers have access only to this Service Catalog portfolio when they need to launch a product in the software development account.
  2. Set up an Amazon EventBridge rule that triggers whenever any EC2 RunInstances API event occurs in the software development account. Specify AWS Systems Manager Run Command as a target of the rule. Configure Run Command to run a script that installs all approved software onto the instances that the developers launch.
  3. Use AWS Systems Manager State Manager to create an association that specifies the approved software. The association will automatically install the software when an EC2 instance is launched.
  4. Use AWS Config to monitor the EC2 instances and send alerts when unapproved software is detected. The alerts can then be used to manually remove the software.

Correct Answer: 1

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage deployed IT services and your applications, resources, and metadata.

With AWS Service Catalog, you define your own catalog of AWS services and AWS Marketplace software and make them available for your organization. Then, end users can quickly discover and deploy IT services using a self-service portal.

Hence, the correct answer is: Use a portfolio in the AWS Service Catalog that includes EC2 products with the right AMIs, each containing only the approved software. Ensure that developers have access only to this Service Catalog portfolio when they need to launch a product in the software development account. By using the AWS Service Catalog, you can organize your existing service offerings into portfolios, each of which can be assigned to specific AWS accounts or IAM users. This allows you to centrally manage commonly deployed IT services and helps you achieve consistent governance and meet your compliance requirements while enabling users to quickly deploy only the approved IT services they need.
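A rough boto3 sketch of how such a portfolio could be wired up is shown below; all names, the account ID, the role ARN, and the template URL are assumptions used purely for illustration:

<pre>
import boto3

sc = boto3.client("servicecatalog")

# Hypothetical names, role ARN, and template URL for illustration only.
portfolio = sc.create_portfolio(
    DisplayName="Approved-Developer-EC2",
    ProviderName="Cloud Platform Team",
)
portfolio_id = portfolio["PortfolioDetail"]["Id"]

product = sc.create_product(
    Name="Hardened-EC2-Instance",
    Owner="Cloud Platform Team",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        # Template launches EC2 from an AMI that contains only approved software.
        "Info": {"LoadTemplateFromURL": "https://example-bucket.s3.amazonaws.com/approved-ec2.yaml"},
    },
)
product_id = product["ProductViewDetail"]["ProductViewSummary"]["ProductId"]

sc.associate_product_with_portfolio(ProductId=product_id, PortfolioId=portfolio_id)

# Only principals associated with the portfolio can launch its products.
sc.associate_principal_with_portfolio(
    PortfolioId=portfolio_id,
    PrincipalARN="arn:aws:iam::123456789012:role/DeveloperRole",
    PrincipalType="IAM",
)
</pre>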

The option that says: Set up an Amazon EventBridge rule that triggers whenever any EC2 RunInstances API event occurs in the software development account. Specify AWS Systems Manager Run Command as a target of the rule. Configure Run Command to run a script that installs all approved software onto the instances that the developers launch is incorrect. While Amazon EventBridge and AWS Systems Manager Run Command can be used to automate the installation of approved software onto the instances that the developers launch, this solution does not prevent developers from installing unapproved software. Moreover, it could potentially increase the launch time of the EC2 instances, and any changes to the approved software list would require updating the script used by Run Command, which could be time-consuming and error-prone.

The option that says: Use AWS Systems Manager State Manager to create an association that specifies the approved software. The association will automatically install the software when an EC2 instance is launched is incorrect. AWS Systems Manager State Manager can automate the process of keeping your EC2 instances in a desired state (for example, installing software), but it does not prevent developers from installing unapproved software. Moreover, any changes to the approved software list would require updating the State Manager association, which could be time-consuming and error-prone.

The option that says: Use AWS Config to monitor the EC2 instances and send alerts when unapproved software is detected. The alerts can then be used to manually remove the software is incorrect. AWS Config can monitor the configurations of your AWS resources, but it does not have the capability to detect or remove unapproved software on EC2 instances. Moreover, this solution involves manual intervention (removing the unapproved software when an alert is received), which is not ideal for scalability and ease of management.

 

References:

https://aws.amazon.com/servicecatalog/faqs/
https://docs.aws.amazon.com/servicecatalog/

Check out this AWS Service Catalog Cheat Sheet:

https://tutorialsdojo.com/aws-service-catalog/

Question 7

After migrating the DNS records of a domain to Route 53, a company configured logging of public DNS queries. After a week, the company realized that log data were accumulating quickly. The company is worried that this might incur high storage fees in the long run, so they wanted logs older than 1 month to be deleted.

Which action will resolve the problem most cost-effectively?

  1. Configure a retention policy in CloudWatch Logs to delete logs older than 1 month.
  2. Change the destination of the DNS query logs to S3 Glacier Deep Archive.
  3. Configure CloudWatch Logs to export log data to an S3 bucket. Set an S3 lifecycle policy that deletes objects older than 1 month.
  4. Create a scheduled job using a Lambda function to export logs from CloudWatch Logs to an S3 bucket. Set an S3 lifecycle policy that deletes objects older than 1 month.

Correct Answer: 1

Amazon Route 53 sends query logs directly to CloudWatch Logs; the logs are never accessible through Route 53. Instead, you use CloudWatch Logs to view logs in near real-time, search and filter data, and export logs to Amazon S3.

By default, CloudWatch Logs stores query logs indefinitely, which could potentially lead to uncontrolled increases in the cost of storing logs. By setting a retention policy in CloudWatch Logs, you can ensure that log data is only stored for a specific period of time and that it is automatically deleted when it reaches the end of that period. This can help you control storage costs and manage your log data more effectively.

Hence, the correct answer is: Configure a retention policy in CloudWatch Logs to delete logs older than 1 month.
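A minimal boto3 sketch of this retention setting is shown below; the log group name is an assumption (Route 53 query-logging groups typically follow the /aws/route53/&lt;hosted-zone-name&gt; pattern):

<pre>
import boto3

logs = boto3.client("logs")

# Hypothetical log group name for illustration only.
logs.put_retention_policy(
    logGroupName="/aws/route53/example.com",
    retentionInDays=30,  # log events older than ~1 month are deleted automatically
)
</pre>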

The option that says: Change the destination of the DNS query logs to S3 Glacier Deep Archive is incorrect. This is not possible since Route 53 sends query logs to CloudWatch Logs only.

The option that says: Configure CloudWatch Logs to export log data to an S3 bucket. Set an S3 lifecycle policy that deletes objects older than 1 month is incorrect. This is unnecessary since logs older than 1 month can be deleted by the CloudWatch Logs retention policy alone. Exporting logs to S3 is usually done when you want to retain log data at a lower storage cost; however, you can no longer use CloudWatch Logs tools (such as CloudWatch Logs Insights) to analyze logs once they are stored in S3.

The option that says: Create a scheduled job using a Lambda function to export logs from CloudWatch Logs to an S3 bucket. Set an S3 lifecycle policy that deletes objects older than 1 month is incorrect. Although this could work, this option involves unnecessary steps that the CloudWatch Logs retention policy could simplify.

References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/query-logs.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3Export.html 
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

Check out this Amazon CloudWatch cheat sheet:
https://tutorialsdojo.com/amazon-cloudwatch/

Question 8

A data security company is experimenting on various security features that they can implement on their Elastic Load Balancers such as Server Order Preference, Predefined Security Policy, Perfect Forward Secrecy, and many others. The company is planning to use the Perfect Forward Secrecy feature to provide additional safeguards to their architecture against the eavesdropping of encrypted data through the use of a unique random session key. This feature also prevents the decoding of captured data, even if the secret long-term key is compromised.

Which AWS services can offer SSL/TLS cipher suites for Perfect Forward Secrecy?

  1. Amazon EC2 and Amazon S3
  2. AWS CloudTrail and Amazon CloudWatch
  3. Amazon CloudFront and Elastic Load Balancers
  4. Amazon API Gateway and AWS Lambda

Correct Answer: 3

Elastic Load Balancing uses a Secure Sockets Layer (SSL) negotiation configuration, known as a security policy, to negotiate SSL connections between a client and the load balancer. A security policy is a combination of SSL protocols, SSL ciphers, and the Server Order Preference option.

Perfect Forward Secrecy is a feature that provides additional safeguards against the eavesdropping of encrypted data through the use of a unique random session key. This prevents the decoding of captured data, even if the secret long-term key is compromised.

CloudFront and Elastic Load Balancing are the two AWS services that support Perfect Forward Secrecy.

Hence, the correct answer is: Amazon CloudFront and Elastic Load Balancers.
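As an illustrative aside (not taken from the original explanation), you can inspect which predefined Elastic Load Balancing security policies include ECDHE cipher suites, the ephemeral key exchange that provides Perfect Forward Secrecy, using boto3:

<pre>
import boto3

elbv2 = boto3.client("elbv2")

# List predefined security policies and print those offering ECDHE-based
# suites (ephemeral keys, i.e., Perfect Forward Secrecy).
for policy in elbv2.describe_ssl_policies()["SslPolicies"]:
    cipher_names = [c["Name"] for c in policy["Ciphers"]]
    if any(name.startswith("ECDHE") for name in cipher_names):
        print(policy["Name"], cipher_names)
</pre>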

The options that say: Amazon EC2 and Amazon S3, AWS CloudTrail and Amazon CloudWatch, and Amazon API Gateway and AWS Lambda are all incorrect since these services do not offer SSL/TLS cipher suites for Perfect Forward Secrecy. SSL/TLS is commonly used when you have sensitive data traveling through the public network.

References:
https://aws.amazon.com/about-aws/whats-new/2014/02/19/elastic-load-balancing-perfect-forward-secrecy-and-more-new-security-features/
https://d1.awsstatic.com/whitepapers/Security/Secure_content_delivery_with_CloudFront_whitepaper.pdf

Check out these AWS Elastic Load Balancing (ELB) and Amazon CloudFront Cheat Sheets:
https://tutorialsdojo.com/aws-elastic-load-balancing-elb/
https://tutorialsdojo.com/amazon-cloudfront/

Question 9

A Security Engineer sent a ping command from her laptop, with an IP address of 112.237.99.166, to an EC2 instance that has a private IP address of 172.31.17.140. However, the response ping is dropped and does not reach her laptop. To troubleshoot the issue, the Engineer checked the flow logs of the VPC and saw the entries shown below.

2 123456789010 eni-1235b8ca 112.237.99.166 172.31.17.140 0 0 1 4 336 1432917027 1432917142 ACCEPT OK
2 123456789010 eni-1235b8ca 172.31.17.140 112.237.99.166 0 0 1 4 336 1432917094 1432917142 REJECT OK

What is the MOST likely root cause of this issue?

  1. The security group has an inbound rule that allows ICMP traffic but does not have an outbound rule to explicitly allow outgoing ICMP traffic.
  2. The network ACL permits inbound ICMP traffic but does not permit outbound ICMP traffic.
  3. The security group’s inbound rules do not allow ICMP traffic.
  4. The Network ACL does not permit inbound ICMP traffic.

Correct Answer: 2

If you’re using flow logs to diagnose overly restrictive or permissive security group rules or network ACL rules, then be aware of the statefulness of these resources. Security groups are stateful — this means that responses to allowed traffic are also allowed, even if the rules in your security group do not permit it. Conversely, network ACLs are stateless, therefore, responses to allowed traffic are subject to network ACL rules. 

For example, you use the ping command from your home computer (IP address is 203.0.113.12) to your instance (the network interface’s private IP address is 172.31.16.139). Your security group’s inbound rules allow ICMP traffic and the outbound rules do not allow ICMP traffic; however, because security groups are stateful, the response ping from your instance is allowed. Your network ACL permits inbound ICMP traffic but does not permit outbound ICMP traffic. Because network ACLs are stateless, the response ping is dropped and does not reach your home computer.

In a flow log, this is displayed as two flow log records:

– An ACCEPT record for the originating ping that was allowed by both the network ACL and the security group, and therefore was allowed to reach your instance.

2 123456789010 eni-1235b8ca 203.0.113.12 172.31.16.139 0 0 1 4 336 1432917027 1432917142 ACCEPT OK

– A REJECT record for the response ping that the network ACL denied.

2 123456789010 eni-1235b8ca 172.31.16.139 203.0.113.12 0 0 1 4 336 1432917094 1432917142 REJECT OK

A flow log record is a space-separated string that has the following format:

<version> <account-id> <interface-id> <srcaddr> <dstaddr> <srcport> <dstport> <protocol> <packets> <bytes> <start> <end> <action> <log-status>
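Purely as an illustrative aid (not part of the original explanation), the space-separated format above can be split into named fields, which makes it easy to see that the second record is a REJECT for traffic leaving the instance:

<pre>
# Field names follow the version-2 flow log record format described above.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

record = ("2 123456789010 eni-1235b8ca 172.31.17.140 112.237.99.166 "
          "0 0 1 4 336 1432917094 1432917142 REJECT OK")

parsed = dict(zip(FIELDS, record.split()))
print(parsed["srcaddr"], "->", parsed["dstaddr"], parsed["action"])
# 172.31.17.140 -> 112.237.99.166 REJECT  (the response ping was denied)
</pre>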

In this scenario, the ping from the Engineer's laptop to the EC2 instance failed, and two VPC Flow Log records are provided. The first record captures the traffic flowing from the laptop to the EC2 instance, and the second captures the traffic flowing back from the EC2 instance to the laptop.

The first record is an ACCEPT and the second is a REJECT, which means that the incoming traffic successfully reached the EC2 instance while the response, or the outgoing traffic, was rejected by either the security group or the network ACL. Because security groups are stateful and automatically allow responses to permitted inbound traffic, the rejection must have come from the stateless network ACL.

Hence, the correct answer is: The network ACL permits inbound ICMP traffic but does not permit outbound ICMP traffic.
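A minimal sketch of the remediation is shown below; the network ACL ID and rule number are assumptions. It adds an outbound rule that allows ICMP so the echo reply can leave the subnet:

<pre>
import boto3

ec2 = boto3.client("ec2")

# Hypothetical network ACL ID and rule number for illustration only.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=120,
    Protocol="1",                 # protocol 1 = ICMP
    RuleAction="allow",
    Egress=True,                  # outbound rule
    CidrBlock="0.0.0.0/0",
    IcmpTypeCode={"Type": -1, "Code": -1},  # all ICMP types and codes
)
</pre>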

The option that says: The security group has an inbound rule that allows ICMP traffic but does not have an outbound rule to explicitly allow outgoing ICMP traffic is incorrect because security groups are stateful. Hence, the response ping from your EC2 instance will still be allowed without explicitly allowing outgoing ICMP traffic.

The options that say: The security group’s inbound rules do not allow ICMP traffic and The network ACL does not permit inbound ICMP traffic are both incorrect because the first flow log record clearly shows that the incoming traffic was successfully accepted by the EC2 instance, which means the issue lies with the outgoing traffic. The second flow log record shows that the response, or the outgoing traffic, was rejected by either the security group or the network ACL.

References: 
https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html#flow-log-records
https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-records-examples.html

Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/

Security Group vs Network ACL:
https://tutorialsdojo.com/security-group-vs-nacl/

 

Question 10

A newly hired Security Analyst is assigned to manage the existing CloudFormation templates of the company. The Analyst opened the templates and analyzed the configured IAM policy for an S3 bucket as shown below:

<pre>
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::team-palawan/*"
    }
  ]
}
</pre>

What does this IAM policy allow? (Select THREE.)

  1. Allows reading objects from all S3 buckets owned by the account.
  2. Allows writing objects into the team-palawan S3 bucket.
  3. Allows changing access rights for the team-palawan S3 bucket.
  4. Allows reading objects in the team-palawan S3 bucket but not allowed to list the objects in the bucket.
  5. Allows reading objects from the team-palawan S3 bucket.
  6. Allows reading and deleting objects from the team-palawan S3 bucket.

Correct Answer: 1,2,5

The first statement in the policy allows all Get actions (e.g., s3:GetObject, s3:GetObjectVersion) and all List actions (e.g., s3:ListBucket, s3:ListAllMyBuckets) on any S3 bucket and object. The second statement explicitly allows uploading any object to the team-palawan bucket.

Hence, the correct answers are:

– Allows reading objects from all S3 buckets owned by the account.

– Allows writing objects into the team-palawan S3 bucket.

– Allows reading objects from the team-palawan S3 bucket.
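As an optional cross-check (not part of the original explanation), the IAM policy simulator API can evaluate this policy against specific actions; the object key below is a hypothetical example:

<pre>
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:Get*", "s3:List*"], "Resource": "*"},
        {"Effect": "Allow", "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::team-palawan/*"},
    ],
}

results = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy)],
    ActionNames=["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
    ResourceArns=["arn:aws:s3:::team-palawan/report.csv"],  # hypothetical object
)

for result in results["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])
# Expected: GetObject allowed, PutObject allowed, DeleteObject implicitDeny
</pre>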

The option that says: Allows changing access rights for the team-palawan S3 bucket is incorrect because the policy does not have any statement that allows changing access rights in the bucket.

The option that says: Allows reading objects in the team-palawan S3 bucket but not allowed to list the objects in the bucket is incorrect. s3:List* matches every S3 action that starts with "List", including s3:ListBucket, which is the permission required to list the objects in a bucket. Hence, listing objects in any bucket is allowed.

The option that says: Allows reading and deleting objects from the team-palawan S3 bucket is incorrect. Although you can read objects from the bucket, you cannot delete any objects.

References:
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectOps.html
https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-your-amazon-s3-data/

Check out this Amazon S3 Cheat Sheet:
https://tutorialsdojo.com/amazon-s3/

For more practice questions like these and to further prepare you for the actual AWS Certified Security Specialty SCS-C02 exam, we recommend that you take our top-notch AWS Certified Security Specialty Practice Exams, which have been regarded as the best in the market. 

Also, check out our AWS Certified Security Specialty SCS-C02 exam study guide here.


The post AWS Certified Security Specialty SCS-C02 Sample Exam Questions appeared first on Tutorials Dojo.
