AWS Certified Solutions Architect Professional Exam Guide Study Path SAP-C02

By Adrian Formaran | Tutorials Dojo | Published 22 Jul 2022

Bookmarks: Study Materials · AWS Services to Focus On · Common Exam Scenarios · Validate Your Knowledge · Final Notes

The AWS Certified Solutions Architect Professional Exam SAP-C02 Overview

A few years ago, before you could take the AWS Certified Solutions Architect Professional exam (or SA Pro for short), you first had to pass the associate-level exam of this track. This requirement ensured that you had sufficient knowledge and understanding of architecting on AWS before tackling the more difficult certification. In October 2018, AWS removed this rule, so there are no longer any prerequisites for taking the Professional-level exams. You now have the freedom to pursue this certification directly if you wish to.

This certification is truly a leveled-up version of the AWS Solutions Architect Associate certification. It examines your capability to create well-architected solutions in AWS, but on a grander scale and with more difficult requirements. Because of this, we recommend that you go through our exam preparation guides for the AWS Certified Solutions Architect Associate and even the AWS Certified Cloud Practitioner if you have not done so yet. They contain review materials that will be crucial for passing the exam.

AWS Certified Solutions Architect Professional SAP-C02 Study Materials

The FREE AWS Exam Readiness course, official AWS sample questions, whitepapers, FAQs, AWS documentation, re:Invent videos, forums, labs, AWS cheat sheets, AWS practice exams, and personal experience are what you will need to pass the exam. Since the SA Pro is one of the most difficult AWS certification exams out there, you have to prepare yourself with every study material you can get your hands on. To learn more details regarding your exam, go through the official AWS SAP-C02 Exam Guide, as it discusses the various domains you will be tested on.

AWS has a digital course called Exam Readiness: AWS Certified Solutions Architect – Professional, which is a short video lecture that discusses what to expect on the AWS Certified Solutions Architect – Professional exam. It should sufficiently provide an overview of the different concepts and practices that you’ll need to know about. Each topic in the course will also contain a short quiz right after you finish its lecture to help you lock in the important information.

Exam Readiness AWS Certified Solutions Architect Professional

For whitepapers, aside from the ones listed down in our Solutions Architect Associate and Cloud Practitioner exam guide, you should also study the following:

  1. Securing Data at Rest with Encryption
  2. Web Application Hosting in the AWS Cloud
  3. Migrating AWS Resources to a New Region
  4. Practicing Continuous Integration and Continuous Delivery on AWS: Accelerating Software Delivery with DevOps
  5. Microservices on AWS
  6. AWS Security Best Practices
  7. AWS Well-Architected Framework
  8. Security Pillar – AWS Well-Architected Framework
  9. Using Amazon Web Services for Disaster Recovery
  10. AWS Architecture Center architecture whitepapers

The instructor-led course “Advanced Architecting on AWS” should also provide additional information on how to implement the concepts and best practices that you have learned from whitepapers and other forms of documentation. Be sure to check it out.

Your AWS exam could also include a lot of migration scenarios. Visit this AWS documentation to learn about the different ways of performing cloud migration.

Also check out this article: Top 5 FREE AWS Review Materials.

 

AWS Services to Focus On For the AWS Certified Solutions Architect Professional SAP-C02 Exam

Generally, as a soon-to-be AWS Certified SA Pro, you should have a thorough understanding of every service and feature in AWS. But for the purpose of this review, give more attention to the following services since they are common topics in the SA Pro exams:

  1. AWS Organizations
    • Know how to create organizational units (OUs), service control policies (SCPs), and any additional parameters in AWS Organizations. 
    • There might be scenarios where the master account needs access to member accounts. Your options can include setting up OUs and SCPs, delegating an IAM role, or providing cross-account access.
    • Differentiate SCP from IAM policies. 
    • You should also know how to integrate AWS Organizations with other services such as CloudFormation, Service Catalog, and IAM to manage resources and user access. 
    • Lastly, read how you can save on costs by enabling consolidated billing in your organizations, and what would be the benefits of enabling all features.
  2. AWS Application Migration Service
    • Study the different ways to migrate on-premises servers to the AWS Cloud.
    • Also, study how you can perform the migration in a secure and reliable manner.
    • You should be aware of what types of source servers AWS Application Migration Service can migrate for you (e.g. VMs) and what the output of the migration process is.
  3. AWS Database Migration Service + Schema Conversion Tool 
    • Aside from server and application migration, you should also know how you can move on-premises databases to AWS — not just to RDS, but to other services such as Aurora and Redshift as well. 
    • Read over what schemas can be converted by SCT.
  4. AWS Serverless Application Model
    • The AWS SAM has a syntax of its own. Study the syntax and how AWS SAM is used to deploy serverless applications through code.
    • Know the relationship between SAM and CloudFormation. Hint: You can use these two together.
  5. AWS Systems Manager 
    • Study the different features under Systems Manager and how each feature can automate EC2-related processes. Patch Manager and Maintenance Windows are often used together to perform automated patching. This combination allows for easier setup and better control over patch baselines than a cron job within an EC2 instance or CloudWatch Events.
    • It is also important to know how you can troubleshoot EC2 issues using Systems Manager. 
    • Parameter Store allows you to securely store strings in AWS, which can be retrieved anywhere in your environment. You can use this service instead of AWS Secrets Manager if you don’t need to rotate your secrets.
  6. AWS CI/CD – Study the different CI/CD tools in AWS, from function to features to implementation. It would be very helpful if you can create your own CI/CD pipeline as well using the services below.
    1. CodeCommit
    2. CodeBuild
    3. CodeDeploy
    4. CodePipeline
  7. AWS Service Catalog 
    • This service is also part of the automation toolkit in AWS. Study how you can create and manage portfolios of approved services in the Service Catalog, and how you can integrate these with other technologies such as AWS Organizations.
    • You can enforce tagging on services using Service Catalog. This way, users can only launch resources that have the tags you defined.
    • Know when Service Catalog is a better option for resource control than AWS CloudFormation. A good example is when you want to create a customized portfolio for each type of user in an organization and selectively grant access to the appropriate portfolio.
  8. AWS Direct Connect (DX)
    • You should have a deep understanding of this service. Questions commonly include Direct Connect Gateway, public and private VIFs, and LAGs.
    • Direct Connect is commonly used for connecting on-premises networks to AWS, but it can also be used to connect different AWS Regions to a central datacenter. For these kinds of scenarios, take note of the benefits of Direct Connect such as dedicated bandwidth, network security, multi-Region and multi-VPC connection support.
    • Direct Connect is also used along with a failover connection, such as a secondary DX line or IPsec VPN. The correct answer will depend on specific requirements like cost, speed, ease of management, etc. 
    • Another combination that can be used to link different VPCs is Transit Gateway + DX.
  9. AWS CloudFormation – Your AWS exam might include a lot of scenarios that involve CloudFormation, so take note of the following:
    • You can use CloudFormation to enforce tagging by requiring users to only use resources that CloudFormation launched.
    • CloudFormation can be used for managing resources across different AWS accounts in an Organization using StackSets.
    • CloudFormation is often compared to AWS Service Catalog and AWS SAM. The way to approach this in the exam is to know what features are supported by CloudFormation that cannot be performed in a similar fashion with Service Catalog or SAM.
  10. Amazon VPC (in depth)
    1. Know the ins and outs of NAT Gateways and NAT instances, such as supported IP protocols, which types of packets are dropped in a cut connection, etc.
    2. Study about transit gateway and how it can be used together with Direct Connect.
    3. Remember longest prefix routing.
    4. Compare VPC peering to other options such as Site to Site VPN. Know what components are in use: Customer gateway, Virtual Private Gateway, etc.
  11. Amazon ECS
    1. Differentiate task role from task execution role.
    2. Compare the ECS EC2 launch type with the Fargate serverless model.
    3. Study how to link together ECS and ECR with CI/CD tools to automate deployment of updates to your Docker containers.
  12. Elastic Load Balancer (in-depth)
    1. Differentiate the internet protocols used by each type of ELB for listeners and target groups: HTTP, HTTPS, TCP, UDP, TLS.
    2. Know how you can configure load balancers to forward client IP to target instances.
    3. Know how you can secure your ELB traffic through the use of SSL and WAF. SSL can be offloaded to either the ELB or CloudHSM.
  13. Elastic Beanstalk 
    1. Study the different deployment options for Elastic Beanstalk. 
    2. Know the steps in performing a blue/green deployment.
    3. Know how you can use traffic splitting deployment to perform canary testing.
    4. Compare Elastic Beanstalk’s deployment options to CodeDeploy.
  14. WAF and Shield
    1. Know at which network layers WAF and Shield operate.
    2. Differentiate security capabilities of WAF and Shield Advanced, especially with regard to DDoS protection. A great way to determine which one to use is to look at the services that need the protection and if cost is a factor. You may also visit this AWS documentation for additional details.
  15. Amazon WorkSpaces vs Amazon AppStream 2.0
    1. WorkSpaces is best for virtual desktop environments. You can use it to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.
    2. AppStream 2.0 is best for standalone desktop applications. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer.
  16. Amazon WorkDocs – It is important to determine what features make WorkDocs unique compared to using S3 and EFS. Choose this service if you need secure document storage where you can collaborate in real time with others and manage access to the documents.
  17. ElastiCache vs DAX vs Aurora Read Replicas
    1. Know your caching options especially when it comes to databases.
    2. If there is a feature that is readily integrated with the database, it would be better to use that integrated feature instead for less overhead.
  18. Snowball Edge vs Direct Connect vs S3 Acceleration – These three services are heavily used for data migration purposes. Read the exam scenario properly to determine which service is best used. Factors in choosing the correct answer are cost, time allotted for the migration, and how much data is needed to be transported.
  19. Using Resource Tags with IAM – Study how you can use resource tags to manage access via IAM policies.

We also recommend checking out Tutorials Dojo’s AWS Cheat Sheets, which provide a summarized but highly informative set of notes and tips for your review of these services. These cheat sheets are presented mostly in bullet points, which will help you retain the knowledge much better than reading lengthy FAQs. 

We expect that you already have vast knowledge of the AWS services that a Solutions Architect commonly uses, such as those listed in our SA Associate review guide. It is also not enough to just know a service and its features. You should also have a good understanding of how to integrate these services with one another to build large-scale infrastructures and applications. This is why hands-on experience managing and operating systems on AWS is generally recommended.

 

Common Exam Scenarios For SAP-C02

You have objects in an S3 bucket that have different retrieval frequencies. To optimize cost and retrieval times, what change should you make?

S3 has a storage class called “S3 Intelligent-Tiering”. S3 Intelligent-Tiering moves data between two access tiers — frequent access and infrequent access — when access patterns change, and it is ideal for data with unknown or changing access patterns. What makes it relatively cost-effective is that there are no retrieval fees in S3 Intelligent-Tiering, unlike in the S3 IA storage class.
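A rule like this is usually applied as an S3 lifecycle configuration. Below is a minimal sketch of such a configuration as a plain Python dict (the shape mirrors the S3 lifecycle API); the rule ID is illustrative, and the empty prefix is an assumption meaning "the whole bucket".

```python
# Hedged sketch of an S3 lifecycle configuration that transitions all objects
# into S3 Intelligent-Tiering immediately. Rule ID and prefix are illustrative.
intelligent_tiering_config = {
    "Rules": [
        {
            "ID": "move-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = apply to the whole bucket
            "Transitions": [
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
            ],
        }
    ]
}
```

A document of this shape could then be passed to the S3 `PutBucketLifecycleConfiguration` API (e.g. via boto3) to enable the tiering behavior described above.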


SAP-C02 Exam Domain 1: Design Solutions for Organizational Complexity

You are managing multiple accounts and you need to ensure that each account can only use resources that it is supposed to. What is a simple and reusable method of doing so?

AWS Organizations is a given here. It simplifies a lot of the account management and controls that you would use for this scenario. For resource control, you may use AWS CloudFormation StackSets to define a specific stack and limit your developers to the created resources. You may also use AWS Service Catalog if you'd like to define specific product configurations or CloudFormation stacks and give your developers freedom to deploy them. For permission controls, a combination of IAM policies and SCPs should suffice.

You are creating a CloudFormation stack and uploading it to AWS Service Catalog so you may share this stack with other AWS accounts in your organization. How can your end-users access the product/portfolio while still granting the least privilege?

Your end-users require appropriate IAM permissions to access AWS Service Catalog and launch a CloudFormation stack. The AWSServiceCatalogEndUserFullAccess and AWSServiceCatalogEndUserReadOnlyAccess policies grant access to the AWS Service Catalog end-user console view. When a user who has either of these policies chooses AWS Service Catalog in the AWS Management Console, the end-user console view displays the products that they have permission to launch. You should also grant the user permission to pass an IAM role to CloudFormation so that the CloudFormation stack can launch the necessary resources.

How can you provide access to users in a different account to resources in your account?

Use cross-account IAM roles and attach the permissions necessary to access your resources. Have the users in the other account reference this IAM role.

How do you share or link two networks together? (VPCs, VPNs, routes, etc) What if you have restrictions on your traffic e.g. it cannot traverse through the public Internet?

Sharing or linking networks is a common theme in very large organizations, and it helps ensure that your networks consistently adhere to best practices. For VPCs, you can use VPC sharing, VPC peering, or Transit Gateway. For VPNs, you can utilize Site-to-Site VPN for cross-region or cross-account connections. For strict network compliance, you can access some of your AWS resources privately through shared VPC endpoints. This way, your traffic does not need to traverse the public Internet. More information on that can be found in this article.

You have multiple accounts under AWS Organizations. Previously, each account can purchase their own RIs, but now, they have to request it from one central account for procurement. What steps should be done to satisfy this requirement in the most secure way?

Ensure that all AWS accounts are part of an AWS Organizations structure operating in all features mode. Then create an SCP that contains a deny rule to the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions. Attach the SCP to each organizational unit (OU) of the AWS Organizations’ structure.
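The SCP described above can be sketched as a plain JSON document. The two EC2 actions come straight from the scenario; the Sid and variable name are illustrative.

```python
import json

# Hedged sketch of the deny SCP from the scenario above. The two actions are
# from the scenario; "DenyRIPurchases" is just an illustrative Sid.
deny_ri_purchase_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRIPurchases",
            "Effect": "Deny",
            "Action": [
                "ec2:PurchaseReservedInstancesOffering",
                "ec2:ModifyReservedInstances",
            ],
            "Resource": "*",
        }
    ],
}

# Serialized form, as you would paste it into the Organizations console.
scp_json = json.dumps(deny_ri_purchase_scp, indent=2)
```

Attaching this SCP to each OU (except the one containing the central procurement account) blocks member accounts from purchasing or modifying Reserved Instances themselves.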

Can you connect multiple VPCs that belong to different AWS accounts and have overlapping CIDRs? If so, how can you manage your route tables so that the correct traffic is routed to the correct VPC?

You can connect multiple VPCs together even if they have overlapping CIDRs. What is important is that you are aware of how routing works in AWS. AWS uses longest prefix matching to determine where traffic is delivered to. So to make sure that your traffic is routed properly, be as specific as possible with your routes.
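Longest-prefix matching can be demonstrated in a few lines of Python. The route targets below are hypothetical, but the selection logic mirrors how AWS route tables resolve overlapping routes: the most specific matching CIDR wins.

```python
import ipaddress

def pick_route(destination, route_table):
    """Return the target of the most specific (longest-prefix) matching route.

    route_table maps CIDR strings to target names, mimicking an AWS route table.
    """
    dest = ipaddress.ip_address(destination)
    best, best_len = None, -1
    for cidr, target in route_table.items():
        net = ipaddress.ip_network(cidr)
        if dest in net and net.prefixlen > best_len:
            best, best_len = target, net.prefixlen
    return best

# Hypothetical overlapping routes: a broad peering route and a more
# specific Transit Gateway route carved out of the same space.
routes = {
    "10.0.0.0/16": "pcx-vpc-a",
    "10.0.5.0/24": "tgw-main",
}
```

Here `pick_route("10.0.5.10", routes)` resolves to the `/24` route even though the `/16` also matches, which is exactly why being as specific as possible with your routes keeps traffic going where you intend.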

Members of a department will need access to your AWS Management Console. Without having to create IAM Users for each member, how can you provide long-term access?

You can use your on-premises SAML 2.0-compliant identity provider to grant your members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint. This will provide them long-term access to the console as long as they can authenticate with the IdP.

Is it possible for one account to monitor all API actions executed by each member account in an AWS Organization? If so, how does it work?

You can configure AWS CloudTrail to create a trail that will log all events for all AWS accounts in that organization. When you create an organization trail, a trail with the name that you give it will be created in every AWS account that belongs to your organization. Users with CloudTrail permissions in member accounts will be able to see this trail. However, users in member accounts will not have sufficient permissions to delete the organization trail, turn logging on or off, change what types of events are logged, or otherwise alter the organization trail in any way.

When you create an organization trail in the console, or when you enable CloudTrail as a trusted service in Organizations, a service-linked role is created to perform logging tasks in your organization’s member accounts. This role is named AWSServiceRoleForCloudTrail and is required for CloudTrail to successfully log events for an organization. Log files created prior to an account’s removal from the organization will still remain in the Amazon S3 bucket where log files are stored for the trail.

You have 50 accounts joined to your AWS Organizations and you will require a central, internal DNS solution to help reduce the network complexity. Each account has its own VPC that will rely on the private DNS solution for resolving different AWS resources (servers, databases, AD domains, etc). What is the least complex network architecture that you can create?

Create a shared services VPC in your central account, and connect the other VPCs to yours using VPC peering or AWS Transit Gateway. Set up a private hosted zone in Amazon Route 53 on your shared services VPC and add in the necessary domains/subdomains. Associate the rest of the VPCs to this private hosted zone.

How can you easily deploy a basic infrastructure to different AWS regions while at the same time allowing your developers to optimize (but not delete) the launched infrastructures?

Use CloudFormation StackSets to deploy your infrastructure to different regions. Deploy the stack in an administrator account. Create an IAM role that developers can assume so they can optimize the infrastructure. Make sure that the IAM role has a policy that denies deletion of CloudFormation-launched resources.
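A guardrail policy of this kind might look like the sketch below: developers can update stacks, but an explicit Deny blocks stack deletion. The action lists are illustrative and would be scoped to specific stack ARNs in a real account.

```python
# Hedged sketch of the developer guardrail policy described above.
# Actions and the wildcard Resource are illustrative; scope them down in practice.
developer_guardrail_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["cloudformation:UpdateStack", "cloudformation:DescribeStacks"],
            "Resource": "*",
        },
        {
            # Explicit Deny always wins over any Allow, so developers
            # cannot delete the launched infrastructure.
            "Effect": "Deny",
            "Action": ["cloudformation:DeleteStack"],
            "Resource": "*",
        },
    ],
}
```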

You have multiple VPCs in your organization that are using the same Direct Connect line to connect back to your corporate datacenter. This setup does not account for line failure which will affect the business greatly if something were to happen to the network. How do you make the network more highly available? What if the VPCs span multiple regions?

Utilize Site-To-Site VPN between the VPCs and your datacenter and terminate the VPN tunnel at a virtual private gateway. Setup BGP routing.

An alternative solution is to provision another Direct Connect line in another location if you require constant network performance, at the expense of additional cost. If the VPCs span multiple regions, you can use a Direct Connect Gateway.

SAP-C02 Exam Domain 2: Design for New Solutions

You have production instances running in the same account as your dev environment. Your developers occasionally mistakenly stop/terminate production instances. How can you prevent this from happening?

You can leverage resource tagging and create an explicit deny IAM policy that would prevent developers from stopping or terminating instances tagged under production.
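A minimal sketch of such a policy is shown below; the `Environment=production` tag key and value are illustrative, while `ec2:ResourceTag` is the condition key AWS uses for tag-based EC2 authorization.

```python
# Hedged sketch of an explicit-deny policy keyed off a resource tag.
# The tag key/value ("Environment": "production") are assumed examples.
deny_prod_stop_terminate = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Environment": "production"}
            },
        }
    ],
}
```

Attached to the developers' IAM group or role, this leaves their access to dev instances untouched while the explicit Deny protects anything tagged as production.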

If you have documents that need to be collaborated upon, and you also need strict access controls over who gets to view and edit these documents, what service should you use?

AWS has a suite of services similar to Microsoft Office or G Suite, and one of those services is Amazon WorkDocs. Amazon WorkDocs is a fully managed, secure content creation, storage, and collaboration service.

How can you quickly scale your applications in AWS while keeping costs low?

While EC2 instances are a perfectly fine compute option, they tend to be pricey if they are not right-sized or if capacity consumption fluctuates. If you can, re-architect your applications to use containers or serverless compute options such as ECS, Fargate, Lambda, and API Gateway.

You would like to automate your application deployments and use blue-green deployment to properly test your updates. Code updates are submitted to an S3 bucket you own. You wish to have a consistent environment where you can test your changes. Which services will help you fulfill this scenario?

Create a deployment pipeline using CodePipeline. Use AWS Lambda to invoke the stages in your pipeline. Use AWS CodeBuild to compile your code before sending it to AWS Elastic Beanstalk in a blue environment. Have AWS CodeBuild test the update in the blue environment. Once testing has succeeded, trigger AWS Lambda to swap the URLs between your blue and green Elastic Beanstalk environments. More information here.

Your company only allows the use of pre-approved AMIs for all your teams. However, users should not be prevented from launching unauthorized AMIs as it might affect some of their automation. How can you monitor all EC2 instances launched to make sure they are compliant with your approved AMI list, and ensure that you are informed when someone uses a non-compliant AMI?

Utilize AWS Config to monitor AMI compliance across all AWS accounts. Configure Amazon SNS to notify you when an EC2 instance is launched using an unapproved AMI. You can also use Amazon EventBridge to monitor each RunInstances event. Use it to trigger a Lambda function that will cross-check the launch configuration against your AMI list and send you a notification via SNS if the AMI used was unapproved. This will give you more information, such as who launched the instance.

How can you build a fully automated call center in AWS?

Utilize Amazon Connect, Amazon Lex, Amazon Polly, and AWS Lambda.

You have a large number of video files that are being processed locally by your custom AI application for facial detection and recognition. These video files are kept in a tape library for long term storage. Video metadata and timestamps of detected faces are stored in MongoDB. You decided to use AWS to further enhance your operations, but the migration procedure should have minimal disruption to the existing process. What should be your setup?

Use AWS Storage Gateway's Tape Gateway to store your video files in an Amazon S3 bucket. Start importing the video files to your Tape Gateway after you’ve configured the appliance. Create a Lambda function that will extract the videos from Tape Gateway and forward them to Amazon Rekognition. Use Amazon Rekognition for facial detection and timestamping. Once finished, have Rekognition trigger a Lambda function that will store the resulting information in Amazon DynamoDB.

Is it possible in AWS for you to enlist the help of other people to complete tasks that only humans can do?

Yes, you can submit tasks to Amazon Mechanical Turk and have other people complete them in exchange for a fee.

You have a requirement to enforce HTTPS for all your connections but you would like to offload the SSL/TLS to a separate server to reduce the impact on application performance. Unfortunately, the region you are using does not support AWS ACM. What can be your alternative?

You cannot use ACM in another region for this purpose since ACM is a regional service. Generate your own certificate and upload it to AWS IAM. Associate the imported certificate with an elastic load balancer. More information here.

SAP-C02 Exam Domain 3: Accelerate Workload Migration and Modernization

You are using a database engine on-premises that is not currently supported by RDS. If you wish to bring your database to AWS, how do you migrate it?

AWS has two tools to help you migrate your database workloads to the cloud: AWS Database Migration Service (DMS) and the Schema Conversion Tool (SCT). First, collect information on your source database and have SCT convert your database schema and database code. You may check the supported source engines here. Once the conversion is finished, you can launch an RDS database, apply the converted schema, and use DMS to safely migrate your database.

You have thousands of applications running on premises that need to be migrated to AWS. However, they are too intertwined with each other and may cause issues if the dependencies are not mapped properly. How should you proceed?

Use AWS Application Discovery Service to collect server utilization data and perform dependency mapping. Then send the result to AWS Migration Hub where you can initiate the migration of the discovered servers.

You have to migrate a large amount of data (TBs) over the weekend to an S3 bucket and you are not sure how to proceed. You have a 500Mbps Direct Connect line from your corporate data center to AWS. You also have a 1Gbps Internet connection. What should be your mode of migration?

One might consider using Snow hardware to perform the migration, but the time constraint does not allow you to ship the hardware in time. Your Direct Connect line is only 500Mbps as well. So you should instead enable S3 Transfer Acceleration and dedicate all your available bandwidth for the data transfer.
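The feasibility math behind this answer is simple back-of-envelope arithmetic. The sketch below assumes a hypothetical 5 TB payload and ideal link utilization (no protocol overhead), just to show why the 1 Gbps Internet link with S3 Transfer Acceleration fits in a weekend while the 500 Mbps Direct Connect line takes roughly twice as long.

```python
def transfer_hours(data_bytes, link_bps):
    """Idealized transfer time in hours for a payload over a given link speed."""
    return (data_bytes * 8) / link_bps / 3600

five_tb = 5 * 10**12                              # hypothetical 5 TB payload
over_dx = transfer_hours(five_tb, 500 * 10**6)    # 500 Mbps Direct Connect -> ~22.2 h
over_inet = transfer_hours(five_tb, 10**9)        # 1 Gbps Internet + S3 TA -> ~11.1 h
```

Real transfers run below line rate, so the actual times would be somewhat longer, but the ratio between the two options holds.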

You have a custom-built application that you’d like to migrate to AWS. Currently, you don’t have enough manpower or money to rewrite the application to be cloud-optimized, but you would still like to optimize whatever you can on the application. What should be your migration strategy?

Rehosting is out of the question since there are no optimizations done in a lift-and-shift scenario. Re-architecting is also out of the question since you do not have the budget and manpower for it. You can neither retire nor repurchase since this is a custom production application. So your only option is to re-platform it, for example to utilize scaling and load balancing.

How can you leverage AWS as a cost-effective solution for offsite backups of mission-critical objects that have short RTO and RPO requirements?

For hybrid cloud architectures, you may use AWS Storage Gateway to continuously store file backups onto Amazon S3. Since you have short RTO and RPO, the best storage type to use is File Gateway. File Gateway allows you to mount Amazon S3 onto your server, and by doing so you can quickly retrieve the files you need. Volume Gateway does not work here since you will have to restore entire volumes before you can retrieve your files. Enable versioning on your S3 bucket to maintain old copies of an object. You can then create lifecycle policies in Amazon S3 to achieve even lower costs.

You have hundreds of EC2 Linux servers concurrently accessing files in your local NAS. The communication is kept private by AWS Direct Connect and IPsec VPN. You notice that the NAS is not able to sufficiently serve your EC2 instances, thus leading to huge slowdowns. You consider migrating to an AWS storage service as an alternative. What should be your service and how do you perform the migration?

Since you have hundreds of EC2 servers, the best storage for concurrent access would be Amazon EFS. To migrate your data to EFS, you may use AWS DataSync. Create a VPC endpoint for your EFS so that the data migration is performed quickly and securely over your Direct Connect.

If you have a piece of software (e.g. CRM) that you want to bring to the cloud, and you have an allocated budget but not enough manpower to re-architect it, what is your next best option to make sure the software is still able to take advantage of the cloud?

Check the AWS Marketplace and verify whether there is a similar tool that you can use — the repurchasing strategy.

SAP-C02 Exam Domain 4: Continuous Improvement for Existing Solutions

You have a running EMR cluster that has erratic utilization and task processing takes longer as time goes on. What can you do to keep costs to a minimum?

Add additional task nodes, but use instance fleets with the master node On-Demand and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase Reserved Instances for the master node.

A company has multiple AWS accounts in AWS Organizations with all features enabled. How do you track AWS costs in Organizations and receive an alert if costs from a business unit exceed a specific budget threshold?

Use Cost Explorer to monitor the spending of each account. Create a budget in AWS Budgets for each OU by grouping linked accounts, then configure SNS notification to alert you if the budget has been exceeded.
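The budget-plus-notification setup can be sketched as the request body you would hand to the AWS Budgets `CreateBudget` API. The shape below loosely follows that API; the budget name, dollar amount, account IDs, and SNS topic ARN are all illustrative placeholders.

```python
# Hedged sketch of a CreateBudget-style request. All names, amounts, account
# IDs, and the SNS ARN are illustrative, not real values.
budget_request = {
    "Budget": {
        "BudgetName": "bu-marketing-monthly",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        # Group the OU's linked accounts so the budget covers the business unit.
        "CostFilters": {"LinkedAccount": ["111122223333", "444455556666"]},
    },
    "NotificationsWithSubscribers": [
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,           # percent of the budget limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {
                    "SubscriptionType": "SNS",
                    "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts",
                }
            ],
        }
    ],
}
```

When the actual spend of the grouped accounts crosses 100% of the limit, AWS Budgets publishes to the SNS topic, which can then fan out to email or chat.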

You have a Serverless stack running for your mobile application (Lambda, API Gateway, DynamoDB). Your Lambda costs are getting expensive due to the long wait time caused by high network latency when communicating with the SQL database in your on-premises environment. Only a VPN solution connects your VPC to your on-premises network. What steps can you make to reduce your costs?

If possible, migrate your database to AWS for lower latency. If this is not an option, consider purchasing a Direct Connect line with your VPN on top of it for a secure and fast network. Consider caching frequently retrieved results on API Gateway. Continuously monitor your Lambda execution time and reduce it gradually up to an acceptable duration.

You have a set of EC2 instances behind a load balancer and an autoscaling group, and they connect to your RDS database. Your VPC containing the instances uses NAT gateways to retrieve patches periodically. Everything is accessible only within the corporate network. What are some ways to lower your cost?

If your EC2 instances are production workloads, purchase Reserved Instances. If they are not, schedule the autoscaling to scale in when they are not in use and scale out when you are about to use them. Consider a caching layer for your database reads if the same queries often appear. Consider using NAT instances instead, or better yet, remove the NAT gateways if you are only using them for patching. You can easily create a new NAT instance or NAT gateway when you need them again.
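For the non-production fleet, the scale-in/scale-out schedule can be implemented as scheduled actions on the Auto Scaling group. A sketch of the `aws autoscaling put-scheduled-update-group-action --cli-input-json` input — the group name and the weekday-morning recurrence are hypothetical:

```json
{
  "AutoScalingGroupName": "dev-web-asg",
  "ScheduledActionName": "scale-out-weekday-mornings",
  "Recurrence": "0 8 * * 1-5",
  "MinSize": 2,
  "MaxSize": 4,
  "DesiredCapacity": 2
}
```

A matching action with `DesiredCapacity` (and `MinSize`) set to 0 in the evening scales the group back in when the instances are not in use.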

You need to generate continuous database and server backups in your primary region and have them available in your disaster recovery region as well. Backups need to be made available immediately in the primary region while the disaster region allows more leniency, as long as they can be restored in a few hours. A single backup is kept only for a month before it is deleted. A dedicated team conducts game days every week in the primary region to test the backups. You need to keep storage costs as low as possible.

Store the backups in Amazon S3 Standard and configure cross-region replication to the DR region's S3 bucket. Create a lifecycle policy in the DR region to move the backups to S3 Glacier. S3 Standard-IA is not applicable since you would need to wait 30 days before objects can transition from Standard to Standard-IA.
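On the DR bucket, both the Glacier transition and the one-month retention can be captured in a single lifecycle configuration for `aws s3api put-bucket-lifecycle-configuration`. A sketch — the `backups/` prefix is a hypothetical layout, and note that unlike Standard-IA, a transition to Glacier has no 30-day minimum:

```json
{
  "Rules": [
    {
      "ID": "dr-backups-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [ { "Days": 0, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 30 }
    }
  ]
}
```

Replicated objects move to Glacier immediately on arrival in the DR region, and both the transition and the 30-day expiration keep storage costs to a minimum.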

Determine the most cost-effective infrastructure:
a) Data is constantly being delivered to a file storage at a constant rate. Storage should have enough capacity to accommodate growth.
b) The data is extracted and worked upon by worker nodes. A job can take a few hours to finish.
c) This is not a mission-critical workload, so interruptions are acceptable as long as they are reprocessed.
d) The jobs only need to run during evenings.

You may use Amazon Kinesis Data Firehose to continuously stream the data into Amazon S3. Then configure AWS Batch with Spot pricing for your worker nodes. Use Amazon CloudWatch Events to schedule your jobs at night.
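The nightly trigger is a CloudWatch Events (EventBridge) rule with a cron schedule whose target is the AWS Batch job queue. A sketch of the `aws events put-rule --cli-input-json` input — the rule name and the 9 PM UTC start time are hypothetical:

```json
{
  "Name": "nightly-batch-processing",
  "ScheduleExpression": "cron(0 21 * * ? *)",
  "State": "ENABLED",
  "Description": "Submit the AWS Batch job every evening"
}
```

The Batch job queue is then attached as the rule's target, so worker capacity is only provisioned (at Spot prices) during the evening window.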

If you are cost-conscious about the charges incurred by external users who frequently access your S3 objects, what change can you introduce to shift the charges to the users?

Ensure that the external users have their own AWS accounts. Enable S3 Requester Pays on the S3 buckets. Create a bucket policy that will allow these users read/write access to the buckets.
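Requester Pays itself is a one-line bucket setting (`aws s3api put-bucket-request-payment` with `Payer=Requester`); access is then granted by a bucket policy. A sketch with a hypothetical bucket name and external account ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ExternalAccountAccess",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::444455556666:root" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-shared-bucket/*"
    }
  ]
}
```

Once Requester Pays is enabled, the external users must include the `x-amz-request-payer` header (or `--request-payer requester` in the CLI) in their requests, which is how the data transfer and request charges shift to their accounts.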

You have a Direct Connect line from an AWS partner data center to your on-premises data center. Webservers are running in EC2, and they connect back to your on-premises databases/data warehouse. How can you increase the reliability of your connection?

There are multiple ways to increase the reliability of your network connection. You can order another Direct Connect line for redundancy, which AWS recommends for critical workloads.

You may also create an IPsec VPN connection over the public Internet, but that will require additional configuration since you need to monitor the health of both networks.

You have a set of instances behind a Network Load Balancer and an autoscaling group. If you are to protect your instances from DDoS, what changes should you make?

Since AWS WAF does not integrate with an NLB directly, you can create a CloudFront distribution, attach the WAF to it, and use your NLB as the origin. You can also enable AWS Shield Advanced to get the full suite of protections against DDoS and other security attacks.

You have a critical production workload (servers + databases) running in one region, and your RTO is 5 minutes while your RPO is 15 minutes. What is your most cost-efficient disaster recovery option?

If you have the option to choose warm standby, make sure that the DR infrastructure can automatically detect failure of the primary infrastructure (through health checks), scale up/scale out automatically (autoscaling + scripts), and perform an immediate failover (Route 53 failover routing) in response. If your warm standby option does not state that it can do so, then you might not be able to meet your RTO/RPO, which means you must use a multi-site DR solution instead even though it is costly.
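The failover routing piece can be sketched as a pair of Route 53 record sets submitted via `aws route53 change-resource-record-sets`, where only the primary record is tied to a health check. The domain, health check ID, and endpoint IPs below are hypothetical:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "primary",
        "Failover": "PRIMARY",
        "TTL": 60,
        "HealthCheckId": "11111111-2222-3333-4444-555555555555",
        "ResourceRecords": [ { "Value": "203.0.113.10" } ]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "secondary",
        "Failover": "SECONDARY",
        "TTL": 60,
        "ResourceRecords": [ { "Value": "198.51.100.20" } ]
      }
    }
  ]
}
```

When the health check on the primary fails, Route 53 starts answering with the secondary record; the low TTL keeps the DNS switchover within a tight RTO.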

You use RDS to store data collected by hundreds of IoT robots. You know that these robots can produce up to tens of KBs of data per minute. It is expected that in a few years, the number of robots will continuously increase, and so database storage should be able to scale to handle the amount of data coming in and the IOPS required to match performance. How can you re-architect your solution to better suit this upcoming growth?

Instead of using a database, consider using a data warehousing solution such as Amazon Redshift instead. That way, your data storage can scale much larger and the database performance will not take that much of a hit.

You have a stream of data coming into your AWS environment that is being delivered by multiple sensors around the world. You need real-time processing for these data and you have to make sure that they are processed in the order in which they came in. What should be your architecture?

One might consider using SQS FIFO for this scenario, but since it also requires you to have real-time processing capabilities, Amazon Kinesis is a better solution. You can configure the data to have a specific partition key so that it is processed by the same Kinesis shard, thereby giving you similar FIFO capabilities.
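The ordering guarantee hinges on how Kinesis routes records: the MD5 hash of the partition key, taken as a 128-bit integer, selects the shard whose hash key range contains it, so all records sharing a key land on the same shard and are read in order. A small Python sketch of that routing logic — it assumes the stream's shards evenly split the hash key space (i.e., no shards were ever split or merged):

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    """Mimic Kinesis routing: MD5 the partition key into a 128-bit
    integer and map it onto evenly split shard hash-key ranges."""
    hashed = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = (2 ** 128) // num_shards
    # min() guards the top edge when 2**128 is not divisible by num_shards
    return min(hashed // range_size, num_shards - 1)

# The same sensor ID always hashes to the same shard, so records from
# one sensor keep their arrival order within that shard.
assert shard_for_key("sensor-42", 8) == shard_for_key("sensor-42", 8)
```

In practice this means using a stable identifier (such as the sensor ID) as the `PartitionKey` when calling `PutRecord`, which gives per-sensor FIFO behavior while still spreading load across shards.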

You want to use AWS Direct Connect to access S3 and DynamoDB endpoints while using your Internet provider for other types of traffic. How should you configure this?

Create a public virtual interface on your AWS Direct Connect connection. Advertise specific routes for your network to AWS so that S3 traffic and DynamoDB traffic pass through AWS Direct Connect.

You have a web application leveraging CloudFront for caching frequently accessed objects. However, parts of the application are reportedly slow in some countries. What cost-effective improvement can you make?

Utilize Lambda@Edge to run parts of the application closer to the users.

If you are running Amazon Redshift and you have a tight RTO and RPO requirement, what improvement can you make so that your Amazon Redshift is more highly available and durable in case of a regional disaster?

Amazon Redshift allows you to copy snapshots to other regions by enabling cross-region snapshots. Snapshots to S3 are automatically created on active clusters every 8 hours or when an amount of data equal to 5 GB per node changes. Depending on the snapshot policy configured on the primary cluster, the snapshot updates can either be scheduled, or based upon data change, and then any updates are automatically replicated to the secondary/DR region.
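Enabling cross-region snapshot copy is a single API call on the primary cluster. A sketch of the `aws redshift enable-snapshot-copy --cli-input-json` input — the cluster identifier, destination region, and 7-day retention are hypothetical:

```json
{
  "ClusterIdentifier": "analytics-cluster",
  "DestinationRegion": "us-west-2",
  "RetentionPeriod": 7
}
```

From then on, every automated and manual snapshot taken in the primary region is copied to the destination region, from which a new cluster can be restored if a regional disaster occurs.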

You have multiple EC2 instances distributed across different AZs depending on their function, and each AZ has its own m5.large NAT instance. A set of EC2 servers in one AZ occasionally cannot reach an API that is external to AWS when there is a high volume of traffic. This is unacceptable for your organization. What is the most cost-effective solution for your problem?

It would be better to transition your NAT instances to NAT gateways since they provide higher network throughput. Resizing the NAT instances to a larger type is not cost-effective since the network speed increase is only incremental as you go up in size. Adding more NAT instances to a single AZ makes your environment too complex.
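The migration itself is small: allocate an Elastic IP, create one NAT gateway per AZ, and point each AZ's private route table (`0.0.0.0/0`) at its local gateway. A sketch of the `aws ec2 create-nat-gateway --cli-input-json` input, with hypothetical subnet and Elastic IP allocation IDs:

```json
{
  "SubnetId": "subnet-0a1b2c3d",
  "AllocationId": "eipalloc-0f9e8d7c"
}
```

The `SubnetId` must be a public subnet in the same AZ as the private instances it serves, so that cross-AZ data charges are avoided and an AZ failure does not take out outbound connectivity for the other AZs.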

Most of your vendors’ applications use IPv4 to communicate with your private AWS resources. However, a newly acquired vendor will only be supporting IPv6. You will be creating a new VPC dedicated for this vendor, and you need to make sure that all of your private EC2 instances can communicate using IPv6. What are the configurations that you need to do?

Assign IPv6 addresses to your EC2 instances. Create security group rules that allow IPv6 traffic inbound and outbound. Create an egress-only Internet gateway to allow your private instances to reach the vendor.
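The security group side of this is an IPv6-specific rule set. As a sketch, an outbound HTTPS-over-IPv6 rule could be supplied to `aws ec2 authorize-security-group-egress --cli-input-json` like so (the group ID is hypothetical, and `::/0` could be narrowed to the vendor's prefix):

```json
{
  "GroupId": "sg-0123456789abcdef0",
  "IpPermissions": [
    {
      "IpProtocol": "tcp",
      "FromPort": 443,
      "ToPort": 443,
      "Ipv6Ranges": [ { "CidrIpv6": "::/0" } ]
    }
  ]
}
```

A `::/0` route in the private route tables pointing at the egress-only Internet gateway then lets the instances initiate IPv6 connections to the vendor while blocking unsolicited inbound traffic, analogous to what a NAT gateway does for IPv4.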

 

Validate Your AWS Certified Solutions Architect Professional SAP-C02 Exam Readiness

After your review, you should take some practice tests to measure your preparedness for the real exam. AWS offers a free official sample practice test. You can also opt to buy the longer AWS practice test at aws.training, and use the discount coupon you received from any previously taken certification exam. Be aware, though, that the sample practice tests do not mimic the difficulty of the real SA Pro exam. You should not rely solely on them to gauge your preparedness. It is better to take more practice tests to fully understand if you are prepared to pass the certification exam.

Fortunately, Tutorials Dojo also offers a great set of practice questions for you to take. They are kept updated by the creators to ensure that the questions match what you'll encounter in the real exam. The practice tests will help fill in any important details that you might have missed or skipped in your review. You can also pair our practice exams with our AWS Certified Solutions Architect Professional Study Guide eBook to further help in your exam preparations.

 

Sample Practice Test Questions:

Question 1

A data analytics startup has been chosen to develop a data analytics system that will track all statistics in the Fédération Internationale de Football Association (FIFA) World Cup, which will also be used by other 3rd-party analytics sites. The system will record, store and provide statistical data reports about the top scorers, goal scores for each team, average goals, average passes, average yellow/red cards per match, and many other details. FIFA fans all over the world will frequently access the statistics reports every day and thus, it should be durably stored, highly available, and highly scalable. In addition, the data analytics system will allow the users to vote for the best male and female FIFA player as well as the best male and female coach. Due to the popularity of the FIFA World Cup event, it is projected that there will be over 10 million queries on game day and could spike to 30 million queries over the course of time.

Which of the following is the most cost-effective solution that will meet these requirements?

Option 1:

  1. Launch a MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
  2. Generate the FIFA reports by querying the Read Replica.
  3. Configure a daily job that performs a daily table cleanup.

Option 2:

  1. Launch a MySQL database in Multi-AZ RDS deployments configuration.
  2. Configure the application to generate reports from ElastiCache to improve the read performance of the system.
  3. Utilize the default expire parameter for items in ElastiCache.

Option 3:

  1. Generate the FIFA reports from MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
  2. Set up a batch job that puts reports in an S3 bucket.
  3. Launch a CloudFront distribution to cache the content with a TTL set to expire objects daily.

Option 4:

  1. Launch a Multi-AZ MySQL RDS instance.
  2. Query the RDS instance and store the results in a DynamoDB table.
  3. Generate reports from DynamoDB table.
  4. Delete the old DynamoDB tables every day.

Correct Answer: 3

In this scenario, you are required to have the following:

  1. A durable storage for the generated reports.
  2. A database that is highly available and can scale to handle millions of queries.
  3. A Content Delivery Network that can distribute the report files to users all over the world.

Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. It’s a simple storage service that offers industry leading durability, availability, performance, security, and virtually unlimited scalability at very low costs.

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. 

Amazon RDS uses the MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL DB engines’ built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. The source DB instance becomes the primary DB instance. Updates made to the primary DB instance are asynchronously copied to the read replica.

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

Hence, the following option is the best solution that satisfies all of these requirements:

1. Generate the FIFA reports from MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.

2. Set up a batch job that puts reports in an S3 bucket.

3. Launch a CloudFront distribution to cache the content with a TTL set to expire objects daily.

In the above, S3 provides durable storage; Multi-AZ RDS with Read Replicas provides a scalable and highly available database; and CloudFront provides the CDN.

The following option is incorrect:

1. Launch a MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.

2. Generate the FIFA reports by querying the Read Replica.

3. Configure a daily job that performs a daily table cleanup.

Although the database is scalable and highly available, this option provides neither durable report storage nor a CDN.

The following option is incorrect:

1. Launch a MySQL database in Multi-AZ RDS deployments configuration.

2. Configure the application to generate reports from ElastiCache to improve the read performance of the system.

3. Utilize the default expire parameter for items in ElastiCache.

Although this option improves the read performance of the system, it still lacks durable storage and a CDN.

The following option is incorrect:

1. Launch a Multi-AZ MySQL RDS instance.

2. Query the RDS instance and store the results in a DynamoDB table.

3. Generate reports from DynamoDB table.

4. Delete the old DynamoDB tables every day.

This is not a cost-effective solution since you would have to maintain both RDS and DynamoDB.

References:
https://aws.amazon.com/rds/details/multi-az/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html

Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/

Question 2

A company provides big data services to enterprise clients around the globe. One of the clients has 60 TB of raw data from their on-premises Oracle data warehouse. The data is to be migrated to Amazon Redshift. However, the database receives minor updates on a daily basis while major updates are scheduled every end of the month. The migration process must be completed within approximately 30 days before the next major update on the Redshift database. The company can only allocate 50 Mbps of Internet connection for this activity to avoid impacting business operations.

Which of the following actions will satisfy the migration requirements of the company while keeping the costs low?

  1. Create a new Oracle Database on Amazon RDS. Configure Site-to-Site VPN connection from the on-premises data center to the Amazon VPC. Configure replication from the on-premises database to Amazon RDS. Once replication is complete, create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over.
  2. Create an AWS Snowball Edge job using the AWS Snowball console. Export all data from the Oracle data warehouse to the Snowball Edge device. Once the Snowball device is returned to Amazon and data is imported to an S3 bucket, create an Oracle RDS instance to import the data. Create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Copy the missing daily updates from Oracle in the data center to the RDS for Oracle database over the Internet. Monitor and verify if the data migration is complete before the cut-over.
  3. Since you have a 30-day window for migration, configure VPN connectivity between AWS and the company’s data center by provisioning a 1 Gbps AWS Direct Connect connection. Launch an Oracle Real Application Clusters (RAC) database on an EC2 instance and set it up to fetch and synchronize the data from the on-premises Oracle database. Once replication is complete, create an AWS DMS task on an AWS SCT project to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over.
  4. Create an AWS Snowball import job to request for a Snowball Edge device. Use the AWS Schema Conversion Tool (SCT) to process the on-premises data warehouse and load it to the Snowball Edge device. Install the extraction agent on a separate on-premises server and register it with AWS SCT. Once the Snowball Edge imports data to the S3 bucket, use AWS SCT to migrate the data to Amazon Redshift. Configure a local task and AWS DMS task to replicate the ongoing updates to the data warehouse. Monitor and verify that the data migration is complete.

Correct Answer: 4

You can use an AWS SCT agent to extract data from your on-premises data warehouse and migrate it to Amazon Redshift. The agent extracts your data and uploads the data to either Amazon S3 or, for large-scale migrations, an AWS Snowball Edge device. You can then use AWS SCT to copy the data to Amazon Redshift.

Large-scale data migrations can include many terabytes of information and can be slowed by network performance and by the sheer amount of data that has to be moved. AWS Snowball Edge is an AWS service you can use to transfer data to the cloud at faster-than-network speeds using an AWS-owned appliance. An AWS Snowball Edge device can hold up to 100 TB of data. It uses 256-bit encryption and an industry-standard Trusted Platform Module (TPM) to ensure both security and full chain-of-custody for your data. AWS SCT works with AWS Snowball Edge devices.

When you use AWS SCT and an AWS Snowball Edge device, you migrate your data in two stages. First, you use the AWS SCT to process the data locally and then move that data to the AWS Snowball Edge device. You then send the device to AWS using the AWS Snowball Edge process, and then AWS automatically loads the data into an Amazon S3 bucket. Next, when the data is available on Amazon S3, you use AWS SCT to migrate the data to Amazon Redshift. Data extraction agents can work in the background while AWS SCT is closed. You manage your extraction agents by using AWS SCT. The extraction agents act as listeners. When they receive instructions from AWS SCT, they extract data from your data warehouse.

Therefore, the correct answer is: Create an AWS Snowball import job to request for a Snowball Edge device. Use the AWS Schema Conversion Tool (SCT) to process the on-premises data warehouse and load it to the Snowball Edge device. Install the extraction agent on a separate on-premises server and register it with AWS SCT. Once the Snowball Edge imports data to the S3 bucket, use AWS SCT to migrate the data to Amazon Redshift. Configure a local task and AWS DMS task to replicate the ongoing updates to the data warehouse. Monitor and verify that the data migration is complete.

The option that says: Create a new Oracle Database on Amazon RDS. Configure Site-to-Site VPN connection from the on-premises data center to the Amazon VPC. Configure replication from the on-premises database to Amazon RDS. Once replication is complete, create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over is incorrect. Replicating 60 TB worth of data over the public Internet will take several days over the 30-day migration window. It is also stated in the scenario that the company can only allocate 50 Mbps of Internet connection for the migration activity. Sending the data over the Internet could potentially affect business operations.

The option that says: Create an AWS Snowball Edge job using the AWS Snowball console. Export all data from the Oracle data warehouse to the Snowball Edge device. Once the Snowball device is returned to Amazon and data is imported to an S3 bucket, create an Oracle RDS instance to import the data. Create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Copy the missing daily updates from Oracle in the data center to the RDS for Oracle database over the internet. Monitor and verify if the data migration is complete before the cut-over is incorrect. You need to configure the data extraction agent first on your on-premises server. In addition, you don’t need the data to be imported and exported via Amazon RDS. AWS DMS can directly migrate the data to Amazon Redshift.

The option that says: Since you have a 30-day window for migration, configure VPN connectivity between AWS and the company’s data center by provisioning a 1 Gbps AWS Direct Connect connection. Launch an Oracle Real Application Clusters (RAC) database on an EC2 instance and set it up to fetch and synchronize the data from the on-premises Oracle database. Once replication is complete, create an AWS DMS task on an AWS SCT project to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over is incorrect. Although this is possible, the company wants to keep the cost low. Using a Direct Connect connection for a one-time migration is not a cost-effective solution.

References:
https://aws.amazon.com/getting-started/hands-on/migrate-oracle-to-amazon-redshift/
https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.dw.html
https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.html

Click here for more AWS Certified Solutions Architect Professional practice exam questions.


Additional Training Materials for the AWS Certified Solutions Architect Professional Exam

There are a few top-rated AWS Certified Solutions Architect Professional video courses that you can check out as well, which can help in your exam preparations. The list below is constantly updated based on feedback from our students on which course/s helped them the most during their exams.

Based on consensus, any of these video courses plus our practice test course and our AWS Certified Solutions Architect Professional Study Guide eBook were enough to pass this tough exam.

In general, what you should have learned from your review are the following:

  • Features and use cases of the AWS services and how they integrate with each other
  • AWS networking, security, billing and account management
  • The AWS CLI, APIs and SDKs
  • Automation, migration planning, and troubleshooting
  • The best practices in designing solutions in the AWS Cloud
  • Building CI/CD solutions using different platforms
  • Resource management in a multi-account organization
  • Multi-level security

All these factors are essentially the domains of your certification exam. It is because of this difficult hurdle that AWS Certified Solutions Architect Professionals are highly respected in the industry. They are capable of architecting ingenious solutions that solve customer problems in AWS. They are also constantly improving themselves by learning all the new services and features that AWS produces each year to make sure that they can provide the best solutions to their customers. Let this challenge be your motivation to dream high and strive further in your career as a Solutions Architect!

 

Final Notes About the AWS Certified Solutions Architect Professional SAP-C02 Exam

The SA Professional exam questions always ask for highly available, fault tolerant, cost-effective and secure solutions. Be sure to understand the choices provided to you, and verify that they have accurate explanations. Some choices are very misleading such that they seem to be the most natural answer to the question, but actually contain incorrect information, such as the incorrect use of a service. Always place accuracy above all else.

When unsure of which options are correct in a multi-select question, try to eliminate some of the choices that you believe are false. This will help narrow down the feasible answers to that question. The same goes for multiple choice type questions. Be extra careful as well when selecting the number of answers you submit.

Since an SA Professional has responsibilities in creating large-scale architectures, be wary of the different ways AWS services can be integrated with one another.

Lastly, be on the lookout for “key terms” that will help you identify the answer faster. Words such as millisecond latency, serverless, managed, highly available, most cost-effective, fault tolerant, mobile, streaming, object storage, archival, polling, push notifications, etc. are commonly seen in the exam. Time management is very important when taking AWS certification exams, so be sure to monitor the time you spend on each question.



AWS Certified Solutions Architect Associate Exam – SAA-C03 Study Path
https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c03/
Fri, 08 Jul 2022

The AWS Certified Solutions Architect Associate SAA-C03 exam, or SAA for short, is one of the most sought-after certifications in the Cloud industry today. This certification verifies your knowledge of the AWS Cloud and your know-how in building a well-architected infrastructure in AWS. This AWS Certification exam helps companies identify and develop their in-house talent in implementing cloud initiatives. Achieving the latest version of the AWS Certified Solutions Architect – Associate SAA-C03 certification validates one’s ability to design and implement various solutions on AWS, such as distributed architecture, serverless, containerized applications, and the like.

The AWS Certified Solutions Architect Associate Certification Exam Overview

AWS Certified Solutions Architect Associate SAA-C03

The AWS Certified Solutions Architect – Associate SAA-C03 certification exam is intended for people who perform in a solutions architect role, but any IT professional can take it. College students who want to get ahead of their peers can also take this test. The SAA-C03 exam validates your ability to use various Amazon Web Services (AWS) technologies to design solutions based on the AWS Well-Architected Framework.

If you are interested in taking the AWS Certified Solutions Architect – Associate SAA-C03 exam soon, you must prepare first by studying the core cloud concepts and design principles in AWS. Pay close attention to how you can properly secure your cloud architecture, as this exam includes a lot of security-related scenarios. Take note that the SAA-C03 exam also validates a candidate’s ability to complete the following tasks:

  • Design solutions that incorporate AWS services to meet current business requirements and future projected needs
  • Design architectures that are secure, resilient, high-performing, and cost-optimized
  • Review existing solutions and determine improvements

The official AWS Exam Guide, AWS Documentation, and AWS Whitepapers will be your primary study materials for this exam. Experience in building systems will also be helpful since the exam consists of multiple scenario-type questions. You can learn more details about the exam through the official AWS Certified Solutions Architect – Associate SAA-C03 Exam Guide. Do a quick read of it to be aware of how to prepare and what to expect on the exam itself.

Difference between the SAA-C02 and SAA-C03 AWS Certified Solutions Architect Associate Exam Versions

Before you start preparing for the exam, you have to know the exact knowledge areas and topics that you should focus on. It is also beneficial for you to learn the differences between the previous SAA-C02 version and the new SAA-C03 AWS Certified Solutions Architect Associate certification exam.

The exam domains of the SAA-C02 and SAA-C03 are virtually the same. As you can see in the diagram below, the new SAA-C03 exam has retained the Design Resilient Architectures, Design High-Performing Architectures, and Design Cost-Optimized Architectures exam domains from the previous one. However, the existing Design Secure Applications and Architectures exam domain was renamed to Design Secure Architectures.

SAA-C02 vs SAA-C03 Comparison AWS Solutions Architect Associate 2022

Another important thing to note here is the change in the percentage of its exam domain coverage. The previous version of the AWS Certified Solutions Architect Associate exam focused on the topic of resiliency. This time, the new SAA-C03 exam version has put a spotlight on security. Its biggest exam domain is Design Secure Architectures (30%), so you have to focus on the various security services in AWS as well as the different security features available in each related AWS service.

The AWS Certified Solutions Architect Associate SAA-C03 Study Materials

As a starting point for your AWS Certified Solutions Architect Associate exam studies, we recommend taking the FREE AWS Cloud Practitioner Essentials digital course. If you are quite new to AWS, taking and completing this digital course should be your first step in your SAA-C03 exam prep.

There are a lot of posts on the Internet claiming to offer the “best” course for the AWS Certified Solutions Architect Associate SAA-C03 Exam. However, some of these resources are already obsolete and don’t cover the latest topics that were recently introduced in the SAA-C03 test. How can you ensure that you are using the right study materials for your upcoming AWS Certified Solutions Architect Associate test?

The best thing to do is to check the official AWS Certification website for the most up-to-date information, specifically the official AWS Certification page for the AWS Certified Solutions Architect Associate SAA-C03 exam. This page is where you can find the actual link to schedule your SAA-C03 exam, as well as the official SAA-C03 Exam Guide and Sample Questions, as shown below:

Official Certificate Page for the AWS Certified Solutions Architect Associate SAA-C03 Exam 

Let’s now enumerate the top study materials for the AWS Certified Solutions Architect Associate SAA-C03 certification test. This list contains the official SAA-C03 Exam Guide, Sample Questions, and other free and paid resources. The official AWS materials are more reliable than others you’ll find on the Internet, since the information there comes straight from the AWS Training and Certification team itself. Thus, give more weight to what the official SAA-C03 Exam Guide says when deciding which AWS topics to focus on.

1. Official Exam Guide for the AWS Certified Solutions Architect Associate SAA-C03 Exam

AWS Certified Solutions Architect Associate Exam Guide SAA-C03

 

2. AWS Certified Solutions Architect Associate SAA-C03 Video Course

AWS Certified Solutions Architect Associate Video Course SAA-C03

3. AWS Certified Solutions Architect Associate SAA-C03 Practice Exams

AWS Certified Solutions Architect Associate Practice Exam SAA-C02 SAA-C03

4. Official Sample Questions for the AWS Certified Solutions Architect Associate SAA-C03

AWS Certified Solutions Architect Associate SAA-C03 Official Sample Questions SAA-C03

 

Additional SAA-C03 Whitepapers 

For whitepapers, focus on the following:

  1. AWS Well-Architected Framework
  2. An Overview of the AWS Cloud Adoption Framework
  3. Cost Optimization Pillar – AWS Well-Architected Framework
  4. Disaster Recovery of On-Premises Applications to AWS
  5. Security Best Practices for Manufacturing OT

Core AWS Services to Focus On for the SAA-C03 Exam

  1. EC2 – As the most fundamental compute service offered by AWS, you should know EC2 inside and out.
  2. Lambda – Lambda is the go-to service for serverless applications. Study how it integrates with other AWS services to build a full-stack serverless app.
  3. Elastic Load Balancer – Load balancing is very important for a highly available system. Study the different types of ELBs and the features each of them supports.
  4. Auto Scaling – Study which services in AWS can be auto-scaled, what triggers scaling, and how Auto Scaling increases/decreases the number of instances.
  5. Elastic Block Store – As the primary storage solution for EC2, study the types of EBS volumes available. Also study how to secure, back up, and restore EBS volumes.
  6. S3 / Glacier – AWS offers many S3 storage classes depending on your needs. Study what these classes are and how they differ. Also review the capabilities of S3, such as hosting a static website, securing access to objects using policies, lifecycle policies, etc. Learn as much about S3 as you can.
  7. Storage Gateway – There are occasional questions about Storage Gateway in the exam. You should understand when and which type of Storage Gateway should be used compared to services like S3 or EBS. You should also know the use cases of, and differences between, DataSync and Storage Gateway.
  8. EFS – EFS is a service highly associated with EC2, much like EBS. Understand when to use EFS compared to S3, EBS, or instance store. Exam questions involving EFS usually ask about the trade-off between cost and efficiency compared to other storage services.
  9. RDS / Aurora – Know how the RDS database engines differ from one another, and how they differ from Aurora. Determine what makes Aurora unique and when it should be preferred over other databases (in terms of function, speed, cost, etc.). Learn about parameter groups, option groups, and subnet groups.
  10. DynamoDB – The exam includes lots of DynamoDB questions, so read as much about this service as you can. Consider how DynamoDB compares to RDS, ElastiCache, and Redshift. This service is also commonly used for serverless applications along with Lambda.
  11. ElastiCache – Familiarize yourself with ElastiCache for Redis and its functions. Determine the areas/services where you can place a caching mechanism to improve data throughput, such as managing the session state behind an ELB, offloading reads from RDS instances, etc.
  12. VPC/NACL/Security Groups – Study every component used to create a VPC (subnets, route tables, internet gateways, NAT gateways, VPN gateways, etc.). Also, review the differences between network access control lists and security groups, and the situations in which each is applied.
  13. Route 53 – Study the different record types in Route 53, as well as the different routing policies. Know what hosted zones and domains are.
  14. IAM – IAM users, groups, policies, and roles are the most important to learn. Study how IAM integrates with other services and how it secures your applications through different policies. Also read up on IAM best practices.
  15. CloudWatch – Study how monitoring is done in AWS and what types of metrics are sent to CloudWatch. Also read up on CloudWatch Logs, CloudWatch Alarms, and the custom metrics made available with the CloudWatch agent.
  16. CloudTrail – Familiarize yourself with how CloudTrail works and what kinds of logs it stores compared to CloudWatch Logs.
  17. Kinesis – Read about Kinesis sharding and Kinesis Data Streams. Have a high-level understanding of how each type of Kinesis stream works.
  18. CloudFront – Study how CloudFront helps speed up websites. Know which content sources CloudFront can serve from. Also, check the kinds of certificates CloudFront accepts.
  19. SQS – Understand why SQS is helpful in decoupling systems. Study how messages in the queues are managed (standard queues, FIFO queues, dead-letter queues). Know the differences between SQS, SNS, SES, and Amazon MQ.
  20. SNS – Study the function of SNS and which services can be integrated with it. Also be familiar with the supported recipients of SNS notifications.
  21. SWF / CloudFormation / OpsWorks – Study how these services function. Differentiate their capabilities and use cases. Have a high-level understanding of the kinds of scenarios in which they are usually used.
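
Since IAM policies underpin many of the services above, it helps to recognize the JSON policy document shape on sight. Below is a sketch of a minimal read-only S3 policy, assembled with Python's standard library; the bucket name is hypothetical.

```python
import json

# Sketch of a minimal read-only IAM policy for a single S3 bucket.
# NOTE: "example-bucket" is a hypothetical name.
# ListBucket applies to the bucket ARN itself; GetObject applies to objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-bucket",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

Spotting which ARN each action targets (bucket vs. objects) is a recurring detail in IAM-related exam options.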

Other SAA-C03 AWS Services that you should prepare for:

For the exam version (SAA-C03), you should also know the following services:

… plus a few more services and new SAA-C03 topics that we have recently added to our AWS Certified Solutions Architect Associate Practice Exams 

For more information, check out the official exam guide for the new SAA-C03 version here

Based on our exam experience, you should also know when to use the following:

The AWS Documentation and FAQs will be your primary source of information. You can also visit Tutorials Dojo’s AWS Cheat Sheets to gain access to a repository of thorough content on the different AWS services mentioned above. Lastly, try out these services yourself by signing up on AWS and performing some lab exercises. Experiencing them on your own will help you greatly in remembering what each service is capable of.

Also check out this article: Top 5 FREE AWS Review Materials.

 

Common Exam Scenarios for the SAA-C03 exam 

Each entry below pairs an exam scenario with the recommended solution.

Domain 1: Design Resilient Architectures

  • Set up asynchronous data replication to another RDS DB instance hosted in another AWS Region → Create a Read Replica.
  • A parallel file system for “hot” (frequently accessed) data → Amazon FSx for Lustre.
  • Implement synchronous data replication across Availability Zones with automatic failover in Amazon RDS → Enable Multi-AZ deployment in Amazon RDS.
  • A storage service to host “cold” (infrequently accessed) data → Amazon S3 Glacier.
  • Set up a relational database and a disaster recovery plan with an RPO of 1 second and an RTO of less than 1 minute → Use Amazon Aurora Global Database.
  • Monitor database metrics and send email notifications if a specific threshold has been breached → Create an SNS topic and add the topic to the CloudWatch alarm.
  • Set up a DNS failover to a static website → Use Route 53 with the failover option to a static S3 website bucket or CloudFront distribution.
  • Implement an automated backup for all EBS volumes → Use Amazon Data Lifecycle Manager to automate the creation of EBS snapshots.
  • Monitor the available swap space of your EC2 instances → Install the CloudWatch agent and monitor the SwapUtilization metric.
  • Implement a 90-day backup retention policy on Amazon Aurora → Use AWS Backup.
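
One of the Domain 1 scenarios pairs a CloudWatch alarm with an SNS topic for email notifications. As a sketch, these are roughly the parameters you would pass to CloudWatch's PutMetricAlarm API (for example, via boto3's `put_metric_alarm`); the alarm name, topic ARN, threshold, and account ID are all hypothetical.

```python
# Hypothetical parameters for a CloudWatch alarm that notifies an SNS topic
# when average RDS CPU utilization breaches 80% for two consecutive periods.
alarm = {
    "AlarmName": "rds-high-cpu",                        # hypothetical name
    "Namespace": "AWS/RDS",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,                                      # seconds per datapoint
    "EvaluationPeriods": 2,                             # breach twice in a row
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": [
        "arn:aws:sns:us-east-1:123456789012:db-alerts"  # hypothetical topic ARN
    ],
}
print(alarm["AlarmName"], "->", alarm["AlarmActions"][0])
```

The key idea for the exam: the alarm itself does not send email; it publishes to the SNS topic, whose email subscribers receive the notification.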

Domain 2: Design High-Performing Architectures

  • Implement fanout messaging → Create an SNS topic with a message filtering policy and configure multiple SQS queues to subscribe to the topic.
  • A database that has a read replication latency of less than 1 second → Use Amazon Aurora with cross-region replicas.
  • A specific type of Elastic Load Balancer that uses UDP as the protocol for communication between clients and thousands of game servers around the world → Use a Network Load Balancer for TCP/UDP protocols.
  • Monitor the memory and disk space utilization of an EC2 instance → Install the Amazon CloudWatch agent on the instance.
  • Retrieve a subset of data from a large CSV file stored in an S3 bucket → Perform an S3 Select operation based on the bucket’s name and object’s key.
  • Upload a 1 TB file to an S3 bucket → Use the Amazon S3 multipart upload API to upload large objects in parts.
  • Improve application performance by reducing response times from milliseconds to microseconds → Use Amazon DynamoDB Accelerator (DAX).
  • Retrieve the instance ID, public keys, and public IP address of an EC2 instance → Access the URL http://169.254.169.254/latest/meta-data/ from the EC2 instance.
  • Route internet traffic to resources based on the location of the user → Use the Route 53 geolocation routing policy.
  • A fully managed ETL (extract, transform, and load) service provided by Amazon Web Services → AWS Glue.
  • A fully managed, petabyte-scale data warehouse service → Amazon Redshift.
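
One scenario above involves uploading a 1 TB file with the S3 multipart upload API. S3 caps a multipart upload at 10,000 parts of 5 MiB to 5 GiB each (the last part may be smaller), so the part size has to be chosen to stay under the part limit. A minimal sketch of that arithmetic:

```python
import math

# S3 multipart upload limits: at most 10,000 parts, each 5 MiB - 5 GiB
# (the last part may be smaller). For a 1 TiB object, work out a part
# size that keeps the part count within the limit.
MAX_PARTS = 10_000
MIN_PART = 5 * 1024**2          # 5 MiB

object_size = 1024**4           # 1 TiB
part_size = max(MIN_PART, math.ceil(object_size / MAX_PARTS))
parts = math.ceil(object_size / part_size)

print(f"part size: {part_size / 1024**2:.0f} MiB, parts: {parts}")
```

SDKs such as boto3 handle this sizing for you in their managed transfer utilities, but knowing the limits helps with exam options that quote specific part sizes.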

Domain 3: Design Secure Architectures

  • Encrypt EBS volumes restored from unencrypted EBS snapshots → Copy the snapshot and enable encryption with a new symmetric CMK while creating an EBS volume from the snapshot.
  • Limit the maximum number of requests from a single IP address → Create a rate-based rule in AWS WAF and set the rate limit.
  • Grant the bucket owner full access to all uploaded objects in the S3 bucket → Create a bucket policy that requires users to set the object’s ACL to bucket-owner-full-control.
  • Protect objects in the S3 bucket from accidental deletion or overwrite → Enable versioning and MFA delete.
  • Access resources both on-premises and on AWS using on-premises credentials that are stored in Active Directory → Set up SAML 2.0-based federation by using Microsoft Active Directory Federation Services.
  • Secure the sensitive data stored in EBS volumes → Enable EBS encryption.
  • Ensure that data in transit and data at rest in an Amazon S3 bucket are always encrypted → Enable Amazon S3 server-side encryption or use client-side encryption.
  • Secure the web application by allowing multiple domains to serve SSL traffic over the same IP address → Use AWS Certificate Manager to generate an SSL certificate, associate the certificate with the CloudFront distribution, and enable Server Name Indication (SNI).
  • Control access for several S3 buckets by using a gateway endpoint to allow access to trusted buckets → Create an endpoint policy for the trusted S3 buckets.
  • Enforce strict compliance by tracking all configuration changes made to any AWS service → Set up a rule in AWS Config to identify compliant and non-compliant services.
  • Provide short-lived access tokens that act as temporary security credentials to allow access to AWS resources → Use AWS Security Token Service.
  • Encrypt and rotate all database credentials, API keys, and other secrets on a regular basis → Use AWS Secrets Manager and enable automatic rotation of secrets.
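
One scenario above grants the bucket owner full access by requiring uploaders to set the bucket-owner-full-control ACL. Below is a sketch of what such a bucket policy could look like, built with Python's standard library; the bucket name and statement ID are hypothetical.

```python
import json

# Sketch of a bucket policy that denies PutObject unless the uploader sets
# the object ACL to bucket-owner-full-control. "example-bucket" and the Sid
# are hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBucketOwnerFullControl",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```

The explicit Deny with a `StringNotEquals` condition is the pattern to recognize: uploads missing the required ACL header are rejected outright.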

Domain 4: Design Cost-Optimized Architectures

  • A cost-effective solution for over-provisioned resources → Configure target tracking scaling in the ASG.
  • The application data is stored in a tape backup solution, and the backup data must be preserved for up to 10 years → Use AWS Storage Gateway to back up the data directly to Amazon S3 Glacier Deep Archive.
  • Accelerate the transfer of historical records from on-premises to AWS over the Internet in a cost-effective manner → Use AWS DataSync and select Amazon S3 Glacier Deep Archive as the destination.
  • Globally deliver static content and media files to customers around the world with low latency → Store the files in Amazon S3 and create a CloudFront distribution, selecting the S3 bucket as the origin.
  • An application must be hosted on two EC2 instances and should run continuously for three years, with stable and predictable CPU utilization → Deploy the application on Reserved Instances.
  • Implement a cost-effective solution for S3 objects that are accessed less frequently → Create an Amazon S3 lifecycle policy to move the objects to Amazon S3 Standard-IA.
  • Minimize the data transfer costs between two EC2 instances → Deploy the EC2 instances in the same Region.
  • Import the SSL/TLS certificate of the application → Import the certificate into AWS Certificate Manager or upload it to AWS IAM.
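
To see why a Reserved Instance fits the steady three-year workload above, a back-of-the-envelope comparison helps. All prices below are hypothetical, including the assumed 40% Reserved Instance discount; actual discounts depend on instance type, term, and payment option.

```python
# Back-of-the-envelope comparison with HYPOTHETICAL prices: a steady
# workload on two instances for three years, On-Demand vs. a Reserved
# Instance rate at an assumed 40% discount.
HOURS_PER_YEAR = 8_760
on_demand_rate = 0.10                      # $/hour, hypothetical
reserved_rate = on_demand_rate * 0.60      # assumed 40% discount

instances, years = 2, 3
hours = instances * years * HOURS_PER_YEAR

on_demand_cost = hours * on_demand_rate
reserved_cost = hours * reserved_rate
print(f"On-Demand: ${on_demand_cost:,.0f}  Reserved: ${reserved_cost:,.0f}  "
      f"Savings: ${on_demand_cost - reserved_cost:,.0f}")
```

The stable, predictable utilization is what makes the multi-year commitment safe; for spiky or short-lived workloads, On-Demand or Spot is usually the better fit.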

 

AWS Certified Solutions Architect Associate Video Course – SAA-C03

This is a concise Solutions Architect Associate video training course for the SAA-C03 exam. The goal of this video course is to equip you with the exam-specific knowledge that you need to understand in order to pass the SAA-C03 exam, presented in a highly visual form. Click here to enroll. Here is a sneak peek of our video course introduction:

Validate Your SAA-C03 Knowledge

When you are feeling confident with your review, it is best to validate your knowledge through sample exams. You can take this practice exam from AWS for free as additional material but do not expect your real exam to be on the same level of difficulty as this practice exam on the AWS website. Tutorials Dojo offers a very useful and well-reviewed set of practice tests for AWS Solutions Architect Associate SAA-C03 takers here. The practice test has over 400 unique questions and each question comes with detailed explanations, reference links, and cheat sheets. You can also pair our practice exams with our video course and exam study guide eBook to further help in your exam preparations.

If you have scored well on the Tutorials Dojo AWS Certified Solutions Architect Associate SAA-C03 practice tests and you think you are ready, then go earn your certification! If you think you are lacking in certain areas, review them again and take note of any hints in the questions that will help you select the correct answers.

AWS Certified Solutions Architect Associate Practice Exam SAA-C02 SAA-C03

 

Sample SAA-C03 Practice Test Questions

Question 1

A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances with different instance types and sizes. The application is extensively used during office hours from 9 in the morning to 5 in the afternoon. Their users are complaining that the performance of the application is slow during the start of the day but then works normally after a couple of hours.

Which of the following is the MOST operationally efficient solution to implement to ensure the application works properly at the beginning of the day?

  1. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization.
  2. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization.
  3. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day.
  4. Configure a Predictive scaling policy for the Auto Scaling group to automatically adjust the number of Amazon EC2 instances.

Correct Answer: 3

Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application. 

To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. The scheduled action tells Amazon EC2 Auto Scaling to perform a scaling action at specified times. To create a scheduled scaling action, you specify the start time when the scaling action should take effect and the new minimum, maximum, and desired sizes for the scaling action. At the specified time, Amazon EC2 Auto Scaling updates the group with the values for minimum, maximum, and desired size specified by the scaling action. You can create scheduled actions for scaling one time only or for scaling on a recurring schedule.
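
To make the scheduled action concrete, here is a sketch of the kind of parameters you would supply (for example, to the PutScheduledUpdateGroupAction API); the group name, cron schedule, and sizes are hypothetical.

```python
# Hypothetical shape of a scheduled scaling action: scale out every weekday
# shortly before 9 AM, ahead of the morning traffic spike described in the
# scenario. Group name, schedule, and sizes are made up for illustration.
scheduled_action = {
    "AutoScalingGroupName": "crm-asg",          # hypothetical group name
    "ScheduledActionName": "scale-out-morning",
    "Recurrence": "30 8 * * MON-FRI",           # cron: 8:30 AM on weekdays
    "MinSize": 4,
    "MaxSize": 10,
    "DesiredCapacity": 6,
}
print(scheduled_action["ScheduledActionName"],
      scheduled_action["Recurrence"])
```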

Hence, configuring a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day is the correct answer. You need to configure a Scheduled scaling policy. This will ensure that the instances are already scaled up and ready before the start of the day since this is when the application is used the most.

The following options are both incorrect. Although these are valid solutions, it is still better to configure a Scheduled scaling policy as you already know the exact peak hours of your application. By the time either the CPU or Memory hits a peak, the application already has performance issues, so you need to ensure the scaling is done beforehand using a Scheduled scaling policy:

-Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization

-Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization

The option that says: Configure a Predictive scaling policy for the Auto Scaling group to automatically adjust the number of Amazon EC2 instances is incorrect. Although this type of scaling policy can be used in this scenario, it is not the most operationally efficient option. Take note that the scenario mentioned that the Auto Scaling group consists of Amazon EC2 instances with different instance types and sizes. Predictive scaling assumes that your Auto Scaling group is homogeneous, which means that all EC2 instances are of equal capacity. The forecasted capacity can be inaccurate if you are using a variety of EC2 instance sizes and types in your Auto Scaling group.

References: 
https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html#predictive-scaling-limitations

Check out this AWS Auto Scaling Cheat Sheet:
https://tutorialsdojo.com/aws-auto-scaling/

Question 2

A financial application is composed of an Auto Scaling group of EC2 instances, an Application Load Balancer, and a MySQL RDS instance in a Multi-AZ Deployments configuration. To protect the confidential data of your customers, you have to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances via an authentication token.

As the Solutions Architect of the company, which of the following should you do to meet the above requirement?

  1. Enable the IAM DB Authentication.
  2. Configure SSL in your application to encrypt the database connection to RDS.
  3. Create an IAM Role and assign it to your EC2 instances which will grant exclusive access to your RDS instance.
  4. Use a combination of IAM and STS to restrict access to your RDS instance via a temporary token.

Correct Answer: 1

You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance. Instead, you use an authentication token.

An authentication token is a unique string of characters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don’t need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication.

IAM database authentication provides the following benefits:

  1. Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).

  2. You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.

  3. For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.

Hence, enabling IAM DB Authentication is the correct answer based on the above reference.

Configuring SSL in your application to encrypt the database connection to RDS is incorrect because an SSL connection is not using an authentication token from IAM. Although configuring SSL to your application can improve the security of your data in flight, it is still not a suitable option to use in this scenario.

Creating an IAM Role and assigning it to your EC2 instances which will grant exclusive access to your RDS instance is incorrect because although you can create and assign an IAM Role to your EC2 instances, you still need to configure your RDS to use IAM DB Authentication.

Using a combination of IAM and STS to restrict access to your RDS instance via a temporary token is incorrect because you have to use IAM DB Authentication for this scenario, and not a combination of an IAM and STS. Although STS is used to send temporary tokens for authentication, this is not a compatible use case for RDS.

Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html

Check out this Amazon RDS cheat sheet:
https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/

Click here for more AWS Certified Solutions Architect Associate practice exam questions.

Check out our other AWS practice test courses here:

 

To increase your chances of passing the AWS Certified Solutions Architect Associate exam, we recommend using a combination of our video course, our practice tests, and our study guide eBook. You can view our triple bundles here.

 

Additional SAA-C03 Training Materials: High-Quality Video Courses for the AWS Certified Solutions Architect Associate Exam

There are a few top-rated AWS Certified Solutions Architect Associate SAA-C03 video courses that you can check out as well, which can help in your exam preparations. The list below is constantly updated based on feedback from our students on which course/s helped them the most during their exams.

 

Some Notes Regarding Your SAA-C03 Exam

The AWS Solutions Architect Associate (SAA-C03) exam is full of questions that ask for highly available or cost-effective solutions. Be sure to understand the choices provided to you, and verify that they have the correct details. Some choices are very misleading in that they seem to be the most appropriate answer to the question but contain incorrect details about some services.

When unsure of which options are correct in a multi-select question, try to eliminate some of the choices that you believe are false. This will help narrow down the feasible answers to that question. The same goes for multiple-choice type questions. Be extra careful as well when selecting the number of answers you submit.

As mentioned in this review, you should be able to differentiate services that belong to one category from another. Common comparisons include:

  • EC2 vs ECS vs Lambda
  • S3 vs EBS vs EFS
  • CloudFormation vs OpsWorks vs Elastic Beanstalk
  • SQS vs SNS vs SES vs MQ
  • Security Groups vs NACLs
  • The different S3 storage types vs Glacier
  • RDS vs DynamoDB vs ElastiCache
  • RDS engines vs Aurora

The Tutorials Dojo Comparison of AWS Services contains excellent cheat sheets comparing these seemingly similar services which are crucial to solving the tricky scenario-based questions in the actual exam. By knowing each service’s capabilities and use cases, you can consider these types of questions already half-solved.

Lastly, be on the lookout for “key terms” that will help you realize the answer faster. Words such as millisecond latency, serverless, managed, highly available, most cost-effective, fault-tolerant, mobile, streaming, object storage, archival, polling, push notifications, etc are commonly seen in the exam. Time management is very important when taking AWS certification exams, so be sure to monitor the time you consume for each question.


The post AWS Certified Solutions Architect Associate Exam – SAA-C03 Study Path appeared first on Tutorials Dojo.

AWS Savings Plan https://tutorialsdojo.com/aws-savings-plan/ https://tutorialsdojo.com/aws-savings-plan/#respond Mon, 09 Nov 2020 23:25:25 +0000 https://tutorialsdojo.com/?p=9830 AWS Savings Plan Cheat Sheet Savings Plan is a flexible pricing model that helps you save up cost on Amazon EC2, AWS Fargate, and AWS Lambda usage. You can purchase Savings Plans from any account, payer or linked.  By default, the benefit provided by Savings Plans is applicable to usage across all accounts within [...]

The post AWS Savings Plan appeared first on Tutorials Dojo.


AWS Savings Plan Cheat Sheet

  • Savings Plan is a flexible pricing model that helps you save on costs for Amazon EC2, AWS Fargate, and AWS Lambda usage.
  • You can purchase Savings Plans from any account, payer or linked. 
  • By default, the benefit provided by Savings Plans is applicable to usage across all accounts within an AWS Organization/consolidated billing family. You can also choose to restrict the benefit of Savings Plans to only the account that purchased them.
  • Similar to Reserved Instances, you have All Upfront, Partial upfront, or No upfront payment options.

Plan Types

  • Compute Savings Plans – provide the most flexibility and prices that are up to 66 percent off On-Demand rates. These plans automatically apply to your EC2 instance usage regardless of instance family (e.g., M5, C5), instance size (e.g., c5.large, c5.xlarge), Region (e.g., us-east-1, us-east-2), operating system (e.g., Windows, Linux), or tenancy (Dedicated, default, Dedicated Host). They also apply to your Fargate and Lambda usage.
    • You can move a workload between different instance families, shift your usage between different Regions, or migrate your application from Amazon EC2 to Amazon ECS using Fargate at any time and continue to receive the discounted rate provided by your Savings Plan.
  • EC2 Instance Savings Plans – provide savings of up to 72 percent off On-Demand, in exchange for a commitment to a specific instance family in a chosen AWS Region (e.g., M5 in N. Virginia, us-east-1). These plans automatically apply to usage regardless of instance size, OS, and tenancy within the specified family in a Region.
    • You can change your instance size within the instance family (e.g., from c5.xlarge to c5.2xlarge) or the operating system (e.g., from Windows to Linux), or move from Dedicated tenancy to Default, and continue to receive the discounted rate provided by your Savings Plan.
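
The advertised maximums (up to 66 percent off for Compute Savings Plans, up to 72 percent off for EC2 Instance Savings Plans) translate to effective hourly rates as follows; the $0.10/hour On-Demand rate below is purely illustrative.

```python
# Illustrative arithmetic only: effective hourly rates under the maximum
# advertised discounts, applied to a HYPOTHETICAL $0.10/hour On-Demand rate.
on_demand = 0.10
compute_sp = on_demand * (1 - 0.66)        # Compute Savings Plans
ec2_instance_sp = on_demand * (1 - 0.72)   # EC2 Instance Savings Plans

print(f"Compute SP: ${compute_sp:.3f}/hr, "
      f"EC2 Instance SP: ${ec2_instance_sp:.3f}/hr")
```

The trade-off is flexibility versus depth of discount: the cheaper EC2 Instance Savings Plan locks you to one instance family in one Region.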

Savings Plan vs RIs

 

  • Savings over On-Demand: Compute Savings Plans up to 66 percent; EC2 Instance Savings Plans up to 72 percent; Convertible RIs up to 66 percent; Standard RIs up to 72 percent.
  • Automatically applies pricing to any instance family: Compute Savings Plans only.
  • Automatically applies pricing to any instance size: Compute and EC2 Instance Savings Plans (Convertible and Standard RIs: regional RIs only).
  • Automatically applies pricing to any tenancy or OS: Compute and EC2 Instance Savings Plans.
  • Automatically applies to Amazon ECS using Fargate and Lambda: Compute Savings Plans only.
  • Automatically applies pricing across AWS Regions: Compute Savings Plans only.
  • Term length options of 1 or 3 years: all four.

Monitoring

  • The Savings Plans Inventory page shows a detailed overview of the Savings Plans you own.
  • If you’re a user in a linked account of AWS Organizations, you can view the Savings Plans owned by your specific linked account. 
  • If you’re a user in the payer account in AWS Organizations, you can view Savings Plans owned only by the payer account, or you can view Savings Plans owned by all accounts in AWS Organizations.
  • You can use AWS Budgets to set budgets for your Savings Plan utilization, coverage, and costs.

AWS Savings Plan Cheat Sheet References:

https://aws.amazon.com/savingsplans/
https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
https://aws.amazon.com/savingsplans/faq/


The post AWS Savings Plan appeared first on Tutorials Dojo.

AWS Certified DevOps Engineer Professional Exam Guide Study Path DOP-C02 https://tutorialsdojo.com/aws-certified-devops-engineer-professional-exam-guide-study-path-dop-c01-dop-c02/ https://tutorialsdojo.com/aws-certified-devops-engineer-professional-exam-guide-study-path-dop-c01-dop-c02/#respond Fri, 23 Aug 2019 04:22:28 +0000 https://tutorialsdojo.com/?p=4529 Bookmarks Study Materials AWS Services to Focus On Common Exam Scenarios Validate Your Knowledge This certification is the pinnacle of your DevOps career in AWS. The AWS Certified DevOps Engineer Professional (or AWS DevOps Pro) is the advanced certification of both AWS SysOps Administrator Associate and AWS [...]

The post AWS Certified DevOps Engineer Professional Exam Guide Study Path DOP-C02 appeared first on Tutorials Dojo.


This certification is the pinnacle of your DevOps career in AWS. The AWS Certified DevOps Engineer Professional (or AWS DevOps Pro) is the advanced certification of both AWS SysOps Administrator Associate and AWS Developer Associate. This is similar to how the AWS Solutions Architect Professional role is a more advanced version of the AWS Solutions Architect Associate. 

Generally, AWS recommends that you first take (and pass) both the AWS SysOps Administrator Associate and AWS Developer Associate certification exams before taking on this certification. Previously, it was a prerequisite to obtain the associate-level certifications before you were allowed to go for the professional level. In October 2018, AWS removed this ruling to provide customers a more flexible approach to the certifications.

DOP-C02 Study Materials

The FREE AWS Exam Readiness course, official AWS sample questions, whitepapers, FAQs, AWS documentation, re:Invent videos, forums, labs, AWS cheat sheets, practice tests, and personal experience are what you will need to pass the exam. Since the DevOps Pro is one of the most difficult AWS certification exams out there, you should prepare with every study material you can get your hands on. If you need a refresher on the fundamentals of AWS DevOps, check out our review guides for the AWS SysOps Administrator Associate and AWS Developer Associate certification exams. Also, visit this AWS exam blueprint to learn more details about your certification exam.

For virtual classes, you can attend the DevOps Engineering on AWS and Systems Operations on AWS classes since they will teach you concepts and practices that are expected to be in your exam.

For whitepapers, focus on the following:

  1. Running Containerized Microservices on AWS
  2. Implementing Microservices on AWS
  3. Infrastructure as Code
  4. Introduction to DevOps
  5. Practicing Continuous Integration and Continuous Delivery on AWS
  6. Blue/Green Deployments on AWS
  7. Development and Test on AWS
  8. Disaster Recovery of Workloads on AWS: Recovery in the Cloud
  9. AWS Multi-Region Fundamentals

Almost all online training you need can be found on the AWS web page. One digital course that you should check out is the Exam Readiness: AWS Certified DevOps Engineer – Professional course. This digital course contains lectures on the different domains of your exam, and they also provide a short quiz right after each lecture to validate what you have just learned.

Exam Readiness AWS DevOps Engineer Professional

 

Lastly, do not forget to study the AWS CLI, SDKs, and APIs. Since the DevOps Pro is also the advanced certification for the Developer Associate, you need to have knowledge of programming and scripting in AWS. Go through the AWS documentation to review the syntax of CloudFormation templates, Serverless Application Model templates, CodeBuild buildspec files, CodeDeploy appspec files, and IAM policies.
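To make the appspec review concrete, here is a minimal sketch of a CodeDeploy appspec.yml for an EC2/on-premises deployment. The file paths and script names are illustrative assumptions, not part of any official template:

```yaml
# Minimal appspec.yml sketch for an EC2/on-premises deployment
# (source/destination paths and script names below are assumed for illustration)
version: 0.0
os: linux
files:
  - source: /app                      # files from the revision bundle
    destination: /var/www/app         # where they land on the instance
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
```

Knowing which lifecycle hooks run in which order (and which apply to EC2 versus Lambda/ECS deployments) is a common theme in exam questions.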

 

AWS Services to Focus On for the DOP-C02 Exam

Since this exam is a professional level one, you should already have a deep understanding of the AWS services listed under our SysOps Administrator Associate and Developer Associate review guides. In addition, you should familiarize yourself with the following services since they commonly come up in the DevOps Pro exam: 

  1. AWS CloudFormation
  2. AWS Lambda
  3. Amazon EventBridge
  4. Amazon CloudWatch Alarms
  5. AWS CodePipeline
  6. AWS CodeDeploy
  7. AWS CodeBuild
  8. AWS CodeCommit
  9. AWS Config
  10. AWS Systems Manager
  11. Amazon ECS
  12. AWS Elastic Beanstalk
  13. AWS CloudTrail
  14. AWS OpsWorks
  15. AWS Trusted Advisor

The FAQs provide a good summary of each service; however, the AWS documentation contains more detailed information that you’ll need to study. These details will be the deciding factor in distinguishing the correct choice from the incorrect choices in your exam. To supplement your review of the services, we recommend that you take a look at Tutorials Dojo’s AWS Cheat Sheets. Their contents are well-written and straight to the point, which will help reduce the time spent going through FAQs and documentation.

Common Exam Scenarios for DOP-C02

Scenario

Solution

Software Development and Lifecycle (SDLC) Automation

Automatically detect and prevent hardcoded secrets within AWS CodeCommit repositories.

Link the CodeCommit repositories to Amazon CodeGuru Reviewer to detect secrets in source code or configuration files, such as passwords, API keys, SSH keys, and access tokens.

An Elastic Beanstalk application must not have any downtime during deployment and requires an easy rollback to the previous version if an issue occurs.

Set up a Blue/Green deployment: deploy the new version to a separate environment, then swap environment URLs on Elastic Beanstalk.

A new version of an AWS Lambda application is ready to be deployed and the deployment should not cause any downtime. A quick rollback to the previous Lambda version must be available.

Publish a new version of the Lambda function. After testing, use the production Lambda Alias to point to this new version.

In an AWS Lambda application deployment, only 10% of the incoming traffic should be routed to the new version to verify the changes before eventually allowing all production traffic.

Set up Canary deployment for AWS Lambda. Create a Lambda Alias pointed to the new Version. Set Weighted Alias value for this Alias as 10%.

An application is hosted on Amazon EC2 instances behind an Application Load Balancer. You must provide a safe way to upgrade the version in production and allow easy rollback to the previous version.

Launch the application in Amazon EC2 that runs the new version with an Application Load Balancer (ALB) in front. Use Route 53 to change the ALB A-record Alias to the new ALB URL. Rollback by changing the A-record Alias to the old ALB.

An AWS OpsWorks application needs to safely deploy its new version on the production environment. You are tasked to prepare a rollback process in case of unexpected behavior.

Clone the OpsWorks Stack. Test it with the new URL of the cloned environment. Update the Route 53 record to point to the new version.

A development team needs full access to AWS CodeCommit but they should not be able to create/delete repositories.

Assign the developers the AWSCodeCommitPowerUser managed IAM policy

During the deployment, you need to run custom actions before deploying the new version of the application using AWS CodeDeploy.

Add lifecycle hook action BeforeAllowTraffic

You need to run custom verification actions after the new version is deployed using AWS CodeDeploy.

Add lifecycle hook action AfterAllowTraffic

You need to set up AWS CodeBuild to automatically run after a pull request has been successfully merged using AWS CodeCommit

Create an Amazon EventBridge rule that detects merged pull requests and triggers the CodeBuild project. Use AWS Lambda to update the pull request with the result of the build

You need to use AWS CodeBuild to create artifact and automatically deploy the new application version

Set CodeBuild to save artifact to S3 bucket. Use CodePipeline to deploy using CodeDeploy and set the build artifact from the CodeBuild output.

You need to upload the AWS CodeBuild artifact to Amazon S3

The S3 bucket needs to have versioning and encryption enabled.

You need to review AWS CodeBuild Logs and have an alarm notification for build results on Slack

Send AWS CodeBuild logs to CloudWatch Log group. Create an Amazon EventBridge rule to detect the result of your build and target a Lambda function to send results to the Slack channel (or SNS notification)

Need to get a Slack notification for the status of the application deployments on AWS CodeDeploy

Create an Amazon EventBridge rule to detect the result of CodeDeploy job and target a notification to AWS SNS or a Lambda function to send results to Slack channel

Need to run an AWS CodePipeline every day for updating the development progress status

Create an Amazon EventBridge rule to run on schedule every day and set a target to the AWS CodePipeline ARN

Automate deployment of a Lambda function and test for only 10% of traffic for 10 minutes before allowing 100% traffic flow.

Use CodeDeploy and select deployment configuration CodeDeployDefault.LambdaCanary10Percent10Minutes

Deployment of Elastic Beanstalk application with absolutely no downtime. The solution must maintain full compute capacity during deployment to avoid service degradation.

Choose the “Rolling with additional Batch” deployment policy in Elastic Beanstalk

Deployment of Elastic Beanstalk application where the new version must not be mixed with the current version.

Choose the “Immutable deployments” deployment policy in Elastic Beanstalk
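The Lambda deployment scenarios above (the BeforeAllowTraffic/AfterAllowTraffic hooks plus canary traffic shifting) come together in the CodeDeploy appspec for Lambda. A hedged sketch, with all function, alias, and version names assumed for illustration:

```yaml
version: 0.0
Resources:
  - myLambdaApp:                        # logical name (assumed)
      Type: AWS::Lambda::Function
      Properties:
        Name: my-function               # Lambda function name (assumed)
        Alias: production               # alias whose traffic is shifted
        CurrentVersion: "1"             # version currently serving traffic
        TargetVersion: "2"              # newly published version
Hooks:
  - BeforeAllowTraffic: "preTrafficCheck"   # validation Lambda (assumed name)
  - AfterAllowTraffic: "postTrafficCheck"   # verification Lambda (assumed name)
```

Pairing an appspec like this with the CodeDeployDefault.LambdaCanary10Percent10Minutes deployment configuration routes 10% of traffic to the target version for 10 minutes before shifting the remainder.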

Configuration Management and Infrastructure-as-Code

The resources on the parent CloudFormation stack need to be referenced by other nested CloudFormation stacks

Use Export on the Output field of the main CloudFormation stack and use Fn::ImportValue function to import the value on the other stacks

On which part of the CloudFormation template should you define the artifact zip file on the S3 bucket?

The artifact file is defined on the AWS::Lambda::Function code resource block

Need to define the AWS Lambda function inline in the CloudFormation template

On the AWS::Lambda::Function code resource block, the inline function must be enclosed inside the ZipFile section.

Use CloudFormation to update Auto Scaling Group and only terminate the old instances when the newly launched instances become fully operational

Set the AutoScalingReplacingUpdate update policy’s WillReplace property to true to have CloudFormation retain the old ASG until the instances in the new ASG are healthy.

You need to scale-down the EC2 instances at night when there is low traffic using OpsWorks.

Create Time-based instances for automatic scaling of predictable workload.

Can’t install an agent on on-premises servers but need to collect information for migration

Deploy the Agentless Discovery Connector VM on your on-premises data center to collect information.

Syntax for CloudFormation with an Amazon ECS cluster with ALB

Use the AWS::ECS::Cluster element for the ECS cluster, the AWS::ECS::Service element for the ECS service, the AWS::ECS::TaskDefinition element for the ECS task definitions, and the AWS::ElasticLoadBalancingV2::LoadBalancer element for the ALB.
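The cross-stack reference pattern from the first row of this section can be sketched in two template fragments; the export name and logical resource names are assumptions for illustration:

```yaml
# Parent (exporting) stack — publishes the VPC ID under an export name
Outputs:
  VpcId:
    Value: !Ref MyVpc                  # assumed VPC resource in this stack
    Export:
      Name: shared-vpc-id              # export name (assumed)

# Consuming stack — imports the exported value by name
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: App security group
      VpcId: !ImportValue shared-vpc-id
```

Note that an exported value cannot be deleted or modified while another stack still imports it, which is a detail the exam likes to probe.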

Monitoring and Logging

Need to centralize audit and collect configuration setting on all regions of multiple accounts

Setup an Aggregator on AWS Config.

Consolidate CloudTrail log files from multiple AWS accounts

Create a central S3 bucket with bucket policy to grant cross-account permission. Set this as destination bucket on the CloudTrail of the other AWS accounts.

Ensure that CloudTrail logs on the S3 bucket are protected and cannot be tampered with.

Enable Log File Validation on CloudTrail settings

Need to collect/investigate application logs from EC2 or on-premises server

Install the unified CloudWatch Agent to send the logs to CloudWatch Logs for storage and viewing.

Need to review logs from running ECS Fargate tasks

Enable awslogs log driver on the Task Definition and add the required logConfiguration parameter.

Need to run real-time analysis for collected application logs

Send the logs to CloudWatch Logs and create a subscription filter with an AWS Lambda function, Amazon OpenSearch Service (Elasticsearch), or Kinesis stream as the destination.

Need to be automatically notified if you are reaching the limit of running EC2 instances or limit of Auto Scaling Groups

Track service limits with Trusted Advisor on CloudWatch Alarms using the ServiceLimitUsage metric.

Need a near real-time dashboard with a feature to detect violations for compliance

Use AWS Config to record all configuration changes and store the data reports to Amazon S3. Use Amazon QuickSight to analyze the dataset.
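For the cross-account CloudTrail scenario above, the central bucket policy generally needs to let the CloudTrail service principal check the bucket ACL and write under each account’s AWSLogs prefix. A sketch, with the bucket name and account IDs as placeholder assumptions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::central-trail-logs"
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::central-trail-logs/AWSLogs/111111111111/*",
        "arn:aws:s3:::central-trail-logs/AWSLogs/222222222222/*"
      ],
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
```

Each member account’s trail is then configured to deliver to this central bucket.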

Security and Compliance

Need to monitor the latest Common Vulnerabilities and Exposures (CVE) of EC2 instances.

Use Amazon Inspector to automate security vulnerability assessments to test the network accessibility of the EC2 instances and the security state of applications that run on the instances.

Need to secure the buildspec.yml file which contains the AWS keys and database password stored in plaintext.

Store these values as encrypted (SecureString) parameters in SSM Parameter Store.

Using default IAM policies for AWSCodeCommitPowerUser but must be limited to a specific repository only

Attach additional policy with Deny rule and custom condition if it does not match the specific repository or branch

You need to secure an S3 bucket by ensuring that only HTTPS requests are allowed for compliance purposes.

Create an S3 bucket policy with a Deny effect for any request where the aws:SecureTransport condition is false

Need to store a secret, database password, or variable, in the most cost-effective solution

Store the variable on SSM Parameter Store and enable encryption

Need to generate a secret password and have it rotated automatically at regular intervals

Store the secret on AWS Secrets Manager and enable key rotation.

Several team members, with designated roles, need to be granted permission to use AWS resources

Assign AWS managed policies to the IAM accounts, such as ReadOnlyAccess, AdministratorAccess, and PowerUserAccess

Apply latest patches on EC2 and automatically create an AMI

Use Systems Manager automation to execute an Automation Document that installs OS patches and creates a new AMI.

Need to have a secure SSH connection to EC2 instances and have a record of all commands executed during the session

Install SSM Agent on EC2 and use SSM Session Manager for the SSH access. Send the session logs to S3 bucket or CloudWatch Logs for auditing and review.

Ensure that the managed EC2 instances have the correct application version and patches installed.

Use SSM Inventory to have a visibility of your managed instances and identify their current configurations.

Apply custom patch baseline from a custom repository and schedule patches to managed instances

Use SSM Patch Manager to define a custom patch baseline and schedule the application patches using SSM Maintenance Windows
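The HTTPS-only S3 scenario above is usually enforced with a bucket policy along these lines (the bucket name is an assumption for illustration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-compliance-bucket",
        "arn:aws:s3:::my-compliance-bucket/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```

Because an explicit Deny overrides any Allow, plain HTTP requests are rejected regardless of other grants on the bucket.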

Incident and Event Response

There are missing assets in File Gateway but exist directly in the S3 bucket

Run the RefreshCache command for Storage Gateway to refresh the cached inventory of objects for the specified file share.

Need to filter a certain event in CloudWatch Logs

Set up a CloudWatch metric filter to search the particular events.

Need to get a notification if somebody deletes files in your S3 bucket

Setup Amazon S3 Event Notifications to get notifications based on specified S3 events on a particular bucket.

Need to be notified when an RDS Multi-AZ failover happens

Setup Amazon RDS Event Notifications to detect specific events on RDS.

Get a notification if somebody uploaded IAM access keys on any public GitHub repositories

Create an Amazon EventBridge rule for the AWS_RISK_CREDENTIALS_EXPOSED event from AWS Health Service. Use AWS Step Functions to automatically delete the IAM key.

Get notified on Slack when your EC2 instance is having an AWS-initiated maintenance event

Create an Amazon EventBridge rule for the AWS Health Service to detect EC2 Events. Target a Lambda function that will send a notification to the Slack channel

Get notified of any AWS maintenance or events that may impact your EC2 or RDS instances

Create an Amazon EventBridge rule for detecting any events on AWS Health Service and send a message to an SNS topic or invoke a Lambda function.

Monitor scaling events of your Amazon EC2 Auto Scaling Group such as launching or terminating an EC2 instance.

Use Amazon EventBridge Events for monitoring the Auto Scaling Service and monitor the EC2 Instance-Launch Successful and EC2 Instance-Terminate Successful events.

View object-level actions of S3 buckets such as upload or deletion of object in CloudTrail

Set up Data events on your CloudTrail trail to record object-level API activity on your S3 buckets.

Execute a custom action if a specific CodePipeline stage has a FAILED status

Create an Amazon EventBridge rule to detect failed state on the CodePipeline service, and set a target to SNS topic for notification or invoke a Lambda function to perform custom action.

Automatically rollback a deployment in AWS CodeDeploy when the number of healthy instances is lower than the minimum requirement.

On CodeDeploy, create a deployment alarm that is integrated with Amazon CloudWatch. Track the MinimumHealthyHosts metric for the threshold of EC2 instances and trigger the rollback if the alarm is breached.

Need to complete QA testing before deploying a new version to the production environment

Add a Manual approval step on AWS CodePipeline, and instruct the QA team to approve the step before the pipeline can resume the deployment.

Get notified for OpsWorks auto-healing events

Create an Amazon EventBridge rule for the OpsWorks Service to track the auto-healing events
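Many of the rows above reduce to an EventBridge event pattern plus a target. For example, detecting a FAILED CodePipeline stage can use a pattern roughly like the following (the stage name is an assumption for illustration):

```json
{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Stage Execution State Change"],
  "detail": {
    "state": ["FAILED"],
    "stage": ["Deploy"]
  }
}
```

The rule’s target can then be an SNS topic for notification or a Lambda function that performs the custom action.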

Resilient Cloud Solutions

Need to deploy stacks across multiple AWS accounts and regions

Use AWS CloudFormation StackSets to extend the capability of stacks to create, update, or delete stacks across multiple accounts and AWS Regions with a single operation.

Need to ensure that both the application and the database are running in the event that one Availability Zone becomes unavailable.

Deploy your application on multiple Availability Zones and set up your Amazon RDS database to use Multi-AZ Deployments.

In the event of an AWS Region outage, you have to make sure that both your application and database will still be running to avoid any service outages.

Create a copy of your deployment on the backup AWS region. Set up an RDS Read-Replica on the backup region.

Automatically switch traffic to the backup region when your primary AWS region fails

Set up Route 53 Failover routing policy with health check enabled on your primary region endpoint.

Need to ensure the availability of a legacy application running on a single EC2 instance

Set up an Auto Scaling Group with MinSize=1 and MaxSize=1 configuration to set a fixed count and ensure that it will be replaced when the instance becomes unhealthy

Ensure that every EC2 instance on an Auto Scaling group downloads the latest code first before being attached to a load balancer

Create an Auto Scaling Lifecycle hook and configure the Pending:Wait hook with the action to download all necessary packages.

Ensure that all EC2 instances on an Auto Scaling group upload all log files in the S3 bucket before being terminated.

Use the Auto Scaling Lifecycle and configure the Terminating:Wait hook with the action to upload all logs to the S3 bucket.
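The two lifecycle-hook scenarios above can be expressed in CloudFormation roughly as follows; the logical names are assumptions, and the scripts that download code or upload logs would run while the instance waits in the Pending:Wait or Terminating:Wait state before the hook is signaled to continue:

```yaml
Resources:
  DownloadCodeHook:
    Type: AWS::AutoScaling::LifecycleHook
    Properties:
      AutoScalingGroupName: !Ref WebServerAsg        # assumed ASG resource
      LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
      HeartbeatTimeout: 300
      DefaultResult: ABANDON      # do not serve traffic if setup never completes
  UploadLogsHook:
    Type: AWS::AutoScaling::LifecycleHook
    Properties:
      AutoScalingGroupName: !Ref WebServerAsg
      LifecycleTransition: autoscaling:EC2_INSTANCE_TERMINATING
      HeartbeatTimeout: 300
      DefaultResult: CONTINUE     # proceed with termination after the timeout
```

The instance (or an automation target) calls complete-lifecycle-action once its work is done, which moves the instance out of the wait state.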

 

Validate Your Knowledge

After your review, you should take some practice tests to measure your preparedness for the real exam. AWS offers a free sample practice test, and you can also opt to buy the longer AWS sample practice test at aws.training, using the discount coupon you received from any previously taken certification exam. Be aware, though, that the sample practice tests do not mimic the difficulty of the real DevOps Pro exam.

Therefore, we highly encourage using other mock exams such as our very own AWS Certified DevOps Engineer Professional Practice Exam course which contains high-quality questions with complete explanations on correct and incorrect answers, visual images and diagrams, YouTube videos as needed, and also contains reference links to official AWS documentation as well as our cheat sheets and study guides. You can also pair our practice exams with our AWS Certified DevOps Engineer Professional Exam Study Guide eBook to further help in your exam preparations.

 

Sample Practice Test Questions For DOP-C02:

Question 1

An application is hosted in an Auto Scaling group of Amazon EC2 instances with public IP addresses in a public subnet. The instances are configured with a user data script that fetches and installs the required system dependencies of the application from the Internet upon launch. A change was recently introduced to prohibit any Internet access from these instances to improve security, but after its implementation, the instances could no longer get the external dependencies. Upon investigation, all instances are properly running, but the hosted application is not starting up completely due to the incomplete installation.

Which of the following is the MOST secure solution to solve this issue and also ensure that the instances do not have public Internet access?

  1. Download all of the external application dependencies from the public Internet and then store them to an S3 bucket. Set up a VPC endpoint for the S3 bucket and then assign an IAM instance profile to the instances in order to allow them to fetch the required dependencies from the bucket.
  2. Deploy the Amazon EC2 instances in a private subnet and associate Elastic IP addresses on each of them. Run a custom shell script to disassociate the Elastic IP addresses after the application has been successfully installed and is running properly.
  3. Use a NAT gateway to disallow any traffic to the VPC which originated from the public Internet. Deploy the Amazon EC2 instances to a private subnet then set the subnet’s route table to use the NAT gateway as its default route.
  4. Set up a brand new security group for the Amazon EC2 instances. Use a whitelist configuration to only allow outbound traffic to the site where all of the application dependencies are hosted. Delete the security group rule once the installation is complete. Use AWS Config to monitor the compliance.

Correct Answer: 1

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

There are two types of VPC endpoints: interface endpoints and gateway endpoints. You can create the type of VPC endpoint required by the supported service. S3 and DynamoDB are using Gateway endpoints while most of the services are using Interface endpoints.

You can use an S3 bucket to store the required dependencies and then set up a VPC Endpoint to allow your EC2 instances to access the data without having to traverse the public Internet.
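As a rough illustration, a gateway endpoint for S3 can be declared in CloudFormation like this (the VPC and route table logical names are assumptions):

```yaml
Resources:
  S3GatewayEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref AppVpc                        # assumed VPC resource
      ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
      VpcEndpointType: Gateway
      RouteTableIds:
        - !Ref PrivateRouteTable                # assumed private subnet route table
```

Associating the endpoint with the private subnet’s route table lets the instances reach S3 without any Internet gateway or NAT device.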

Hence, the correct answer is the option that says: Download all of the external application dependencies from the public Internet and then store them to an S3 bucket. Set up a VPC endpoint for the S3 bucket and then assign an IAM instance profile to the instances in order to allow them to fetch the required dependencies from the bucket.

The option that says: Deploy the Amazon EC2 instances in a private subnet and associate Elastic IP addresses on each of them. Run a custom shell script to disassociate the Elastic IP addresses after the application has been successfully installed and is running properly is incorrect because it is possible that the custom shell script may fail and the disassociation of the Elastic IP addresses might not be fully implemented which will allow the EC2 instances to access the Internet.

The option that says: Use a NAT gateway to disallow any traffic to the VPC which originated from the public Internet. Deploy the Amazon EC2 instances to a private subnet then set the subnet’s route table to use the NAT gateway as its default route is incorrect because although a NAT Gateway can safeguard the instances from any incoming traffic that were initiated from the Internet, it still permits them to send outgoing requests externally.

The option that says: Set up a brand new security group for the Amazon EC2 instances. Use a whitelist configuration to only allow outbound traffic to the site where all of the application dependencies are hosted. Delete the security group rule once the installation is complete. Use AWS Config to monitor the compliance is incorrect because this solution has a high operational overhead since the actions are done manually. This is susceptible to human error such as in the event that the DevOps team forgets to delete the security group. The use of AWS Config will just monitor and inform you about the security violation but it won’t do anything to remediate the issue.

References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html

Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/

Question 2

A DevOps engineer has been tasked to implement a reliable solution to maintain all of their Windows and Linux servers, both in AWS and in their on-premises data center. There should be a system that allows them to easily update the operating systems of their servers and apply the core application patches with minimal management overhead. The patches must be consistent across all levels in order to meet the company’s security compliance.

Which of the following is the MOST suitable solution that you should implement?

  1. Configure and install AWS Systems Manager agent on all of the EC2 instances in your VPC as well as your physical servers on-premises. Use the Systems Manager Patch Manager service and specify the required Systems Manager Resource Groups for your hybrid architecture. Utilize a preconfigured patch baseline and then run scheduled patch updates during maintenance windows.
  2. Configure and install the AWS OpsWorks agent on all of your EC2 instances in your VPC and your on-premises servers. Set up an OpsWorks stack with separate layers for each OS then fetch a recipe from the Chef supermarket site (supermarket.chef.io) to automate the execution of the patch commands for each layer during maintenance windows.
  3. Develop a custom python script to install the latest OS patches on the Linux servers. Set up a scheduled job to automatically run this script using the cron scheduler on Linux servers. Enable Windows Update in order to automatically patch Windows servers or set up a scheduled task using Windows Task Scheduler to periodically run the python script.
  4. Store the login credentials of each Linux and Windows servers on the AWS Systems Manager Parameter Store. Use Systems Manager Resource Groups to set up one group for your Linux servers and another one for your Windows servers. Remotely login, run, and deploy the patch updates to all of your servers using the credentials stored in the Systems Manager Parameter Store and through the use of the Systems Manager Run Command.

Correct Answer: 1

AWS Systems Manager Patch Manager automates the process of patching managed instances with both security-related and other types of updates. You can use the Patch Manager to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for Microsoft applications.) You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux 2. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches.

Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling patching to run as a Systems Manager maintenance window task. You can also install patches individually or to large groups of instances by using Amazon EC2 tags. You can add tags to your patch baselines themselves when you create or update them.

A resource group is a collection of AWS resources that are all in the same AWS Region and that match the criteria provided in a query. You build queries in the AWS Resource Groups (Resource Groups) console or pass them as arguments to Resource Groups commands in the AWS CLI.

With AWS Resource Groups, you can create a custom console that organizes and consolidates information based on criteria that you specify in tags. After you add resources to a group you created in Resource Groups, use AWS Systems Manager tools such as Automation to simplify management tasks on your resource group. You can also use the resource groups you create as the basis for viewing monitoring and configuration insights in Systems Manager.
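As an illustrative sketch of how a patch baseline for this setup might look in CloudFormation (the name, operating system, patch group, and approval-rule values are all assumptions):

```yaml
Resources:
  SecurityPatchBaseline:
    Type: AWS::SSM::PatchBaseline
    Properties:
      Name: prod-linux-baseline          # assumed baseline name
      OperatingSystem: AMAZON_LINUX_2
      PatchGroups:
        - prod-linux                     # instances tagged "Patch Group: prod-linux"
      ApprovalRules:
        PatchRules:
          - ApproveAfterDays: 7          # auto-approve a week after release
            ComplianceLevel: CRITICAL
            PatchFilterGroup:
              PatchFilters:
                - Key: CLASSIFICATION
                  Values:
                    - Security
```

A maintenance window would then run the patch task against the patch group on a schedule.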

Hence, the correct answer is: Configure and install AWS Systems Manager agent on all of the EC2 instances in your VPC as well as your physical servers on-premises. Use the Systems Manager Patch Manager service and specify the required Systems Manager Resource Groups for your hybrid architecture. Utilize a preconfigured patch baseline and then run scheduled patch updates during maintenance windows.

The option which uses an AWS OpsWorks agent is incorrect because the OpsWorks service is primarily used for application deployment and not for applying application patches or upgrading the operating systems of your servers.

The option which uses a custom python script is incorrect because this solution entails a high management overhead since you need to develop a new script and maintain a number of cron schedulers in your Linux servers and Windows Task Scheduler jobs on your Windows servers.

The option which uses the AWS Systems Manager Parameter Store is incorrect because this is not a suitable service to use to handle the patching activities of your servers. You have to use AWS Systems Manager Patch Manager instead.

References:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-resource-groups.html

Check out this AWS Systems Manager Cheat Sheet:
https://tutorialsdojo.com/aws-systems-manager/

Click here for more AWS Certified DevOps Engineer Professional practice exam questions.


To get more in-depth insights on the hardcore concepts that you should know to pass the DevOps Pro exam, we do highly recommend that you also get our DevOps Engineer Professional Study Guide eBook.

At this point, you should already be very knowledgeable on the following domains:

  1. CI/CD, Application Development and Automation
  2. Configuration Management and Infrastructure as Code
  3. Security, Monitoring and Logging
  4. Incident Mitigation and Event Response
  5. Implementing High Availability, Fault Tolerance, and Disaster Recovery

Additional Training Materials For DOP-C02

There are a few AWS Certified DevOps Engineer – Professional video courses that you can check out as well, which can complement your exam preparations:

  1. AWS Certified DevOps Engineer – Professional by Adrian Cantrill

As an AWS DevOps practitioner, you shoulder a lot of roles and responsibilities. Many professionals in the industry have attained proficiency through continuous practice and producing results of value. Therefore, you should properly review all the concepts and details that you need to learn so that you can also achieve what others have achieved.

The day before your exam, be sure to double-check the schedule, location, and items to bring for your exam. During the exam itself, you have 180 minutes to answer all questions and recheck your answers. Be sure to manage your time wisely. It will also be very beneficial for you to review your notes before you go in to refresh your memory. The AWS DevOps Pro certification is very tough to pass, and the choices for each question can be very misleading if you do not read them carefully. Be sure to understand what is being asked in the questions, and what options are offered to you. With that, we wish you all the best in your exam!


The post AWS Certified DevOps Engineer Professional Exam Guide Study Path DOP-C02 appeared first on Tutorials Dojo.

]]>
https://tutorialsdojo.com/aws-certified-devops-engineer-professional-exam-guide-study-path-dop-c01-dop-c02/feed/ 0 4529
AWS Certified Developer Associate Exam Guide Study Path DVA-C02 https://tutorialsdojo.com/aws-certified-developer-associate-exam-guide-study-path-dva-c02/ https://tutorialsdojo.com/aws-certified-developer-associate-exam-guide-study-path-dva-c02/#respond Fri, 23 Aug 2019 02:58:18 +0000 https://tutorialsdojo.com/?p=4518 Bookmarks AWS Certified Developer Study Materials AWS Services to Focus On Common Exam Scenarios AWS Certified Developer Associate DVA-C02 Video Course Validate Your Knowledge The AWS Certified Developer Associate DVA-C02 certification is for those who are interested in handling cloud-based applications and services. Typically, applications developed in [...]

The post AWS Certified Developer Associate Exam Guide Study Path DVA-C02 appeared first on Tutorials Dojo.

]]>

The AWS Certified Developer Associate DVA-C02 certification is for those who are interested in handling cloud-based applications and services. Typically, applications developed in AWS are sold as products in the AWS Marketplace. This allows other customers to use the customized, cloud-compatible application for their own business needs. Because of this, AWS developers should be proficient in using the AWS CLI, APIs, and SDKs for application development.

The AWS Certified Developer Associate exam (or AWS CDA for short) will test your ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices.
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS.

Having prior experience in programming and scripting for standard, containerized, and/or serverless applications will make your review much easier. Additionally, we recommend having an AWS account available to play around with so you can better visualize the parts of your review that involve code. For more details regarding your exam, you can check out this AWS exam blueprint and official sample DVA-C02 questions for the AWS Certified Developer Associate exam.

AWS Certified Developer DVA-C02 Study Materials

If you are not well-versed in the fundamentals of AWS, we suggest that you visit our AWS Certified Cloud Practitioner review guide to get started. AWS also offers a free virtual course called AWS Cloud Practitioner Essentials that you can take in their AWS Skill Builder training portal or via their AWS APN sites like the https://portal.tutorialsdojo.com website. Knowing the basic concepts and services of AWS will make the more challenging topics easier to understand.

The primary study materials you’ll be using for your review are the: FREE AWS Exam Readiness video course, official AWS sample questions, AWS whitepapers, FAQs, AWS cheat sheets, and AWS practice exams.


For whitepapers, they include the following:

  1. Microservices on AWS – This paper introduces the ways you can implement a microservice system on different AWS Compute platforms. You should study how these systems are built and the reasoning behind the chosen services for that system.
  2. Running Containerized Microservices on AWS – This paper talks about the best practices in deploying a containerized microservice system in AWS. Focus on the example scenarios where the best practices are applied, how they are applied, and using which services to do so.
  3. Optimizing Enterprise Economics with Serverless Architectures – Read upon the use cases of serverless in different platforms. Understand when it is best to use serverless vs maintaining your own servers. Also, familiarize yourself with the AWS services that are under the serverless toolkit.
  4. AWS Serverless Multi-Tier Architectures with Amazon API Gateway and AWS Lambda – Learn how to implement common serverless patterns on applications such as microservices, mobile backends, and single-page applications. You can use API Gateway, AWS Lambda, and other services to reduce the development and operations cycles required to create and manage multi-tiered applications.

  5. Practicing Continuous Integration and Continuous Delivery on AWS: Accelerating Software Delivery with DevOps – If you are a developer aiming for the DevOps track, then this whitepaper is packed with practices for you to learn. CI/CD involves many stages that allow you to deploy your applications faster. Therefore, you should study the different deployment methods and understand how each of them works. Also, familiarize yourself with the implementation of CI/CD in AWS. We recommend performing a lab of this in your AWS account.
  6. Blue/Green Deployments on AWS – Blue/Green Deployments is a popular deployment method that you should learn as an AWS Developer. Study how blue/green deployments are implemented and using what set of AWS services. It is also crucial that you understand the scenarios where blue/green deployments are beneficial and where they are not. Do NOT mix up your blue environment with your green environment.
  7. AWS Security Best Practices – Understand the security best practices and their purpose in your environment. Some services offer more than one form of security feature, such as multiple key management schemes for encryption. It is important that you can determine which form is most suitable to the given scenarios in your exam.
  8. AWS Well-Architected Framework – This whitepaper is one of the most important papers that you should study for the exam. It discusses the different pillars that make up a well-architected cloud environment. Expect the scenarios in your exam to be heavily based on these pillars. Each pillar has a corresponding whitepaper of its own that discusses the respective pillar in more detail.

Also, check out this article: Top 5 FREE AWS Review Materials.

 

AWS Services For DVA-C02 to Focus On 

AWS offers extensive documentation and well-written FAQs for all of its services. These two will be your primary sources of information when studying AWS. You need to be well-versed in a number of AWS products and services since you will almost always be using them in your work. I recommend checking out Tutorials Dojo’s AWS Cheat Sheets, which provide a summarized but highly informative set of notes and tips for your review of these services.

 

Services to study for:

  1. Amazon EC2 / ELB / Auto Scaling – Be comfortable with integrating EC2 to ELBs and Auto Scaling. Study the commonly used AWS CLI commands, APIs, and SDK code under these services. Focus as well on security, maintaining high availability, and enabling network connectivity from your ELB to your EC2 instances.
  2. AWS Elastic Beanstalk – Know when Elastic Beanstalk is more appropriate to use than other compute solutions or infrastructure as a code solution like CloudFormation or OpsWorks. Experiment with the service yourself in your AWS account, and understand how you can deploy and maintain your own application in Beanstalk.
  3. Amazon ECS – Study how you can manage your own cluster using ECS. Also, figure out how ECS can be integrated into a CI/CD pipeline. Be sure to read the FAQs thoroughly since the exam includes multiple questions about containers.
  4. AWS Lambda – The best way to learn Lambda is to create a function yourself. Also, remember that Lambda allows custom runtimes that customers can provide themselves. Figure out what services can be integrated with Lambda, and how Lambda functions can capture and manipulate incoming events. Lastly, study the Serverless Application Model (SAM).
  5. Amazon RDS / Amazon Aurora – Understand how RDS integrates with your application through EC2, ECS, Elastic Beanstalk and more. Compare RDS to DynamoDB and Elasticache and determine when RDS is best used. Also know when it is better to use Amazon Aurora than Amazon RDS, and when RDS is more useful than hosting your own database inside an EC2 instance.
  6. Amazon DynamoDB – You should have a complete understanding of the DynamoDB service as this is very crucial in your exam. Read the DynamoDB documentation since it is more detailed and informative than the FAQ. As a developer, you should also know how to provision your own DynamoDB table, and you should be capable of tweaking its settings to meet application requirements.
  7. Amazon ElastiCache – ElastiCache is a caching service that you’ll be encountering often in the exam. Compare and contrast Redis with Memcached. Determine when ElastiCache is more suitable than DynamoDB or RDS.
  8. Amazon S3 – S3 is usually your go-to storage for objects. Study how you can secure your objects through KMS encryption, ACLs, and bucket policies. Know how S3 stores your objects to keep them highly durable and available. Also, learn about lifecycle policies. Compare S3 to EBS and EFS to know when S3 is preferred over the other two.
  9. Amazon EFS – EFS is used to set up file systems for multiple EC2 instances. Compare and contrast S3 to EFS and EBS. Also, study on file encryption and optimizing EFS performance.
  10. Amazon Kinesis – There are usually tricky questions on Kinesis, so you should read its documentation too. Focus on Kinesis Data Streams. Also, have an idea of the other Kinesis services. Familiarize yourself with Kinesis APIs, Kinesis sharding, and integration with storage services such as S3 or compute services such as Lambda.
  11. Amazon API Gateway – API Gateway is usually used together with AWS Lambda as part of the serverless application model. Understand API Gateway’s structure, such as resources, stages, and methods. Learn how you can combine API Gateway with other AWS services such as Lambda or CloudFront. Determine how you can secure your APIs so that only a select number of people can execute them.
  12. Amazon Cognito – Cognito is used for mobile and web authentication. You usually encounter Cognito questions in the exam along with Lambda, API Gateway, and DynamoDB. This usually involves some mobile application requiring an easy sign up/sign in feature from AWS. It is highly suggested that you try using Cognito to better understand its features.
  13. Amazon SQS – Study the purpose of different SQS queues, timeouts, and how your messages are handled inside queues. Messages in an SQS queue are not deleted when polled, so be sure to read on that as well. There are different polling mechanisms in SQS, so you should compare and contrast each one.
  14. Amazon CloudWatch – CloudWatch is your primary monitoring tool for all your AWS services. Be sure to know what metrics can be found under CloudWatch monitoring, and what metrics require a CloudWatch agent installed. Also study CloudWatch Logs, CloudWatch Alarms, and Billing monitoring. Differentiate the kinds of logs stored in CloudWatch vs logs stored in CloudTrail.
  15. AWS IAM – IAM is the security center of your cloud. Therefore, you should familiarize yourself with the different IAM features. Study how IAM policies are written, and what each section in the policy means. Understand the usage of IAM user roles and service roles. Read up on the security best practices whitepaper for securing your AWS account through IAM.
  16. AWS KMS – KMS contains keys that you use to encrypt EBS, S3, and other services. Know what these services are. Learn the different types of KMS keys and in which situations each type of key is used.
  17. AWS CodeBuild / AWS CodeCommit / AWS CodeDeploy / AWS CodePipeline – These are your tools for implementing CI/CD in AWS. Study how you can build applications in CodeBuild (buildspec), and how you’ll prepare configuration files (appspec) for CodeDeploy. CodeCommit is a Git repository, so having knowledge of Git will be beneficial. I suggest building a simple pipeline of your own in CodePipeline to see how you should manage your code deployments. It is also important to learn how you can roll back to your previous application version after a failed deployment. The whitepapers above explain in-place deployments and blue/green deployments, and how to automate them.
  18. AWS CloudFormation – Study the structure of CloudFormation templates and how you can use them to build your infrastructure. Be comfortable with both JSON and YAML formats. Read a bit about StackSets. List down the services that use CloudFormation in the backend for provisioning AWS resources, such as AWS SAM, and processes such as CI/CD.
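Several of the services above come together in a single Lambda function. As a minimal sketch (the event below is hand-built to mirror the documented S3 notification shape; the bucket and key values are purely illustrative), a Python handler that pulls fields out of an incoming event might look like:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler sketch.

    Assumes an S3-style notification event; the structure mirrors
    the documented S3 event format, but the values are illustrative.
    """
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {
        "statusCode": 200,
        "body": json.dumps({"bucket": bucket, "key": key}),
    }

# Invoke the handler locally with a hand-built event (no AWS account needed):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "example-bucket"},
                "object": {"key": "uploads/photo.jpg"}}}
    ]
}
result = lambda_handler(sample_event, None)
print(result["statusCode"])  # 200
```

Testing a handler locally like this, before deploying it, is a quick way to get comfortable with how Lambda passes events into your code.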

Aside from the concepts and services, you should study the AWS CLI, the commonly used APIs (for services such as EC2, EBS, or Lambda), and the AWS SDKs. Read up on the AWS Serverless Application Model (AWS SAM) and AWS Server Migration Service as well, since these may come up in the exam. It will also be very helpful to have experience interacting with AWS APIs and SDKs, and troubleshooting any errors that you encounter while using them.
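Much of working with the CLI and SDKs is navigating the JSON response shapes they return. As practice, the snippet below parses a trimmed-down, hand-written sample of the structure that `aws ec2 describe-instances` outputs (the instance IDs are made up; the real command requires credentials and a live account):

```python
import json

# Hand-written sample mirroring the documented shape of
# `aws ec2 describe-instances` output; the IDs are illustrative only.
sample_output = """
{
  "Reservations": [
    {"Instances": [
      {"InstanceId": "i-0abc123def456", "State": {"Name": "running"}},
      {"InstanceId": "i-0fed654cba321", "State": {"Name": "stopped"}}
    ]}
  ]
}
"""

data = json.loads(sample_output)

# Flatten Reservations -> Instances and keep only running instances.
running = [
    inst["InstanceId"]
    for res in data["Reservations"]
    for inst in res["Instances"]
    if inst["State"]["Name"] == "running"
]
print(running)  # ['i-0abc123def456']
```

The same nesting (Reservations containing Instances) trips up many first-time CLI users, so it is worth internalizing before the exam.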

 

DVA-C02 Common Exam Scenarios

Each entry below pairs an exam scenario with the recommended solution.

DVA-C02 Domain 1: Development with AWS Services

An application running on a local server is converted to a Lambda function. When the function was tested, an “Unable to import module” error appeared.

Install the missing modules in your application’s folder and package the folder into a ZIP file. Upload the ZIP file to AWS Lambda.

A Lambda function needs temporary storage to store files while executing.

Store the files in the /tmp directory

A Lambda function needs to read from an RDS database inside a private subnet

Configure the function to connect to the VPC where the database is hosted

A Developer has an application that uses a RESTful API hosted in API Gateway. The API requests are failing with a "No 'Access-Control-Allow-Origin' header is present on the requested resource" error message.

Enable CORS in the API Gateway Console.

A developer needs to expose a Lambda function publicly via HTTP

Create a function URL for the Lambda function and use the NONE auth type

A website integrated with API Gateway requires user requests to reach a Lambda function backend without intervention from API Gateway. Which integration type should be used?

AWS_PROXY

Which Amazon ElastiCache data store supports set sorting and ranking of cached datasets?

Redis

DVA-C02 Domain 2: Security

A developer wants to redact Personal Identifiable Information (PII) from files retrieved from S3 on the fly

Use S3 Object Lambda

A company has two AWS accounts. IAM users from Account A need to access resources in Account B.

1. In Account B, create an IAM role with a trust policy for the Developers in Account A.

2. Update the IAM role’s permission to access the resources in Account B

3. In Account A, update the permission of the IAM users to assume the IAM role in Account B

You need to authenticate website users using their social media identity profiles.

Use Amazon Cognito Identity Pools

How do you deny non-HTTPS requests to an S3 bucket?

In the bucket policy, create a statement that denies a request when the aws:SecureTransport condition is set to false.

A developer wants to retrieve RDS database credentials from central storage. The storage service must support automatic rotation.

Store the database credentials in AWS Secrets Manager

A developer wants to grant users from other accounts permission to invoke a Lambda function.

Update the Lambda function’s resource-based policy

DVA-C02 Domain 3: Deployment

A developer needs a reliable framework for building serverless applications in AWS

AWS SAM

What section must be added to a CloudFormation template to include resources defined by AWS SAM?

Transform

A CloudFormation stack creation process failed unexpectedly

CloudFormation will roll back by deleting resources that it has already created.

A CloudFormation template will be used to create resources across multiple AWS accounts

Use CloudFormation StackSets

A developer is deploying a new feature for an application. 10% of the traffic must be shifted to the new version in the first increment, and the remaining 90% after a few minutes.

Canary

In AWS Amplify Hosting, in which file do you add commands for unit tests?

amplify.yml

A developer wants to release a new AWS Elastic Beanstalk application version to 2 EC2 instances at a time while keeping full capacity.

Use Rolling with additional batch

DVA-C02 Domain 4: Troubleshooting and Optimization

A developer wants to use AWS X-Ray to trace requests made by an application in an EC2 instance. How can the developer enable AWS X-Ray on the instance?

Install the AWS X-Ray daemon on the EC2 instance

An application uses Amazon CloudFront to distribute a static website. The developer wants to redirect requests to specific URLs based on the user’s location.

  1. Create a CloudFront function that redirects requests based on the CloudFront-Viewer-Country header’s value.
  2. Associate the CloudFront function with the distribution’s Viewer Request event.

An application is running in a fleet of EC2 instances. How can the developer aggregate the application logs in a central location?

Install the CloudWatch agent in the EC2 instances and send the application logs to CloudWatch Logs

An application uses a DynamoDB table with a Global Secondary Index. DynamoDB requests are returning a ProvisionedThroughputExceededException error even though the table has sufficient capacity. Why is this happening?

The write capacity of the GSI is less than the base table.

A serverless application is composed of AWS Lambda, DynamoDB, and API Gateway. Users are complaining about getting HTTP 504 errors.

The API requests are reaching the maximum integration timeout for API Gateway (29 seconds)

Relevant API/CLI commands

A Developer needs to decode an encoded authorization failure message.

Use the aws sts decode-authorization-message command.

How can a Developer verify permission to call a CLI command without actually making a request?

Use the --dry-run parameter along with the CLI command.

A Developer needs to deploy a CloudFormation template from a local computer.

Use the aws cloudformation package and aws cloudformation deploy command

A Developer has to ensure that no applications can fetch a message from an SQS queue that’s being processed or has already been processed.

Increase the VisibilityTimeout value using the ChangeMessageVisibility API and delete the message using the DeleteMessage API.

A Developer has created an IAM Role for an application that uploads files to an S3 bucket. Which API call should the Developer use to allow the application to make upload requests?

Use the AssumeRole API
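One scenario above, denying non-HTTPS requests to an S3 bucket, is concrete enough to sketch in code. The snippet below builds that bucket policy with the aws:SecureTransport condition; the bucket name is a placeholder, and in practice you would attach the resulting JSON via the console, CLI, or SDK:

```python
import json

bucket = "example-bucket"  # placeholder bucket name

# Deny any S3 request that does not arrive over HTTPS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # The request is denied when it was NOT made over TLS.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note that the statement denies when aws:SecureTransport is false, exactly as the scenario’s solution describes.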

 

AWS Certified Developer Associate Video Course

This is a succinct and straight-to-the-point Developer Associate video training course that will equip you with the exam-specific knowledge that you need to understand in order to pass the AWS Certified Developer exam. It’s ideal for learners who don’t have much time to prepare for the exam. Click here to enroll. Here is a sneak peek of our video course introduction:

 

Validate Your DVA-C02 Knowledge

The AWS CDA exam will be packed with tricky questions. It would be great if you could get a feel of how the questions are structured through practice tests. Luckily, Tutorials Dojo offers a great set of practice questions for you to try out here. These practice tests will help validate your knowledge of what you’ve learned so far and fill in any missing details that you might have skipped in your review. You can also pair our practice exams with our AWS Certified Developer Associate Exam Study Guide eBook to further help in your exam preparations.

 

AWS Certified Developer Sample Practice Test Questions:

Question 1

A programmer is developing a Node.js application that will be run on a Linux server in their on-premises data center. The application will access various AWS services such as S3, DynamoDB, and ElastiCache using the AWS SDK.

Which of the following is the MOST suitable way to provide access for the developer to accomplish the specified task?

  1. Create an IAM role with the appropriate permissions to access the required AWS services. Assign the role to the on-premises Linux server.
  2. Go to the AWS Console and create a new IAM user with programmatic access. In the application server, create the credentials file at ~/.aws/credentials with the access keys of the IAM user.
  3. Create an IAM role with the appropriate permissions to access the required AWS services and assign the role to the on-premises Linux server. Whenever the application needs to access any AWS services, request temporary security credentials from STS using the AssumeRole API.
  4. Go to the AWS Console and create a new IAM User with the appropriate permissions. In the application server, create the credentials file at ~/.aws/credentials with the username and the hashed password of the IAM User.

Correct Answer: 2

If you have resources that are running inside AWS that need programmatic access to various AWS services, then the best practice is always to use IAM roles. However, applications running outside of an AWS environment will need access keys for programmatic access to AWS resources. For example, monitoring tools running on-premises and third-party automation tools will need access keys.

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).

In order to use the AWS SDK for your application, you have to create your credentials file first at ~/.aws/credentials for Linux servers or at C:\Users\USER_NAME\.aws\credentials for Windows users and then save your access keys.
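For reference, a credentials file with a single default profile typically looks like the fragment below. The key values shown are the well-known placeholder examples from the AWS documentation, not real credentials:

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```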

Hence, the correct answer is: Go to the AWS Console and create a new IAM user with programmatic access. In the application server, create the credentials file at ~/.aws/credentials with the access keys of the IAM user.

The option that says: Create an IAM role with the appropriate permissions to access the required AWS services and assign the role to the on-premises Linux server. Whenever the application needs to access any AWS services, request temporary security credentials from STS using the AssumeRole API is incorrect because the scenario says that the application is running in a Linux server on-premises and not on an EC2 instance. You cannot directly assign an IAM Role to a server on your on-premises data center. Although it may be possible to use a combination of STS and IAM Role, the use of access keys for AWS SDK is still preferred, especially if the application server is on-premises.

The option that says: Create an IAM role with the appropriate permissions to access the required AWS services. Assign the role to the on-premises Linux server is also incorrect because, just as mentioned above, the use of an IAM Role is not a suitable solution for this scenario.

The option that says: Go to the AWS Console and create a new IAM User with the appropriate permissions. In the application server, create the credentials file at ~/.aws/credentials with the username and the hashed password of the IAM User is incorrect. An IAM user’s username and password can only be used to interact with AWS via its Management Console. These credentials are intended for human use and are not suitable for use in automated systems, such as applications and scripts that make programmatic calls to AWS services.

References:
https://aws.amazon.com/developers/getting-started/nodejs/
https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys
https://aws.amazon.com/blogs/security/guidelines-for-protecting-your-aws-account-while-using-programmatic-access/

Check out this AWS IAM Cheat Sheet:
https://tutorialsdojo.com/aws-identity-and-access-management-iam/

Question 2

A developer is moving a legacy web application from their on-premises data center to AWS. The application is used simultaneously by thousands of users, and their session states are stored in memory. The on-premises server usually reaches 100% CPU Utilization every time there is a surge in the number of people accessing the application.

Which of the following is the best way to re-factor the performance and availability of the application’s session management once it is migrated to AWS?

  1. Use an ElastiCache for Redis cluster to store the user session state of the application.
  2. Store the user session state of the application using CloudFront.
  3. Use an ElastiCache for Memcached cluster to store the user session state of the application.
  4. Use Sticky Sessions with Local Session Caching.

Correct Answer: 1

Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Built on open-source Redis and compatible with the Redis APIs, ElastiCache for Redis works with your Redis clients and uses the open Redis data format to store your data. Your self-managed Redis applications can work seamlessly with ElastiCache for Redis without any code changes. ElastiCache for Redis combines the speed, simplicity, and versatility of open-source Redis with manageability, security, and scalability from Amazon to power the most demanding real-time applications in Gaming, Ad-Tech, E-Commerce, Healthcare, Financial Services, and IoT.

In order to address scalability and provide shared data storage for sessions that can be accessed from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached. While key/value data stores are known to be extremely fast and provide sub-millisecond latency, the added network latency and cost are the drawbacks. An added benefit of leveraging key/value stores is that they can also be utilized to cache any data, not just HTTP sessions, which can help boost the overall performance of your applications.

With Redis, you can keep your data on disk with a point-in-time snapshot, which can be used for archiving or recovery. Redis also lets you create multiple replicas of a Redis primary. This allows you to scale database reads and to have highly available clusters. Hence, the correct answer for this scenario is to use an ElastiCache for Redis cluster to store the user session state of the application.
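The session-store pattern above can be sketched without a live Redis cluster. The class below follows the same set-with-TTL / get interface a Redis client exposes, backed by an in-memory dict as a stand-in; the class and method names are illustrative, and in production you would replace the dict operations with calls to a real Redis client:

```python
import time

class SessionStore:
    """In-memory stand-in for a Redis-backed HTTP session store.

    Mimics the SET-with-expiry / GET pattern used for session
    management; swap the dict for a real Redis client in production.
    """

    def __init__(self):
        self._data = {}

    def set_session(self, session_id, state, ttl_seconds=1800):
        # Redis equivalent: SET session_id state EX ttl_seconds
        self._data[session_id] = (state, time.time() + ttl_seconds)

    def get_session(self, session_id):
        # Redis equivalent: GET session_id (Redis expires keys server-side)
        entry = self._data.get(session_id)
        if entry is None:
            return None
        state, expires_at = entry
        if time.time() >= expires_at:
            del self._data[session_id]  # lazily evict expired sessions
            return None
        return state

store = SessionStore()
store.set_session("sess-42", {"user": "alice", "cart": 3})
print(store.get_session("sess-42"))  # {'user': 'alice', 'cart': 3}
```

Because every web server talks to the same store, a user’s session survives even if the server that created it goes away, which is the availability benefit the explanation describes.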

The option that says: Store the user session state of the application using CloudFront is incorrect because CloudFront is not suitable for storing user session data. It is primarily used as a content delivery network.

The option that says: Use an ElastiCache for Memcached cluster to store the user session state of the application is incorrect. Although using ElastiCache is a viable answer, Memcached is not as highly available as Redis.

The option that says: Use Sticky Sessions with Local Session Caching is incorrect. Although this is also a viable solution, it doesn’t offer durability and high availability compared to a distributed session management solution. The best solution for this scenario is to use an ElastiCache for Redis cluster.

References:
https://aws.amazon.com/caching/session-management
https://aws.amazon.com/elasticache/redis-vs-memcached/
https://aws.amazon.com/elasticache/redis/

Check out this Amazon Elasticache Cheat Sheet:
https://tutorialsdojo.com/amazon-elasticache/

Click here for more AWS Certified Developer Associate practice exam questions.

Check out our other AWS practice test courses here:

AWS Certification

 

To increase your chances of passing the AWS Certified Developer Associate exam, we recommend using a combination of our video course, our practice tests, and our study guide eBook. You can view our triple bundles here.

 

The AWS Certified Developer certification is one of the most sought-after certifications in the DevOps industry. It validates your knowledge of the AWS Cloud and foundational DevOps practices. It is an achievement of its own if you become AWS certified. Hence, it will be best if you get proper sleep the day before your exam. Review any notes that you have written down, and go over the incorrect items in your practice tests if you took them. You should also double-check the venue, the time, and the things needed for your exam. With that, we wish you the best of luck and the best of outcomes!


The post AWS Certified Developer Associate Exam Guide Study Path DVA-C02 appeared first on Tutorials Dojo.

]]>
https://tutorialsdojo.com/aws-certified-developer-associate-exam-guide-study-path-dva-c02/feed/ 0 4518
AWS Certified SysOps Administrator Associate Exam Guide Study Path SOA-C02 https://tutorialsdojo.com/aws-certified-sysops-administrator-associate-exam-guide-study-path-soa-c02/ https://tutorialsdojo.com/aws-certified-sysops-administrator-associate-exam-guide-study-path-soa-c02/#respond Fri, 23 Aug 2019 02:56:39 +0000 https://tutorialsdojo.com/?p=4513 Bookmarks SOA-C02 Study Materials AWS Services to Focus On Additional Services To Review Exam Labs Common Exam Scenarios Validate Your Knowledge If you are a Systems Administrator or a DevOps Engineer, then this certification will test your knowledge on various technical concepts in AWS relating [...]

The post AWS Certified SysOps Administrator Associate Exam Guide Study Path SOA-C02 appeared first on Tutorials Dojo.

]]>

If you are a Systems Administrator or a DevOps Engineer, then this certification will test your knowledge of various technical concepts in AWS relating to Continuous Integration/Continuous Deployment (CI/CD), Automation, Monitoring, and many more. Your experience in these fields will come in handy in passing the exam, but it should be complemented by actual and relevant AWS knowledge. The AWS Certified SysOps Administrator Associate SOA-C02 exam (or AWS SysOps for short) contains a combination of multiple-choice/multiple-response questions and a series of Exam Labs that will test your ability to perform the following:

  • Deploy, manage, and operate scalable, highly available, and fault-tolerant systems on AWS  
  • Implement and control the flow of data to and from AWS  
  • Select the appropriate AWS service based on compute, data, or security requirements  
  • Identify appropriate use of AWS operational best practices  
  • Estimate AWS usage costs and identify operational cost control mechanisms  
  • Migrate on-premises workloads to AWS 

Given the scope of the questions and Exam Labs, you should learn the concepts of the AWS architecture, the AWS Operational Framework, as well as the AWS CLI and AWS SDK/API tools. Having prior knowledge of fundamental networking and security will also be very valuable. This article aims to provide you a straightforward guide on how to properly prepare for your upcoming AWS exam.

NOTE: As of March 28, 2023, the AWS Certified SysOps Administrator – Associate exam will not include exam labs until further notice. This removal of exam labs is temporary while AWS evaluates the exam labs and makes improvements to provide an optimal candidate experience. With this change, the exam will consist of 65 multiple-choice and multiple-response questions, with an exam time of 130 minutes.

All of the relevant information for your upcoming SOA-C02 exam can be found on the Official Exam Guide for the AWS Certified SysOps Administrator Associate exam. The exam guide should be your reliable source of relevant information for your upcoming SOA-C02 certification test.

AWS Certified SysOps Administrator – Associate SOA-C02 official exam guide

 

AWS Certified SysOps Administrator SOA-C02 Exam Domains

The official AWS Certified SysOps Administrator Associate SOA-C02 Exam Guide provides a list of exam domains, relevant topics, and services that you should focus on. There are 6 exam domains for the SOA-C02 certification test with corresponding exam coverage percentages as shown below:

  • Domain 1: Monitoring, Logging, and Remediation – 20%

  • Domain 2: Reliability and Business Continuity – 16%

  • Domain 3: Deployment, Provisioning, and Automation – 18%

  • Domain 4: Security and Compliance – 16%

  • Domain 5: Networking and Content Delivery – 18%

  • Domain 6: Cost and Performance Optimization – 12%

As shown above, the first domain, “Monitoring, Logging, and Remediation,” has the biggest exam coverage at 20%, so pay particular attention to the topics under this section. Domains 3 and 5 are weighted equally at 18% each, as are Domains 2 and 4 at 16% each. Domain 6, Cost and Performance Optimization, has the smallest coverage at 12%.

AWS Certified SysOps Administrator SOA-C02 Exam Topics

The official SOA-C02 Exam Guide doesn’t just share the list of exam domains and a detailed description of each test domain; it also comes with a list of relevant tools, technologies, and concepts that will be covered in the SOA-C02 exam. Here is a non-exhaustive list of relevant AWS services and features that may appear on the SOA-C02 exam, based on the information in the official exam guide. Remember that this list could change at any time, but it is still helpful in determining which AWS services you should focus on the most.

Analytics:

  • Amazon Elasticsearch Service (Amazon ES)

Application Integration:

AWS Cost Management:

Compute:

Database:

Management, Monitoring, and Governance:

Migration and Transfer:

Networking and Content Delivery:

Security, Identity, and Compliance:

Storage:

AWS Certified SysOps Administrator SOA-C02 Study Materials

The official AWS sample questions, whitepapers, AWS Documentation, AWS cheat sheets, the SOA-C02 video course, and AWS practice exams will be your primary study materials for this exam. There are also several whitepapers that you should read and familiarize yourself with.

By having an AWS account, you can do some hands-on labs that will help you understand the different cloud concepts better. Since the exam itself contains multiple scenario-based questions, using the services and applying them in practice will allow you to determine the types of situations in which they are applied.

Exam Readiness AWS Certified SysOps Administrator Associate

Additional details regarding your AWS SOA exam can be seen in this AWS exam blueprint.

The whitepapers listed below are arranged in such a way that you will learn the concepts first, before proceeding to application and best practices. If you need a refresher on your AWS fundamentals, check out our guide on the AWS Certified Cloud Practitioner Exam before proceeding below.

  1. Amazon Virtual Private Cloud Connectivity Options – Study how you can connect different VPCs together, your VPCs to your on-premises network, and vice versa.
  2. Development and Test on AWS – Study how you can leverage AWS to create development and test environments, implement pipelines and automation, and perform different validation tests for your applications.
  3. Backup and Recovery Approaches on AWS – Learn which AWS services offer backup and restore features. It is also important to know how these backups are stored and secured, and select the correct storage options for them.
  4. How AWS Pricing Works – Study the fundamental drivers of cost in AWS, the pricing models of commonly used services in compute, storage, and database, and how to optimize your costs. 
  5. AWS Cloud Security – You should study the different security features in AWS – including infrastructure, account, network, application, and data security. Determine which aspects of security are your responsibilities, and which are AWS’.
  6. AWS Security Best Practices – This whitepaper complements the previous one. Understand the security best practices and their purpose in your environment. Some services offer more than one form of security feature, such as multiple key management schemes for encryption. It is important that you can determine which form is most suitable to the given scenarios in your exam.
  7. Architecting for the Cloud: AWS Best Practices – Be sure to understand the best practices in AWS since exam questions will focus their scenarios around these best practices. The whitepaper contains a number of design principles with examples for each. 
  8. AWS Well-Architected Framework – This whitepaper is one of the most important papers that you should study for the SOA-C02 exam. It discusses the different pillars that make up a well-architected cloud environment. 

Optional whitepapers:

  1. Overview of Deployment Options on AWS – This is an optional whitepaper that you can read to be aware of your deployment options in AWS. There is a chance that this might come up in the exam.
  2. AWS Disaster Recovery Plans – As a SysOps Administrator, you should be familiar with your DR options when outages occur. Having knowledge of DR will determine how fast you can recover your infrastructure.

AWS Services to Focus On for the SOA-C02 Exam

AWS offers extensive documentation and well-written FAQs for all of its services. These two will be your primary source of information when studying. Furthermore, as an AWS SysOps Administrator, you need to be well-versed in a number of AWS products and services since you will almost always be using them in your work. I recommend checking out Tutorials Dojo’s AWS Cheat Sheets which provide a summarized but highly informative set of notes and tips for your review of these services.

Core services to study:

  1. EC2 – As the most fundamental compute service offered by AWS, you should know about EC2 inside out.
  2. Elastic Load Balancer – Load balancing is very important for a highly available system. Study the different types of ELBs, and the features each of them supports.
  3. Auto Scaling – Study what services in AWS can be auto-scaled, what triggers scaling, and how auto scaling increases/decreases the number of instances.
  4. Elastic Block Store – As the primary storage solution of EC2, study the types of EBS volumes available. Also study how to secure, backup, and restore EBS volumes.
  5. S3 / Glacier – Study the S3 storage classes and how they differ from one another. Also review the capabilities of S3 such as hosting a static website, securing access to objects using policies, lifecycle policies, etc. Learn as much about S3 as you can.
  6. VPC – Study every service that is used to create a VPC (subnets, route tables, internet gateways, nat gateways, VPN gateways, etc). Also, review the differences between network access control lists and security groups, and during which situations they are applied.
  7. Route 53 – Study the different types of records in Route 53. Also, study the different routing policies. Know what hosted zones and domains are.
  8. RDS – Know how each RDS database differs from one another, and how they are different from Aurora. Determine what makes Aurora unique, and when it should be preferred to other databases (in terms of function, speed, cost, etc). Learn about parameter groups, option groups, and subnet groups.
  9. DynamoDB – Consider how DynamoDB compares to RDS, Elasticache, and Redshift. This service is also commonly used for serverless applications along with Lambda.
  10. Elasticache – Familiarize yourself with ElastiCache for Redis and its functions. Determine the areas/services where you can place a caching mechanism to improve data throughput, such as managing the session state behind an ELB, optimizing RDS instances, etc.
  11. SQS – Gather info on why SQS is helpful in decoupling systems. Study how messages in the queues are being managed (standard queues, FIFO queues, dead letter queues). Know the differences between SQS, SNS, SES, and Amazon MQ.
  12. SNS – Study the function of SNS and what services can be integrated with it. Also, be familiar with the supported recipients of SNS notifications.
  13. IAM – Services such as IAM Users, Groups, Policies, and Roles are the most important to learn. Study how IAM integrates with other services and how it secures your application through different policies. Also, read on the best practices when using IAM.
  14. CloudWatch – Study how monitoring is done in AWS and what types of metrics are sent to CloudWatch. Also read upon CloudWatch Logs, CloudWatch Alarms, and the custom metrics made available with CloudWatch Agent.
  15. CloudTrail – Familiarize yourself with how CloudTrail works, and what kinds of logs it stores as compared to CloudWatch Logs.
  16. Config – Be familiar with the situations where AWS Config is useful.
  17. CloudFormation – Study how CloudFormation is used to automate infrastructure deployment. Learn the basic makeup of a CloudFormation template, stack, and stack set.
  18. KMS – Familiarize how KMS integrates with other services in storing encryption keys.
  19. Secrets Manager –  Understand how Secrets Manager stores secrets and how you can use them with other AWS services.
  20. Parameter Store – Know when to use Parameter store and how compute services like EC2, ECS, and Lambda utilize it. 
  21. DataSync – Familiarize which AWS services can be used to migrate data from an on-premises data center.
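Hands-on, many of these services are driven by small JSON documents rather than console clicks alone. As an illustration of the CloudWatch item above, here is a minimal sketch of a CloudWatch Agent configuration fragment that enables the procstat plugin for per-process CPU monitoring; the process pattern "nginx" is an assumption chosen for the example:

```python
import json

# CloudWatch Agent configuration fragment enabling the procstat plugin
# to report CPU usage for processes whose command matches "nginx".
agent_config = {
    "metrics": {
        "metrics_collected": {
            "procstat": [
                {
                    "pattern": "nginx",          # assumed process to watch
                    "measurement": ["cpu_usage"],
                }
            ]
        }
    }
}

print(json.dumps(agent_config, indent=2))
```

In practice this fragment would be merged into the agent's configuration file (e.g., under /opt/aws/amazon-cloudwatch-agent/etc/) before restarting the agent.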

Some Additional Services We Recommend to Review for SOA-C02: 

  1. Trusted Advisor
  2. Systems Manager
  3. CloudFront
  4. Cost and Billing Management Console
  5. OpsWorks
  6. Direct Connect

For the exam version (SOA-C02), you should also know the following services:

  1. Amazon FSx
  2. AWS Backup
  3. EC2 Image Builder
  4. S3 Transfer Acceleration
  5. AWS Global Accelerator
  6. RDS Proxy
  7. IAM Access Analyzer

AWS Certified SysOps Administrator SOA-C02 Exam Labs

Note: AWS has temporarily removed the exam labs section until further notice.

The SOA-C02 includes an exam labs section where you have to perform SysOps-related tasks in the AWS Management Console. To prepare for this, make sure to play around with the different AWS services covered in the exam. You don’t need to memorize all the configurations for each service, but you have to be comfortable navigating the AWS Management Console so you know where to configure the requirements in each exam lab. Focus on preparing for exam labs covering VPC, CloudWatch, Load Balancer, Auto Scaling, CloudFormation, and S3.

View our sample exam lab here.

Here is a sample exam lab video walkthrough:

Common Exam Scenarios for the AWS Certified SysOps Administrator SOA-C02 Exam

Each scenario below is paired with the recommended solution:

SOA-C02 Domain 1: Monitoring, Logging, and Remediation

  • You need to set up an alert that notifies the IT manager about EC2 instance service limits. – Use Amazon EventBridge to detect and react to changes in the status of Trusted Advisor checks.
  • You need to track the deletion and rotation of CMKs. – Use AWS CloudTrail to log AWS KMS API calls.
  • You need to investigate whether traffic is reaching an EC2 instance. – Use VPC Flow Logs.
  • You need to ensure that the SSH protocol is always disabled on private servers. – Use AWS Config rules.
  • You need to retrieve the instance metadata of an EC2 instance. – Query http://169.254.169.254/latest/meta-data/.
  • You have to monitor the CPU usage of a single process in your EC2 instance. – Use the CloudWatch Agent procstat plugin to monitor system utilization.
  • You need to generate a report on the replication and encryption status of all objects stored in an S3 bucket. – Use S3 Inventory.
  • You need a metric to alarm on when all instances behind an ALB become unhealthy. – Alarm on AWS/ApplicationELB HealthyHostCount <= 0.
  • You need to monitor restricted CIDR changes on a security group and remove them automatically. – Use AWS Config to evaluate the security group and an AWS Systems Manager Automation document to remove the unwanted CIDR range.
  • You need to monitor CreateUser API calls via email. – Use Amazon EventBridge, declare CloudTrail as a source and CreateUser as an event pattern, then create an SNS topic and set it as the event target.
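The CreateUser monitoring scenario above hinges on an EventBridge event pattern that matches the CloudTrail record for the IAM API call. A minimal sketch of such a pattern (note that, since IAM is a global service, its CloudTrail events are delivered in us-east-1):

```python
import json

# EventBridge event pattern matching the CloudTrail record emitted
# when an IAM CreateUser API call is made. An SNS topic would then
# be configured as the rule's target to send the email notification.
event_pattern = {
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["CreateUser"],
    },
}

print(json.dumps(event_pattern, indent=2))
```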

SOA-C02 Domain 2: Reliability and Business Continuity

  • When incoming message traffic increases, the EC2 instances fall behind and take too long to process the messages. – Create an Auto Scaling group that can scale out based on the number of messages in the queue.
  • You need to log the client’s IP address, latencies, request paths, and server responses that go through your Application Load Balancer. – Enable access logging on the ALB and store the logs in an S3 bucket.
  • You need to determine which cipher is used for the SSL connection in your ELB. – Enable Server Order Preference.
  • You need to monitor the total number of requests or connections queued by your load balancer. – Monitor the SurgeQueueLength metric.
  • You need to ensure that backups of an Amazon Redshift cluster are always available. – Configure the cluster to automatically copy snapshots to another region.
  • Highly available file server that supports SMB and manages file permissions using Windows access control lists (ACLs). – Use Multi-AZ Amazon FSx for Windows File Server.
  • Slow load time when uploading objects to S3. – Use S3 Transfer Acceleration.
  • The PercentIOLimit metric hits 100% on EFS. – Create a new Max I/O performance mode EFS file system and migrate the data to it using AWS DataSync.
  • Must ensure data integrity when performing EBS backups. – Build a Lambda function that uses the CreateImage API to generate an AMI of the EC2 instance with the reboot parameter, and create an Amazon EventBridge rule to execute the function daily.
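The queue-based scaling scenario in this domain is typically implemented with a "backlog per instance" calculation, as described in the Auto Scaling documentation: divide the queue length by what one instance can absorb within your latency budget. A rough sketch of the arithmetic; the processing rate and latency figures are made-up assumptions for illustration:

```python
import math

def desired_capacity(queue_length: int,
                     msgs_per_instance_per_sec: float,
                     acceptable_latency_sec: float) -> int:
    """Return the instance count needed to drain an SQS backlog
    within the acceptable latency."""
    # Backlog each instance can absorb within the latency budget.
    target_backlog_per_instance = msgs_per_instance_per_sec * acceptable_latency_sec
    return math.ceil(queue_length / target_backlog_per_instance)

# 3,000 queued messages, 10 msg/s per instance, 60 s latency budget:
print(desired_capacity(3000, 10, 60))  # → 5
```

In a real setup you would publish the backlog-per-instance value as a custom CloudWatch metric and let a target tracking scaling policy converge on it, rather than computing capacity by hand.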

SOA-C02 Domain 3: Deployment, Provisioning, and Automation

  • You must remotely execute shell scripts and securely manage the configuration of EC2 instances. – Use Systems Manager Run Command.
  • You need to identify configuration changes in CloudFormation resources. – Use drift detection.
  • You require a CloudFormation template that can be reused for multiple environments; when the template is updated, all stacks referencing it should automatically use the updated configuration. – Use nested stacks.
  • You need to automate updating CloudFormation templates to map to the latest AMI IDs. – Use CloudFormation with Systems Manager Parameter Store.
  • The eviction count in Amazon ElastiCache for Memcached has exceeded its threshold. – Scale the cluster by increasing the number of nodes.
  • You need to provide each department with a new AWS account with governance guardrails and a defined baseline in place. – Set up AWS Control Tower.
  • An S3 bucket must be configured to move objects older than 60 days to the Infrequent Access storage class. – Set up a lifecycle policy.
  • You need to monitor all COPY and UNLOAD traffic in a Redshift cluster. – Enable Enhanced VPC Routing on the cluster.
  • A total of 500 TB of data needs to be transferred to Amazon S3 in the fastest way. – Use multiple AWS Snowball devices.
  • A TLS certificate should be renewed automatically. – Request a public certificate via AWS Certificate Manager (ACM).
  • Get the cost expenses of each AWS user account. – Enable the createdBy tag in the Billing and Management console.
  • Provisioning instances in an Auto Scaling group takes too long because of software dependencies installed via the UserData script. – Use EC2 Image Builder.
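The lifecycle-policy scenario in this domain maps to a small JSON rule that can be applied with the S3 PutBucketLifecycleConfiguration API. A minimal sketch; the rule ID is a placeholder:

```python
import json

# Lifecycle rule: transition objects older than 60 days
# to the S3 Standard-IA storage class.
lifecycle_config = {
    "Rules": [
        {
            "ID": "move-to-ia-after-60-days",  # placeholder rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},          # apply to the whole bucket
            "Transitions": [
                {"Days": 60, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}

print(json.dumps(lifecycle_config, indent=2))
```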

SOA-C02 Domain 4: Security and Compliance

  • You have to rotate an existing CMK with imported key material every 6 months. – Create a new CMK with imported key material and update the key alias to point to the new CMK.
  • A company needs to restrict access to the data in an S3 bucket. – Use S3 ACLs and a bucket policy.
  • Mitigate malicious attacks such as SQL injection and DDoS attacks from unknown origins. – Use AWS WAF and AWS Shield.
  • You need to define an IAM policy that enables a user to pass a role to an AWS service. – Include iam:PassRole in the IAM policy.
  • Multiple EC2 instances in a private subnet must use AWS KMS without traffic passing through the public Internet. – Configure a VPC endpoint.
  • You need to encrypt all objects at rest in an S3 bucket. – Use SSE-S3, SSE-KMS, or SSE-C.
  • Enable authentication to AWS services using Active Directory Federation Services. – Use an Amazon Cognito user pool.
  • Create a bucket policy that only allows AWS accounts in the organization to access an S3 bucket. – Set the principal to (*) and add a condition on aws:PrincipalOrgID.
  • Read, update, and delete messages from SQS queues from an instance. – Create a policy with sqs:SendMessage, sqs:ReceiveMessage, and sqs:DeleteMessage, attach it to a new role that can perform API calls to AWS, and associate the role with the instance.
  • RDS credentials should not be hardcoded in Lambda functions. – Use Secrets Manager to store the credentials.
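The organization-wide bucket access scenario above combines a wildcard principal with the aws:PrincipalOrgID condition key. A minimal sketch of such a bucket policy; the bucket name and organization ID are placeholders:

```python
import json

ORG_ID = "o-exampleorgid"  # placeholder AWS Organizations ID

# Bucket policy: any principal may read objects, but only if the
# request comes from an account inside the organization.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgAccountsOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",  # placeholder bucket
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```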

SOA-C02 Domain 5: Networking and Content Delivery

  • You need to allow the EC2 instances in your VPC that support IPv6 to connect to the Internet while blocking any incoming connection. – Set up an egress-only Internet gateway.
  • You have to establish a dedicated connection between an on-premises network and an Amazon VPC. – Set up a Direct Connect connection.
  • You need to increase the cache hit ratio of a CloudFront web distribution. – Add a Cache-Control max-age directive and increase the TTL by specifying the longest value for max-age.
  • You need to ensure that users are consistently directed to the AWS region nearest to them. – Set up a Route 53 geoproximity routing policy.
  • A company plans to implement a hybrid cloud architecture, and resources on AWS need connectivity to external networks. – Assign an Internet gateway to the VPC and create a virtual private gateway.
  • Users are being served the desktop version of a site on mobile phones. – Add the User-Agent header to the list of origin custom headers on CloudFront.
  • DNS record at the apex domain. – Use an ALIAS record.
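The first scenario in this domain (outbound-only IPv6 access) boils down to a single route-table entry that points all IPv6 traffic at an egress-only Internet gateway. A sketch of the equivalent EC2 CreateRoute parameters; the route table and gateway IDs are placeholders:

```python
# Parameters for an EC2 CreateRoute call that sends all outbound IPv6
# traffic through an egress-only Internet gateway. Inbound IPv6
# connections cannot be initiated through this gateway.
route_params = {
    "RouteTableId": "rtb-0123456789abcdef0",                   # placeholder
    "DestinationIpv6CidrBlock": "::/0",                        # all IPv6 traffic
    "EgressOnlyInternetGatewayId": "eigw-0123456789abcdef0",   # placeholder
}
```

With boto3, these parameters would be passed as keyword arguments to the EC2 client's create_route method.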

SOA-C02 Domain 6: Cost and Performance Optimization

  • You have to automate the process of patching managed instances with security-related updates. – Use AWS Systems Manager Patch Manager.
  • You need to analyze data hosted in Amazon S3 using standard SQL. – Use Amazon Athena.
  • Improving the site speed of a static S3-hosted website with customers around the globe. – Create a CloudFront web distribution and set Amazon S3 as the origin.
  • You need to enforce the tagging of all instances launched in the VPC. – Use the AWS Service Catalog TagOption library.
  • You need billing alerts once spending reaches a certain limit. – Enable billing alerts in the Account Preferences of the AWS Console.
  • Resize an Amazon ElastiCache for Redis cluster. – Use online resizing.
  • No sharing of Reserved Instance (RI) discounts between AWS accounts in the Organization. – Disable RI discount sharing via the management account and provision instances using individual AWS accounts.

 

Validate Your SOA-C02 Knowledge

Once you have finished your review and you are more than confident in your knowledge, test yourself with some practice exams available online. AWS offers a practice exam that you can try out at their AWS SkillBuilder portal. Tutorials Dojo also offers a top-notch set of AWS Certified SysOps Administrator Associate practice tests. Each test contains unique questions that will surely help verify if you have missed out on anything important that might appear on your exam. You can also pair our practice exams with our AWS Certified SysOps Administrator Associate Exam Study Guide eBook and video courses to further help in your exam preparations.


 

Sample Practice Questions For SOA-C02 Exam:

Question 1

A financial start-up has recently adopted a hybrid cloud infrastructure with AWS Cloud. They are planning to migrate their online payments system that supports an IPv6 address and uses an Oracle database in a RAC configuration. As the AWS Consultant, you have to make sure that the application can initiate outgoing traffic to the Internet but blocks any incoming connection from the Internet.

Which of the following options would you do to properly migrate the application to AWS?

  1. Migrate the Oracle database to an EC2 instance. Launch an EC2 instance to host the application and then set up a NAT Instance.
  2. Migrate the Oracle database to RDS. Launch an EC2 instance to host the application and then set up a NAT gateway instead of a NAT instance for better availability and higher bandwidth.
  3. Migrate the Oracle database to RDS. Launch the application on a separate EC2 instance and then set up a NAT Instance.
  4. Migrate the Oracle database to an EC2 instance. Launch the application on a separate EC2 instance and then set up an egress-only Internet gateway.

Correct Answer: 4

An egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.

An instance in your public subnet can connect to the Internet through the Internet gateway if it has a public IPv4 address or an IPv6 address. Similarly, resources on the Internet can initiate a connection to your instance using its public IPv4 address or its IPv6 address; for example, when you connect to your instance using your local computer.

IPv6 addresses are globally unique, and are therefore public by default. If you want your instance to be able to access the Internet but want to prevent resources on the Internet from initiating communication with your instance, you can use an egress-only Internet gateway. To do this, create an egress-only Internet gateway in your VPC, and then add a route to your route table that points all IPv6 traffic (::/0) or a specific range of IPv6 address to the egress-only Internet gateway. IPv6 traffic in the subnet that’s associated with the route table is routed to the egress-only Internet gateway.

Remember that a NAT device in your private subnet does not support IPv6 traffic. As an alternative, create an egress-only Internet gateway for your private subnet to enable outbound communication to the Internet over IPv6 and prevent inbound communication. An egress-only Internet gateway supports IPv6 traffic only.

Take note that the application that will be migrated is using an Oracle database on a RAC configuration, which is not supported by RDS.

Hence, the correct answer is: Migrate the Oracle database to an EC2 instance. Launch the application on a separate EC2 instance and then set up an egress-only Internet gateway.

The options that say: Migrate the Oracle database to an EC2 instance. Launch an EC2 instance to host the application and then set up a NAT instance and Migrate the Oracle database to RDS. Launch the application on a separate EC2 instance and then set up a NAT instance are incorrect because a NAT instance does not support IPv6 traffic. You have to use an egress-only Internet gateway instead. In addition, RDS does not support Oracle RAC, which is why you have to launch the database on an EC2 instance.

The option that says: Migrate the Oracle database to RDS. Launch an EC2 instance to host the application and then set up a NAT gateway instead of a NAT instance for better availability and higher bandwidth is incorrect because RDS does not support Oracle RAC. Although it is true that a NAT gateway provides better availability and higher bandwidth than a NAT instance, it still does not support IPv6 traffic, unlike an egress-only Internet gateway.

References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html
https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html

Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/

Question 2

A leading tech consultancy firm has an AWS Virtual Private Cloud (VPC) with one public subnet. They have recently deployed a new blockchain application to an EC2 instance. After a month, management has decided that the application should be modified to also support IPv6 addresses.

Which of the following should you do to satisfy the requirement?

Option 1:

  1. Associate an IPv6 Gateway with your VPC and Subnets

  2. Update the Route Tables and Security Group Rules

  3. Enable Enhanced Networking in your EC2 instance

  4. Assign IPv6 Addresses to the EC2 Instance

Option 2:

  1. Attach an Egress-Only Internet Gateway to the VPC and Subnets

  2. Update the Route Tables

  3. Update the Security Group Rules

  4. Assign IPv6 Addresses to the EC2 instance

Option 3:

  1. Associate an IPv6 CIDR Block with the VPC and Subnets

  2. Update the Route Tables

  3. Update the Security Group Rules

  4. Assign IPv6 Addresses to the EC2 Instance

Option 4:

  1. Enable Enhanced Networking in your EC2 instance

  2. Update the Route Tables

  3. Update the Security Group Rules

  4. Assign IPv6 Addresses to the EC2 Instance

Correct Answer: 3

If you have an existing VPC that supports IPv4 only, and resources in your subnet that are configured to use IPv4 only, you can enable IPv6 support for your VPC and resources. Your VPC can operate in dual-stack mode — your resources can communicate over IPv4, or IPv6, or both. IPv4 and IPv6 communication are independent of each other. You cannot disable IPv4 support for your VPC and subnets; this is the default IP addressing system for Amazon VPC and Amazon EC2.

The following provides an overview of the steps to enable your VPC and subnets to use IPv6:

Step 1:

Associate an IPv6 CIDR Block with Your VPC and Subnets – Associate an Amazon-provided IPv6 CIDR block with your VPC and with your subnets.

Step 2:

Update Your Route Tables – Update your route tables to route your IPv6 traffic. For a public subnet, create a route that routes all IPv6 traffic from the subnet to the Internet gateway. For a private subnet, create a route that routes all Internet-bound IPv6 traffic from the subnet to an egress-only Internet gateway.

Step 3:

Update Your Security Group Rules – Update your security group rules to include rules for IPv6 addresses. This enables IPv6 traffic to flow to and from your instances. If you’ve created custom network ACL rules to control the flow of traffic to and from your subnet, you must include rules for IPv6 traffic.

Step 4:

Assign IPv6 Addresses to Your Instances – Assign IPv6 addresses to your instances from the IPv6 address range of your subnet.
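Step 3 above can be illustrated with the shape of an IPv6-aware security group rule. A sketch of an IpPermissions entry (as used by the EC2 AuthorizeSecurityGroupIngress API) that admits HTTPS from any IPv6 source; the port and CIDR are chosen purely for illustration:

```python
# An IpPermissions entry admitting HTTPS traffic from any IPv6 source.
# Note that IPv6 sources go in Ipv6Ranges, a list separate from the
# IPv4 IpRanges list, which is why existing rules must be updated.
ip_permission = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "Ipv6Ranges": [{"CidrIpv6": "::/0", "Description": "HTTPS over IPv6"}],
}
```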

Hence, the correct answer is:

1. Associate an IPv6 CIDR Block with the VPC and Subnets

2. Update the Route Tables

3. Update the Security Group Rules

4. Assign IPv6 Addresses to the EC2 Instance

The options that include the step: Enable Enhanced Networking in your EC2 instance are incorrect because Enhanced Networking is not required to enable IPv6. It only improves network performance for EC2 instances through higher bandwidth and lower latency; it does not relate to the management of IPv6 addresses. Likewise, you don't associate an "IPv6 Gateway" with your VPC and subnets; what you need to associate is an IPv6 CIDR block.

The option with the step that says: Attach an Egress-Only Internet Gateway to the VPC and Subnets is incorrect because this type of gateway simply enables outbound-only access to the Internet over IPv6 from your VPC. The use of an egress-only Internet gateway is not warranted in this scenario, which involves a single public subnet.

References:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-migrate-ipv6.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html

Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/

Click here for more AWS Certified SysOps Administrator Associate practice exam questions.

Check out our other AWS practice test courses here:

 

Additional Training Materials: High-Quality SOA-C02 Video Courses

There are a few top-rated AWS Certified SysOps Administrator Associate video courses that you can check out as well, which can help in your exam preparations. The list below is constantly updated based on feedback from our students on which course/s helped them the most during their exams.

Based on consensus, any of these video courses plus our practice test course and our AWS Certified SysOps Administrator Associate Study Guide eBook were enough to pass this tough exam.

It is best to get some rest before the day of your exam and review any notes that you have written down. If you have done well in the practice tests, go over the questions where you made a mistake and understand why so. If you are not feeling so confident after trying the practice tests, you can just reschedule your exam and take your time preparing. The AWS SOA certification is one of the most sought-after certifications in the SysOps Administration field. The exam will not be easy to pass, but it’ll be worth it when you do.



The post AWS Certified SysOps Administrator Associate Exam Guide Study Path SOA-C02 appeared first on Tutorials Dojo.

]]>