Last updated on June 11, 2023

AWS Certified Solutions Architect Associate Practice Questions with Explanations Part 1

AWS Solutions Architect is consistently among the top-paying IT certifications, considering that Amazon Web Services is the leading cloud services platform in the world with almost 50% market share.

But before you become an AWS Certified Solutions Architect Professional, you have to pass the Associate exam first, and this is where AWS practice tests come in. It is possible to have read all of the available AWS documentation online and still fail the exam!

Some people use brain dumps for the AWS Certified Solutions Architect Associate exam, which is totally absurd and highly unprofessional. These dumps will not only hinder you from attaining in-depth AWS knowledge, they can also result in you failing the actual AWS exam since Amazon regularly updates the exam coverage. Hence, we highly recommend that you study hard, read the white papers, do some hands-on training, and use AWS practice tests instead.

Here are 5 AWS Certified Solutions Architect Associate practice exam questions that you can add to your review sessions (the correct answers and explanations are listed below).

1. An online health record system that provides centralized health records of all citizens has been migrated to AWS. The system is hosted in one large EBS-backed EC2 instance which hosts both its web server and database. Which of the following does not happen when you stop a running EBS-backed EC2 instance?

A. Any Amazon EBS volume remains attached to the instance, and their data persists.

B. In most cases, the instance is migrated to a new underlying host computer when it’s restarted.

C. Any data stored in the RAM of the underlying host computer or the instance store volumes of the host computer are gone.

D. If it is in the EC2-Classic platform, the instance retains its associated Elastic IP addresses.

2. You are a new Solutions Architect working for a financial company. Your manager wants to have the ability to automatically transfer obsolete data from their S3 bucket to a low cost storage system in AWS. What is the best solution you can provide to them?

A. Use an EC2 instance and a scheduled job to transfer the obsolete data from their S3 location to Amazon Glacier.

B. Use Lifecycle Policies in S3 to move obsolete data to Glacier.

C. Use AWS SQS.

D. Use AWS SWF.

3. You are automating the creation of EC2 instances in your VPC. Hence, you wrote a Python script to trigger the Amazon EC2 API to request 50 EC2 instances in a single Availability Zone. However, you noticed that after 20 successful requests, subsequent requests failed. What could be a reason for this issue and how would you resolve it?

A. There was an issue with the Amazon EC2 API. Just resend the requests and these will be provisioned successfully.

B. By default, AWS allows you to provision a maximum of 20 instances per region. Select a different region and retry the failed request.

C. By default, AWS allows you to provision a maximum of 20 instances per Availability Zone. Select a different Availability Zone and retry the failed request.

D. There is a soft limit of 20 instances per region which is why subsequent requests failed. Just submit the limit increase form to AWS and retry the failed requests once approved.

4. An application which is currently hosted on an On-Demand EC2 instance with an attached EBS volume is scheduled to be decommissioned. Which of the following does not happen to your data when the instance is terminated?

A. For EBS-backed instances, the root volume is persisted by default.

B. By default, EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated.

C. For EBS-backed instances, the root volume is deleted by default.

D. For Instance Store-Backed AMI, all the data is deleted.

5. You are trying to convince a team to use Amazon RDS Read Replica for your multi-tier web application. What are two benefits of using read replicas? (Choose 2)

A. It provides elasticity to your Amazon RDS database.

B. Allows both read and write operations on the read replica to complement the primary database.

C. Improves performance of the primary database by taking workload from it.

D. Automatic failover in the case of Availability Zone service failures.

E. It enhances the read performance of your primary database.

 

And here are the correct answers and explanations.

Question 1 

1. An online health record system that provides centralized health records of all citizens has been migrated to AWS. The system is hosted in one large EBS-backed EC2 instance which hosts both its web server and database. Which of the following does not happen when you stop a running EBS-backed EC2 instance?

A. Any Amazon EBS volume remains attached to the instance, and their data persists.

B. In most cases, the instance is migrated to a new underlying host computer when it’s restarted.

C. Any data stored in the RAM of the underlying host computer or the instance store volumes of the host computer are gone.

D. If it is in the EC2-Classic platform, the instance retains its associated Elastic IP addresses.

Correct Answer: D

All of these happen when you stop a running EBS-backed EC2 instance except for Option D. An instance retains its associated Elastic IP addresses only if it is on the EC2-VPC platform, not on EC2-Classic.

When you stop a running instance, the following happens:

  • The instance performs a normal shutdown and stops running; its status changes to stopping and then stopped.
  • Any Amazon EBS volume remains attached to the instance, and their data persists.
  • Any data stored in the RAM of the host computer or the instance store volumes of the host computer are gone.
  • In most cases, the instance is migrated to a new underlying host computer when it’s started.
  • EC2-Classic: AWS releases the public and private IPv4 addresses for the instance when you stop the instance, and assigns new ones when you restart it.
  • EC2-VPC: The instance retains its private IPv4 addresses and any IPv6 addresses when stopped and restarted. AWS releases the public IPv4 address and assigns a new one when you restart it.
  • EC2-Classic: AWS disassociates any Elastic IP address that’s associated with the instance. You’re charged for Elastic IP addresses that aren’t associated with an instance. When you restart the instance, you must associate the Elastic IP address with the instance; AWS doesn’t do this automatically.
  • EC2-VPC: The instance retains its associated Elastic IP addresses. You’re charged for any Elastic IP addresses associated with a stopped instance.
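For reference, here is a minimal boto3 (Python) sketch of this behavior; the region, instance ID, and the assumption that the instance runs in a VPC are placeholders for illustration only. It stops an EBS-backed instance and then confirms that its EBS volumes are still attached and that any Elastic IP address is still associated:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID

# Stop the EBS-backed instance and wait until it reaches the 'stopped' state.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# The EBS volumes remain attached and their data persists.
reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"]
instance = reservations[0]["Instances"][0]
for mapping in instance.get("BlockDeviceMappings", []):
    print(mapping["DeviceName"], mapping["Ebs"]["VolumeId"], mapping["Ebs"]["Status"])

# In EC2-VPC, any associated Elastic IP address is retained while stopped.
addresses = ec2.describe_addresses(
    Filters=[{"Name": "instance-id", "Values": [INSTANCE_ID]}]
)
for address in addresses["Addresses"]:
    print("Elastic IP still associated:", address["PublicIp"])
```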

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html 

Check out this Amazon EC2 Cheat Sheet:

 

Question 2 

You are a new Solutions Architect working for a financial company. Your manager wants to have the ability to automatically transfer obsolete data from their S3 bucket to a low cost storage system in AWS. What is the best solution you can provide to them?

A. Use an EC2 instance and a scheduled job to transfer the obsolete data from their S3 location to Amazon Glacier.

B. Use Lifecycle Policies in S3 to move obsolete data to Glacier.

C. Use AWS SQS.

D. Use AWS SWF.

Correct Answer: B

In this scenario, you can use lifecycle policies in S3 to automatically move obsolete data to Glacier.

Lifecycle configuration in Amazon S3 enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:

  • Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
  • Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
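As a concrete illustration, a transition rule like the one in this boto3 sketch could be attached to the bucket; the bucket name, prefix, and day counts are assumptions chosen only for the example:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-financial-archive"  # hypothetical bucket name

# Lifecycle rule: move objects under "reports/" to Glacier one year after
# creation, then expire them after roughly seven years.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-obsolete-data",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```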

Option A is incorrect because you don’t need to create a scheduled job in EC2; you can simply use a lifecycle policy in S3.

Options C and D are incorrect because SQS and SWF are not storage services.

References:

http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

https://aws.amazon.com/blogs/aws/archive-s3-to-glacier/

Check out this Amazon S3 Cheat Sheet:

 

Question 3 

You are automating the creation of EC2 instances in your VPC. Hence, you wrote a Python script to trigger the Amazon EC2 API to request 50 EC2 instances in a single Availability Zone. However, you noticed that after 20 successful requests, subsequent requests failed. What could be a reason for this issue and how would you resolve it?

A. There was an issue with the Amazon EC2 API. Just resend the requests and these will be provisioned successfully.

B. By default, AWS allows you to provision a maximum of 20 instances per region. Select a different region and retry the failed request.

C. By default, AWS allows you to provision a maximum of 20 instances per Availability Zone. Select a different Availability Zone and retry the failed request.

D. There is a soft limit of 20 instances per region which is why subsequent requests failed. Just submit the limit increase form to AWS and retry the failed requests once approved.

Correct Answer: D

By default, you are limited to running up to a total of 20 On-Demand instances across the instance family, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region. If you wish to run more than 20 instances, complete the Amazon EC2 instance request form.
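To make the failure mode concrete, the boto3 sketch below shows the kind of error the script would typically receive once the soft limit is reached; the AMI ID, instance type, and region are placeholders:

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

try:
    # Request 50 instances in one call; this raises InstanceLimitExceeded
    # if the account's instance limit is lower than the request.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=50,
        MaxCount=50,
    )
except ClientError as error:
    if error.response["Error"]["Code"] == "InstanceLimitExceeded":
        print("Soft limit reached - submit a limit increase request to AWS.")
    else:
        raise
```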

References:


https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ec2

https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2

 

Question 4 

An application which is currently hosted on an On-Demand EC2 instance with an attached EBS volume is scheduled to be decommissioned. Which of the following does not happen to your data when the instance is terminated?

A. For EBS-backed instances, the root volume is persisted by default.

B. By default, EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated.

C. For EBS-backed instances, the root volume is deleted by default.

D. For Instance Store-Backed AMI, all the data is deleted.

Correct Answer: A

An EBS volume is off-instance storage that can persist independently from the life of an instance. You continue to pay for the volume usage as long as the data persists.

[Figure: Amazon EBS diagram]

By default, EBS volumes that are attached to a running instance automatically detach from the instance with their data intact when that instance is terminated. The volume can then be reattached to a new instance, enabling quick recovery. If you are using an EBS-backed instance, you can stop and restart that instance without affecting the data stored in the attached volume. The volume remains attached throughout the stop-start cycle. This enables you to process and store the data on your volume indefinitely, using the processing and storage resources only when required. The data persists on the volume until the volume is deleted explicitly.

The physical block storage used by deleted EBS volumes is overwritten with zeroes before it is allocated to another account. If you are dealing with sensitive data, you should consider encrypting your data manually or storing the data on a volume protected by Amazon EBS encryption.

By default, EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated. You can modify this behavior by changing the value of the flag DeleteOnTermination to false when you launch the instance. This modified value causes the volume to persist even after the instance is terminated, and enables you to attach the volume to another instance.
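For illustration, here is a minimal boto3 launch call that keeps the root volume after termination by setting DeleteOnTermination to false; the AMI ID, root device name, and instance type are assumptions that depend on the AMI you actually use:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an instance whose root EBS volume persists after termination.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",  # root device name of the chosen AMI
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```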

Option A is the statement that does not happen, which makes it the correct answer: when the instance is terminated, the root volume of an EBS-backed instance is deleted by default unless the DeleteOnTermination flag is set to false.

Reference: 

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

Check out this Amazon EBS Cheat Sheet:

 

Question 5 

You are trying to convince a team to use Amazon RDS Read Replica for your multi-tier web application. What are two benefits of using read replicas? (Choose 2)

A. It provides elasticity to your Amazon RDS database.

B. Allows both read and write operations on the read replica to complement the primary database.

C. Improves performance of the primary database by taking workload from it.

D. Automatic failover in the case of Availability Zone service failures.

E. It enhances the read performance of your primary database.

Correct Answers: A and C

Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, and PostgreSQL as well as Amazon Aurora.
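For context, creating a read replica is a single API call. The boto3 sketch below shows how read-heavy traffic could be offloaded from the primary; the DB identifiers and instance class are placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of an existing source DB instance so that
# read-heavy queries can be served away from the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-db-replica-1",      # hypothetical replica name
    SourceDBInstanceIdentifier="webapp-db-primary",  # hypothetical source DB
    DBInstanceClass="db.t3.medium",
)
```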

Option B is incorrect because a Read Replica only supports read operations.

Option D is incorrect because automatic failover is a benefit of Multi-AZ deployments, not of Read Replicas.

Option E is incorrect because a Read Replica offloads read traffic from the primary database rather than enhancing the primary’s own read performance.

Reference:

https://aws.amazon.com/rds/details/read-replicas/

Check out this Amazon RDS Cheat Sheet:

https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/

 

If you think you need further practice, check out this comprehensive practice test course with 390 questions in 6 sets, which simulates the latest exam format and provides top-notch explanations for each question.

Good luck in your exam preparations!
