Last updated on July 31, 2024
Here are 10 AWS Certified Solutions Architect Professional SAP-C02 practice exam questions to help you gauge your readiness for the actual exam.
Question 1
A data analytics startup has been chosen to develop a data analytics system that will track all statistics in the Fédération Internationale de Football Association (FIFA) World Cup and will also be used by other third-party analytics sites. The system will record, store, and provide statistical data reports about the top scorers, goals scored by each team, average goals, average passes, average yellow/red cards per match, and many other details. FIFA fans all over the world will frequently access the statistics reports every day, so the data should be durably stored, highly available, and highly scalable. In addition, the data analytics system will allow the users to vote for the best male and female FIFA player as well as the best male and female coach. Due to the popularity of the FIFA World Cup event, it is projected that there will be over 10 million queries on game day, which could spike to 30 million queries over time.
Which of the following is the most cost-effective solution that will meet these requirements?
Option 1
- Launch a MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
- Generate the FIFA reports by querying the Read Replica.
- Configure a daily job that performs a daily table cleanup.
Option 2
- Launch a MySQL database in Multi-AZ RDS deployments configuration.
- Configure the application to generate reports from ElastiCache to improve the read performance of the system.
- Utilize the default expire parameter for items in ElastiCache.
Option 3
- Generate the FIFA reports from MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
- Set up a batch job that puts reports in an S3 bucket.
- Launch a CloudFront distribution to cache the content with a TTL set to expire objects daily.
Option 4
- Launch a Multi-AZ MySQL RDS instance.
- Query the RDS instance and store the results in a DynamoDB table.
- Generate reports from the DynamoDB table.
- Delete the old DynamoDB tables every day.
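To make the report-caching idea in the options above more concrete, here is a minimal boto3 sketch (the bucket name, object key, and report variable are hypothetical) of a batch-job step that publishes a pre-generated report to S3 with a `Cache-Control` header that a CloudFront distribution can honor:

```python
import json
import boto3

s3 = boto3.client("s3")

def publish_report(report: dict, bucket: str, key: str) -> None:
    """Upload a pre-generated statistics report so CloudFront can cache it at the edge."""
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(report).encode("utf-8"),
        ContentType="application/json",
        # Cache for one day; CloudFront re-fetches from the origin after expiry.
        CacheControl="max-age=86400",
    )

# Example (hypothetical names): a nightly batch job could call
# publish_report(top_scorers, "fifa-stats-reports", "reports/top-scorers.json")
```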
Question 2
A company provides big data services to enterprise clients around the globe. One of the clients has 60 TB of raw data in its on-premises Oracle data warehouse. The data is to be migrated to Amazon Redshift. However, the database receives minor updates on a daily basis, while major updates are scheduled at the end of every month. The migration process must be completed within approximately 30 days, before the next major update on the Redshift database. The company can only allocate 50 Mbps of Internet connection for this activity to avoid impacting business operations.
Which of the following actions will satisfy the migration requirements of the company while keeping the costs low?
- Create a new Oracle Database on Amazon RDS. Configure Site-to-Site VPN connection from the on-premises data center to the Amazon VPC. Configure replication from the on-premises database to Amazon RDS. Once replication is complete, create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over.
- Create an AWS Snowball Edge job using the AWS Snowball console. Export all data from the Oracle data warehouse to the Snowball Edge device. Once the Snowball device is returned to Amazon and data is imported to an S3 bucket, create an Oracle RDS instance to import the data. Create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Copy the missing daily updates from Oracle in the data center to the RDS for Oracle database over the Internet. Monitor and verify if the data migration is complete before the cut-over.
- Since you have a 30-day window for migration, configure VPN connectivity between AWS and the company’s data center by provisioning a 1 Gbps AWS Direct Connect connection. Launch an Oracle Real Application Clusters (RAC) database on an EC2 instance and set it up to fetch and synchronize the data from the on-premises Oracle database. Once replication is complete, create an AWS DMS task on an AWS SCT project to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over.
- Create an AWS Snowball import job to request a Snowball Edge device. Use the AWS Schema Conversion Tool (SCT) to process the on-premises data warehouse and load it to the Snowball Edge device. Install the extraction agent on a separate on-premises server and register it with AWS SCT. Once the Snowball Edge data has been imported to the S3 bucket, use AWS SCT to migrate the data to Amazon Redshift. Configure a local task and an AWS DMS task to replicate the ongoing updates to the data warehouse. Monitor and verify that the data migration is complete.
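To illustrate the ongoing-replication step referenced in the options above, below is a minimal boto3 sketch of creating an AWS DMS change data capture (CDC) task; the endpoint and replication instance ARNs are placeholders:

```python
import json
import boto3

dms = boto3.client("dms")

# Replicate only the ongoing changes (CDC) from the on-premises Oracle endpoint
# to the Amazon Redshift endpoint after the bulk load has already landed.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-redshift-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:oracle-src",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:redshift-tgt",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:replication-instance",
    MigrationType="cdc",  # change data capture only; the full load was done separately
    TableMappings=json.dumps(table_mappings),
)
```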
Question 3
A fintech startup has developed a cloud-based payment processing system that accepts credit card payments as well as cryptocurrencies such as Bitcoin, Ripple, and the like. The system is deployed in AWS and uses EC2, DynamoDB, S3, and CloudFront to process the payments. Since they are accepting credit card information from the users, they are required to be compliant with the Payment Card Industry Data Security Standard (PCI DSS). In a recent third-party audit, it was found that the credit card numbers are not properly encrypted and hence, their system failed the PCI DSS compliance test. You were hired by the fintech startup to solve this issue so they can release the product in the market as soon as possible. In addition, you also have to improve performance by increasing the proportion of viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content.
In this scenario, what is the best option to protect and encrypt the sensitive credit card information of the users and to improve the cache hit ratio of your CloudFront distribution?
- Add a custom SSL in the CloudFront distribution. Configure your origin to add `User-Agent` and `Host` headers to your objects to increase your cache hit ratio.
- Configure the CloudFront distribution to use Signed URLs. Configure your origin to add a `Cache-Control max-age` directive to your objects, and specify the longest practical value for `max-age` to increase your cache hit ratio.
- Create an origin access control (OAC) and add it to the CloudFront distribution. Configure your origin to add `User-Agent` and `Host` headers to your objects to increase your cache hit ratio.
- Configure the CloudFront distribution to enforce secure end-to-end connections to origin servers by using HTTPS and field-level encryption. Configure your origin to add a `Cache-Control max-age` directive to your objects, and specify the longest practical value for `max-age` to increase your cache hit ratio.
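As a rough illustration of CloudFront field-level encryption mentioned in the options above, here is a hedged boto3 sketch (the key file path, profile names, and the `card-number` field pattern are assumptions) that registers an RSA public key and creates a field-level encryption profile:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# 1. Register the RSA public key that CloudFront will use to encrypt the
#    sensitive field at the edge. The PEM file path is a placeholder.
public_key = cloudfront.create_public_key(
    PublicKeyConfig={
        "CallerReference": "fle-key-2024",
        "Name": "credit-card-encryption-key",
        "EncodedKey": open("public_key.pem").read(),  # PEM-encoded RSA public key
    }
)

# 2. Create a field-level encryption profile that encrypts only the form
#    field carrying the card number in POST requests.
cloudfront.create_field_level_encryption_profile(
    FieldLevelEncryptionProfileConfig={
        "Name": "credit-card-fle-profile",
        "CallerReference": "fle-profile-2024",
        "Comment": "Encrypt card numbers before they reach the origin",
        "EncryptionEntities": {
            "Quantity": 1,
            "Items": [{
                "PublicKeyId": public_key["PublicKey"]["Id"],
                "ProviderId": "payment-app",
                "FieldPatterns": {"Quantity": 1, "Items": ["card-number"]},
            }],
        },
    }
)
```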
Question 4
A global financial company is launching its new trading platform in AWS, which allows people to buy and sell Bitcoin, Ethereum, Ripple, and other cryptocurrencies, as well as access various financial reports. To meet the anti-money laundering and counter-terrorist financing (AML/CFT) compliance measures, all report files of the trading platform must not be accessible in certain countries that are listed in the Financial Action Task Force (FATF) list of non-cooperative countries or territories. You were given the task of ensuring that the company complies with this requirement to avoid hefty monetary penalties. In this scenario, what is the best way to satisfy this security requirement in AWS while still delivering content to users around the globe with lower latency?
- Deploy the trading platform using Elastic Beanstalk and deny all incoming traffic from the IP addresses of the blacklisted countries in the Network Access Control List (ACL) of the VPC.
- Use Route 53 with a Geolocation routing policy that blocks all traffic from the blacklisted countries.
- Use Route 53 with a Geoproximity routing policy that blocks all traffic from the blacklisted countries.
- Create a CloudFront distribution with Geo-Restriction enabled to block all of the blacklisted countries from accessing the trading platform.
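To show what CloudFront Geo-Restriction looks like in practice, here is a minimal boto3 sketch; the distribution ID and the country codes are placeholders:

```python
import boto3

cloudfront = boto3.client("cloudfront")
distribution_id = "E1EXAMPLE"  # hypothetical distribution ID

# Fetch the current configuration together with its ETag (required for updates).
response = cloudfront.get_distribution_config(Id=distribution_id)
config = response["DistributionConfig"]

# Deny requests coming from the restricted countries (ISO 3166-1 alpha-2 codes).
blocked_countries = ["KP", "IR"]  # example codes only
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",
        "Quantity": len(blocked_countries),
        "Items": blocked_countries,
    }
}

cloudfront.update_distribution(
    Id=distribution_id,
    DistributionConfig=config,
    IfMatch=response["ETag"],
)
```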
Question 5
A company is using AWS Organizations to manage their multi-account and multi-region AWS infrastructure. They are currently doing large-scale automation for their key daily processes to save costs. One of these key processes is sharing specified AWS resources, which an organizational account owns, with other AWS accounts of the company using AWS RAM. There is already an existing service which was previously managed by a separate organization account moderator, who also maintained the specific configuration details.
In this scenario, what could be a simple and effective solution that would allow the service to perform its tasks on the organization accounts on the moderator’s behalf?
- Attach an IAM role to the service detailing all the allowed actions that it will be able to perform. Install an SSM agent in each of the worker VMs. Use AWS Systems Manager to build automation workflows that involve the daily key processes.
- Use trusted access by running the `enable-sharing-with-aws-organization` command in the AWS RAM CLI. Mirror the configuration changes that were performed by the account that previously managed this service.
- Configure a service-linked role for AWS RAM and modify the permissions policy to specify what the role can and cannot do. Lastly, modify the trust policy of the role so that other processes can utilize AWS RAM.
- Enable cross-account access with AWS Organizations in the Resource Access Manager Console. Mirror the configuration changes that were performed by the account that previously managed this service.
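For reference, enabling trusted access for AWS RAM and sharing a resource can be sketched with boto3 as follows; the resource and principal ARNs are placeholders:

```python
import boto3

ram = boto3.client("ram")

# One-time step: enable trusted access so resources can be shared with the whole
# organization (the API behind `aws ram enable-sharing-with-aws-organization`).
ram.enable_sharing_with_aws_organization()

# Share a resource with an organizational unit; the ARNs below are placeholders.
ram.create_resource_share(
    name="shared-subnets",
    resourceArns=["arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234"],
    principals=["arn:aws:organizations::111122223333:ou/o-exampleorgid/ou-exampleouid"],
    allowExternalPrincipals=False,  # keep sharing inside the organization only
)
```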
Question 6
A telecommunications company is planning to host a WordPress website on an Amazon ECS Cluster which uses the Fargate launch type. For security purposes, the database credentials should be provided to the WordPress image by using environment variables. Your manager instructed you to ensure that the credentials are secure when passed to the image and that they cannot be viewed on the cluster itself. The credentials must be kept in a dedicated storage with lifecycle management and key rotation.
Which of the following is the most suitable solution in this scenario that you can implement with the least effort?
- Store the database credentials using the AWS Systems Manager Parameter Store and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role and reference it with your task definition, which allows access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container.
- In the ECS task definition file of the ECS Cluster, store the database credentials and encrypt them with KMS. Store the task definition JSON file in a private S3 bucket and ensure that HTTPS is enabled on the bucket to encrypt the data in-flight. Create an IAM role for the ECS task definition script that allows access to the specific S3 bucket, and then pass the `--cli-input-json` parameter when calling the ECS `register-task-definition` command. Reference the task definition JSON file in the S3 bucket which contains the database credentials.
- Store the database credentials using the AWS Secrets Manager and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role and reference it with your task definition which allows access to both KMS and AWS Secrets Manager. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Secrets Manager secret which contains the sensitive data, to present to the container.
- Migrate the container cluster to Amazon Elastic Kubernetes Service (EKS). Create manifest files for deployment and use Kubernetes Secrets objects to store the database credentials. Reference the secrets in the manifest file using `secretKeyRef` to use them as environment variables. Configure EKS to rotate the secret values automatically.
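As a concrete illustration of injecting a Secrets Manager secret into a Fargate task, here is a minimal boto3 sketch of a task definition; the role ARN, secret ARN, image, and sizing values are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# The task execution role must allow secretsmanager:GetSecretValue (and
# kms:Decrypt if a customer managed key is used). ARNs are placeholders.
ecs.register_task_definition(
    family="wordpress",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "wordpress",
        "image": "wordpress:latest",
        "essential": True,
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        # The secret value is injected as an environment variable at run time
        # and never appears in plain text in the task definition or the console.
        "secrets": [{
            "name": "WORDPRESS_DB_PASSWORD",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:wordpress-db-abc123",
        }],
    }],
)
```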
Question 7
A leading media company has a hybrid architecture where its on-premises data center is connected to AWS via a Direct Connect connection. They also have a repository of over 50 TB of digital videos and media files. These files are stored on their on-premises tape library and are used by their Media Asset Management (MAM) system. Due to the sheer size of their data, they want to implement an automated catalog system that will enable them to search their files using facial recognition. The catalog will store the faces of the people who are present in these videos, including a still image of each person. Eventually, the media company would like to migrate these media files to AWS, including the MAM video contents.
Which of the following options provides a solution which uses the LEAST amount of ongoing management overhead and will cause MINIMAL disruption to the existing system?
- Integrate the file system of your local data center with AWS Storage Gateway by setting up a file gateway appliance on-premises. Utilize the MAM solution to extract the media files from the current data store and send them into the file gateway. Build a collection using Amazon Rekognition by populating a catalog of faces from the processed media files. Use an AWS Lambda function to invoke the Amazon Rekognition JavaScript SDK to have it fetch the media file from the S3 bucket which is backing the file gateway, retrieve the needed metadata, and finally, persist the information into the MAM solution.
- Use Amazon Kinesis Video Streams to set up a video ingestion stream and, with Amazon Rekognition, build a collection of faces. Stream the media files from the MAM solution into Kinesis Video Streams and configure Amazon Rekognition to process the streamed files. Launch a stream consumer to retrieve the required metadata, and push the metadata into the MAM solution. Finally, configure the stream to store the files in an S3 bucket.
- Set up a tape gateway appliance on-premises and connect it to your AWS Storage Gateway. Configure the MAM solution to fetch the media files from the current archive and push them into the tape gateway to be stored in Amazon Glacier. Using Amazon Rekognition, build a collection from the catalog of faces. Utilize a Lambda function which invokes the Rekognition JavaScript SDK to have Amazon Rekognition process the video directly from the tape gateway in real time, retrieve the required metadata, and push the metadata into the MAM solution.
- Request an AWS Snowball Storage Optimized device to migrate all of the media files from the on-premises library into Amazon S3. Provision a large EC2 instance and allow it to access the S3 bucket. Install an open-source facial recognition tool on the instance like OpenFace or OpenCV. Process the media files to retrieve the metadata and push this information into the MAM solution. Lastly, copy the media files to another S3 bucket.
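To make the Amazon Rekognition collection idea more concrete, here is a minimal boto3 sketch; the collection ID, bucket names, and object keys are hypothetical:

```python
import boto3

rekognition = boto3.client("rekognition")

# Create the catalog (collection) of known faces once.
rekognition.create_collection(CollectionId="media-faces")

# Index a still image of each person; the bucket/key point at stills extracted
# from the archived media files (placeholders).
rekognition.index_faces(
    CollectionId="media-faces",
    Image={"S3Object": {"Bucket": "media-stills", "Name": "people/jane-doe.jpg"}},
    ExternalImageId="jane-doe",  # identifier stored alongside the face vectors
)

# Later, frames from a video can be matched against the collection.
match = rekognition.search_faces_by_image(
    CollectionId="media-faces",
    Image={"S3Object": {"Bucket": "media-frames", "Name": "video-123/frame-0042.jpg"}},
)
```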
Question 8
An IT consultancy company has multiple offices located in San Francisco, Frankfurt, Tokyo, and Manila. The company is using AWS Organizations to easily manage its several AWS accounts which are being used by its regional offices and subsidiaries. A new AWS account was recently added to a specific organizational unit (OU) which is responsible for the overall systems administration. The solutions architect noticed that the account is using a root-created Amazon ECS Cluster with an attached service-linked role. For regulatory purposes, the solutions architect created a custom SCP that would deny the new account from performing certain actions in relation to using ECS. However, after applying the policy, the new account could still perform the actions that it was supposed to be restricted from doing.
Which of the following is the most likely reason for this problem?
- The default SCP grants all permissions and is attached to every root, OU, and account. To apply stricter permissions, this policy is required to be modified.
- There is an SCP attached to a higher-level OU that permits the actions of the service-linked role. This permission would therefore be inherited by the current OU, and override the SCP placed by the administrator.
- The ECS service is being run outside the jurisdiction of the organization. SCPs affect only the principals that are managed by accounts that are part of the organization.
- SCPs do not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can’t be restricted by SCPs.
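For context, creating and attaching an SCP with boto3 looks roughly like the sketch below; the denied actions and the target account ID are placeholders:

```python
import json
import boto3

org = boto3.client("organizations")

# Deny selected ECS actions for every IAM principal in the target account.
# Note: SCPs never restrict service-linked roles, so actions performed through
# a role such as AWSServiceRoleForECS are unaffected by this policy.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ecs:CreateCluster", "ecs:RunTask"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-ecs-actions",
    Description="Restrict selected ECS actions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",  # placeholder account ID
)
```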
Question 9
A government agency has multiple VPCs in various AWS regions across the United States that need to be linked up to an on-premises central office network in Washington, D.C. The central office requires inter-region VPC access over a private network that is dedicated to each region for enhanced security and more predictable data transfer performance. Your team is tasked with quickly building this network mesh while minimizing the management overhead of maintaining these connections.
Which of the following options is the most secure, highly available, and durable solution that you should use to set up this kind of interconnectivity?
- Create a link aggregation group (LAG) in the central office network to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Use AWS Direct Connect Gateway to achieve inter-region VPC access to all of your AWS resources. Create a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway.
- Implement a hub-and-spoke network topology in each region that routes all traffic through a network transit center using AWS Transit Gateway. Route traffic between VPCs and the on-premises network over AWS Site-to-Site VPN.
- Utilize AWS Direct Connect Gateway for inter-region VPC access. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.
- Enable inter-region VPC peering which allows peering relationships to be established between VPCs across different AWS regions. This will ensure that the traffic will always stay on the global AWS backbone and will never traverse the public Internet.
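As an illustration of the Direct Connect gateway approach referenced in the options, here is a minimal boto3 sketch; the gateway name, ASNs, virtual private gateway IDs, and connection ID are placeholders:

```python
import boto3

dx = boto3.client("directconnect")

# Create the Direct Connect gateway that all regional VPCs will attach to.
gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="central-office-dxgw",
    amazonSideAsn=64512,
)
gateway_id = gateway["directConnectGateway"]["directConnectGatewayId"]

# Associate the virtual private gateway of each regional VPC (IDs are placeholders).
for vgw_id in ["vgw-0east1example", "vgw-0west2example"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gateway_id,
        virtualGatewayId=vgw_id,
    )

# Attach a private virtual interface on the physical DX connection to the gateway.
dx.create_private_virtual_interface(
    connectionId="dxcon-example",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "central-office-pvif",
        "vlan": 101,
        "asn": 65000,  # the on-premises BGP ASN
        "directConnectGatewayId": gateway_id,
    },
)
```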
Question 10
A stock brokerage firm hosts its legacy application on Amazon EC2 in a private subnet of its Amazon VPC. The application is accessed by employees from their corporate laptops through a proprietary desktop program. The company network is connected to the VPC through an AWS Direct Connect (DX) connection, which provides fast and reliable access to the private EC2 instances inside the VPC. To comply with the strict security requirements of financial institutions, the firm is required to encrypt the network traffic that flows from the employees' laptops to the resources inside the VPC.
Which of the following solutions will comply with this requirement while maintaining the consistent network performance of Direct Connect?
- Using the current Direct Connect connection, create a new public virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the Internet. Configure the employees’ laptops to connect to this VPN.
- Using the current Direct Connect connection, create a new public virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the company network to route employee traffic to this VPN.
- Using the current Direct Connect connection, create a new private virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the Internet. Configure the employees’ laptops to connect to this VPN.
- Using the current Direct Connect connection, create a new private virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the company network to route employee traffic to this VPN.
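To illustrate the Site-to-Site VPN piece of these options, here is a minimal boto3 sketch of creating a BGP-based VPN connection; the customer gateway IP address and virtual private gateway ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Register the on-premises VPN endpoint (a public IP reachable over the
# virtual interface); values are placeholders.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,
    PublicIp="203.0.113.10",
    Type="ipsec.1",
)

# Create the Site-to-Site VPN that terminates on the VPC's virtual private gateway.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId="vgw-0123456789abcdef0",
    Type="ipsec.1",
    Options={"StaticRoutesOnly": False},  # use BGP for dynamic routing
)
```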
For more practice questions like these and to further prepare you for the actual AWS Certified Solutions Architect Professional SAP-C02 exam, we recommend that you take our top-notch AWS Certified Solutions Architect Professional Practice Exams, which have been regarded as the best in the market.
Also check out our AWS Certified Solutions Architect Professional SAP-C02 Exam Study Guide here.