
AWS Certified SysOps Administrator Associate SOA-C02 Sample Exam Questions


Last updated on February 21, 2024

Here are 10 AWS Certified SysOps Administrator Associate SOA-C02 practice exam questions to help you gauge your readiness for the actual exam.

Question 1

A financial start-up has recently adopted a hybrid cloud infrastructure with AWS Cloud. They are planning to migrate their online payments system that supports an IPv6 address and uses an Oracle database in a RAC configuration. As the AWS Consultant, you have to make sure that the application can initiate outgoing traffic to the Internet but blocks any incoming connection from the Internet.

Which of the following options would you do to properly migrate the application to AWS?

  1. Migrate the Oracle database to an EC2 instance. Launch an EC2 instance to host the application and then set up a NAT Instance.
  2. Migrate the Oracle database to RDS. Launch an EC2 instance to host the application and then set up a NAT gateway instead of a NAT instance for better availability and higher bandwidth.
  3. Migrate the Oracle database to RDS. Launch the application on a separate EC2 instance and then set up a NAT Instance.
  4. Migrate the Oracle database to an EC2 instance. Launch the application on a separate EC2 instance and then set up an egress-only Internet gateway.

Correct Answer: 4

An egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.

An instance in your public subnet can connect to the Internet through the Internet gateway if it has a public IPv4 address or an IPv6 address. Similarly, resources on the Internet can initiate a connection to your instance using its public IPv4 address or its IPv6 address; for example, when you connect to your instance using your local computer.

IPv6 addresses are globally unique, and are therefore public by default. If you want your instance to be able to access the Internet but want to prevent resources on the Internet from initiating communication with your instance, you can use an egress-only Internet gateway. To do this, create an egress-only Internet gateway in your VPC, and then add a route to your route table that points all IPv6 traffic (::/0) or a specific range of IPv6 address to the egress-only Internet gateway. IPv6 traffic in the subnet that’s associated with the route table is routed to the egress-only Internet gateway.

Remember that a NAT device in your private subnet does not support IPv6 traffic. As an alternative, create an egress-only Internet gateway for your private subnet to enable outbound communication to the Internet over IPv6 and prevent inbound communication. An egress-only Internet gateway supports IPv6 traffic only.
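The route described above can be sketched as a Python dictionary in the shape accepted by the EC2 CreateRoute API (for example, boto3's `create_route`). The resource IDs below are hypothetical placeholders, not values from the scenario.

```python
# Hypothetical resource IDs for illustration only.
route_table_id = "rtb-0123456789abcdef0"
egress_only_igw_id = "eigw-0123456789abcdef0"

# Route all outbound IPv6 traffic (::/0) to the egress-only Internet gateway.
# Inbound IPv6 connections from the Internet cannot be initiated through it.
create_route_params = {
    "RouteTableId": route_table_id,
    "DestinationIpv6CidrBlock": "::/0",
    "EgressOnlyInternetGatewayId": egress_only_igw_id,
}

print(create_route_params["DestinationIpv6CidrBlock"])
```

With boto3, these parameters would be passed as `ec2_client.create_route(**create_route_params)` against the route table of the subnet that needs outbound-only IPv6 access.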

Take note that the application that will be migrated is using an Oracle database on a RAC configuration, which is not supported by RDS.

Hence, the correct answer is: Migrate the Oracle database to an EC2 instance. Launch the application on a separate EC2 instance and then set up an egress-only Internet gateway.

The options that say: Migrate the Oracle database to an EC2 instance. Launch an EC2 instance to host the application and then set up a NAT instance and Migrate the Oracle database to RDS. Launch the application on a separate EC2 instance and then set up a NAT instance are incorrect because a NAT instance does not support IPv6 traffic. You have to use an egress-only Internet gateway instead. In addition, RDS does not support Oracle RAC, which is why you have to launch the database on an EC2 instance.

The option that says: Migrate the Oracle database to RDS. Launch an EC2 instance to host the application and then set up a NAT gateway instead of a NAT instance for better availability and higher bandwidth is incorrect as RDS does not support Oracle RAC. Although it is true that a NAT gateway provides better availability and higher bandwidth than a NAT instance, it still does not support IPv6 traffic, unlike an egress-only Internet gateway.

References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html
https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html

 

Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/

Question 2

A leading tech consultancy firm has an AWS Virtual Private Cloud (VPC) with one public subnet. They have recently deployed a new blockchain application to an EC2 instance. After a month, management has decided that the application should be modified to also support IPv6 addresses.

Which of the following should you do to satisfy the requirement?

Option 1

  1. Associate a NAT Gateway with your VPC and Subnets
  2. Update the Route Tables and Security Group Rules
  3. Enable Enhanced Networking in your EC2 instance
  4. Assign IPv6 Addresses to the EC2 Instance

Option 2

  1. Attach an Egress-Only Internet Gateway to the VPC and Subnets
  2. Update the Route Tables
  3. Update the Security Group Rules
  4. Assign IPv6 Addresses to the EC2 instance
  5. Configure the instance to use DHCPv6

Option 3

  1. Associate an IPv6 CIDR Block with the VPC and Subnets
  2. Update the Route Tables
  3. Update the Security Group Rules
  4. Assign IPv6 Addresses to the EC2 Instance

Option 4

  1. Enable Enhanced Networking in your EC2 instance
  2. Update the Route Tables
  3. Update the Security Group Rules
  4. Assign IPv6 Addresses to the EC2 Instance

Correct Answer: 3

If you have an existing VPC that supports IPv4 only, and resources in your subnet that are configured to use IPv4 only, you can enable IPv6 support for your VPC and resources. Your VPC can operate in dual-stack mode — your resources can communicate over IPv4, or IPv6, or both. IPv4 and IPv6 communication are independent of each other. You cannot disable IPv4 support for your VPC and subnets; this is the default IP addressing system for Amazon VPC and Amazon EC2.

The following provides an overview of the steps to enable your VPC and subnets to use IPv6:

Step 1:

Associate an IPv6 CIDR Block with Your VPC and Subnets – Associate an Amazon-provided IPv6 CIDR block with your VPC and with your subnets.

Step 2:

Update Your Route Tables – Update your route tables to route your IPv6 traffic. For a public subnet, create a route that routes all IPv6 traffic from the subnet to the Internet gateway. For a private subnet, create a route that routes all Internet-bound IPv6 traffic from the subnet to an egress-only Internet gateway.

Step 3:

Update Your Security Group Rules – Update your security group rules to include rules for IPv6 addresses. This enables IPv6 traffic to flow to and from your instances. If you’ve created custom network ACL rules to control the flow of traffic to and from your subnet, you must include rules for IPv6 traffic.
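A security group rule that admits IPv6 traffic can be sketched as a Python dictionary in the shape used by the EC2 AuthorizeSecurityGroupIngress API (boto3's `IpPermissions` structure). The port and description are illustrative.

```python
# Ingress permission allowing inbound HTTPS from any IPv6 address, in the
# IpPermissions shape used by authorize_security_group_ingress.
ipv6_https_rule = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    # Ipv6Ranges is the IPv6 counterpart of IpRanges; ::/0 means any source.
    "Ipv6Ranges": [{"CidrIpv6": "::/0", "Description": "HTTPS over IPv6"}],
}

print(ipv6_https_rule["Ipv6Ranges"][0]["CidrIpv6"])
```

Note that existing IPv4 rules (`IpRanges` with `0.0.0.0/0`) are untouched; IPv6 rules are added alongside them because the VPC operates in dual-stack mode.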

Step 4:

Assign IPv6 Addresses to Your Instances – Assign IPv6 addresses to your instances from the IPv6 address range of your subnet.

Hence, the correct answer is:

1. Associate an IPv6 CIDR Block with the VPC and Subnets

2. Update the Route Tables

3. Update the Security Group Rules

4. Assign IPv6 Addresses to the EC2 Instance

The option with the step that says: Enable Enhanced Networking in your EC2 instance is incorrect because Enhanced Networking is not required to enable IPv6. Enhanced Networking simply improves network performance for EC2 instances through higher bandwidth and lower latency; it does not relate to the management of IPv6 addresses.

The option with the step that says: Associate a NAT Gateway with your VPC and Subnets is incorrect because a NAT Gateway is mainly used to allow instances in a private subnet to initiate outbound Internet traffic; it does not facilitate inbound traffic and is not related to enabling IPv6 support. What you need to associate with your VPC and subnets is an IPv6 CIDR block, not a NAT Gateway.

The option with the step that says: Attach an Egress-Only Internet Gateway to the VPC and Subnets is incorrect because this type of gateway simply enables outbound-only access to the Internet over IPv6 from your VPC. The use of an Egress-Only Internet Gateway is not warranted in this scenario.

References:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-migrate-ipv6.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html

Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/

Question 3

A company uses Amazon Route 53 to register the domain name of an online timesheet application named: “www.tutorialsdojo-timesheet.com” and deployed the application on ECS. After a few months, a new version of the timesheet application is ready to be deployed which contains bug fixes and new features. The DevOps team launched a separate ECS instance for the new version and they instructed you to direct the initial set of traffic to the new version so they can do their production verification tests. Once verified that the new version is working, you can now totally route all traffic coming from the www.tutorialsdojo-timesheet.com domain to the new ECS instance.

Which of the following would you do to smoothly deploy the new application version?

  1. Launch a resource record based on the Geoproximity routing policy
  2. Launch a resource record based on the Latency routing policy
  3. Launch 2 resource records based on the Failover Routing policy
  4. Launch 2 resource records based on the Weighted Routing policy

Correct Answer: 4

Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.

To configure weighted routing, you create records that have the same name and type for each of your resources. You assign each record a relative weight that corresponds with how much traffic you want to send to each resource. Amazon Route 53 sends traffic to a resource based on the weight that you assign to the record as a proportion of the total weight for all records in the group.

For example, if you want to send a tiny portion of your traffic to one resource and the rest to another resource, you might specify weights of 1 and 255. The resource with a weight of 1 gets 1/256th of the traffic (1/(1+255)), and the other resource gets 255/256ths (255/(1+255)). You can gradually change the balance by changing the weights. If you want to stop sending traffic to a resource, you can change the weight for that record to 0.
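The weight arithmetic above can be expressed as a small helper. This is a sketch of how Route 53 apportions traffic, not an AWS API; the record names are made up.

```python
def traffic_share(weights):
    """Fraction of traffic Route 53 sends to each record under weighted routing:
    each record gets its weight divided by the total weight of the group."""
    total = sum(weights.values())
    if total == 0:
        # Per Route 53's documented behavior, all-zero weights mean
        # traffic is routed to all records with equal probability.
        return {name: 1 / len(weights) for name in weights}
    return {name: w / total for name, w in weights.items()}

# The example from the text: weights of 1 and 255.
shares = traffic_share({"new-version": 1, "old-version": 255})
print(shares["new-version"])  # 1/256 ≈ 0.0039
```

Gradually raising the new version's weight (e.g. 1 → 64 → 128 → 255) while lowering the old one implements the canary-style rollout the scenario asks for.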

Hence, the correct answer is: Launching 2 resource records based on the Weighted Routing policy.

The option that says: Launching a resource record based on the Geoproximity routing policy is incorrect as this is only used when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.

The option that says: Launching a resource record based on the Latency routing policy is incorrect as this is used when you have resources in multiple AWS Regions, and you want to route traffic to the region that provides the best latency.

The option that says: Launching 2 resource records based on the Failover Routing policy is incorrect because this type is only used when you want to route traffic to a resource when the resource is healthy or to a different resource when the first resource is unhealthy.

Reference:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted

Check out this Amazon Route 53 Cheat Sheet:
https://tutorialsdojo.com/amazon-route-53/

Latency Routing vs. Geoproximity Routing vs. Geolocation Routing:
https://tutorialsdojo.com/latency-routing-vs-geoproximity-routing-vs-geolocation-routing/

Comparison of AWS Services Cheat Sheets:
https://tutorialsdojo.com/comparison-of-aws-services/

Question 4

A retail company is using AWS Organizations to manage user accounts. The consolidated billing feature is enabled to consolidate billing and payment for multiple AWS accounts. Member account owners requested to get the benefits of Reserved Instances (RIs) but they don’t want to share RIs with other members of the AWS Organization.

Which steps should the SysOps administrator perform to achieve the requirements?

  1. Go to Billing Preferences in the management account and disable RI discount sharing. Then, purchase the RIs using individual member accounts.
  2. Go to Billing Preferences in the management account and disable RI discount sharing. Then, purchase the RIs using the management account.
  3. Disable RI discount sharing in each of the member accounts. Then, purchase the RIs using the management account.
  4. Disable RI discount sharing in each of the member accounts. Then, purchase RIs in the member accounts only.

Correct Answer: 1

RI discounts apply to accounts in an organization’s consolidated billing family depending upon whether RI sharing is turned on or off for the accounts. By default, RI sharing for all accounts in an organization is turned on. The management account of an organization can change this setting by turning off RI sharing for an account.

If RI sharing is turned off for an account in an organization, then:

– RI discounts apply only to the account that purchased the RIs.

– RI discounts from other accounts in the organization’s consolidated billing family don’t apply.

– The charges accrued on that account are still added to the organization’s consolidated bill and are paid by the management account.

Hence, the correct answer is: Go to Billing Preferences in the management account and disable RI discount sharing. Then, purchase the RIs using individual member accounts.

The option that says: Go to Billing Preferences in the management account and disable RI discount sharing. Then, purchase the RIs using the management account is incorrect because you need to purchase the RIs on the individual account for the RI discounts to apply on the member account.

The option that says: Disable RI discount sharing in each of the member accounts. Then, purchase the RIs using the management account is incorrect because you can’t modify RI discount sharing on member accounts. You can only perform the action using the management account.

The option that says: Disable RI discount sharing in each of the member accounts. Then, purchase RIs in the member accounts only is incorrect because a member account does not have the privilege to disable RI discount sharing. This action can only be performed by the management account in the organization.

References:
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-ri-consolidated-billing/
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html

Check out our AWS Billing and Cost Management Cheat Sheet:
https://tutorialsdojo.com/aws-billing-and-cost-management/

Question 5

A real-estate company is hosting a website on a set of Amazon EC2 instances behind an Application Load Balancer. The SysOps administrator used CloudFront for its content distribution and set the ALB as the origin. He also created a CNAME record in Route 53 that sends all traffic through the CloudFront distribution. Users started to report that they are being served with the desktop version of the website when using mobile phones.

Which action can help the SysOps administrator resolve the issue?

  1. Set the cache behavior of the CloudFront distribution to forward the User-Agent header.
  2. Update the CloudFront distribution origin settings. Add a User-Agent header to the list of origin custom headers.
  3. Activate the Enable IPv6 setting on the Application Load Balancer (ALB). Update origin settings of the CloudFront distribution to use the dualstack endpoint.
  4. Activate the dualstack setting on the Application Load Balancer (ALB).

Correct Answer: 2

If you want CloudFront to cache different versions of your objects based on the device that a user is using to view your content, we recommend that you configure CloudFront to forward one or more of the following headers to your custom origin:

– CloudFront-Is-Desktop-Viewer

– CloudFront-Is-Mobile-Viewer

– CloudFront-Is-SmartTV-Viewer

– CloudFront-Is-Tablet-Viewer

Based on the value of the User-Agent header, CloudFront sets the value of these headers to true or false before forwarding the request to your origin. If a device falls into more than one category, more than one value might be true. For example, for some tablet devices, CloudFront might set both CloudFront-Is-Mobile-Viewer and CloudFront-Is-Tablet-Viewer to true.
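The header derivation can be illustrated with a toy function. This is a rough sketch of the idea only; CloudFront's actual device-detection logic is internal and far more thorough, and the User-Agent substrings below are illustrative assumptions.

```python
def device_headers(user_agent):
    """Toy illustration of deriving CloudFront device headers from User-Agent.
    Values are the strings "true"/"false", as CloudFront sends them."""
    ua = user_agent.lower()
    is_tablet = "ipad" in ua or "tablet" in ua
    # As noted above, a tablet can count as both mobile and tablet.
    is_mobile = is_tablet or "mobile" in ua or "iphone" in ua or "android" in ua
    return {
        "CloudFront-Is-Mobile-Viewer": str(is_mobile).lower(),
        "CloudFront-Is-Tablet-Viewer": str(is_tablet).lower(),
        "CloudFront-Is-Desktop-Viewer": str(not is_mobile).lower(),
    }

print(device_headers("Mozilla/5.0 (iPhone; CPU iPhone OS 16_0)"))
```

The origin application then branches on these forwarded headers (rather than on the raw User-Agent) to serve the mobile or desktop version of the site.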

Hence, the correct answer is: Update the CloudFront distribution origin settings. Add a User-Agent header to the list of origin custom headers.

The option that says: Set the cache behavior of the CloudFront distribution to forward the User-Agent header is incorrect because you can’t set the cache behavior of a CloudFront distribution to forward the User-Agent header. This is configured in the Origin Custom Headers setting.

The option that says: Activate the Enable IPv6 setting on the Application Load Balancer (ALB). Update origin settings of the CloudFront distribution to use the dualstack endpoint is incorrect because the Enable IPv6 setting is only used for a CloudFront distribution if you have users on IPv6 networks who want to access your content.

The option that says: Activate the dualstack setting on the Application Load Balancer (ALB) is incorrect because this is only used to associate an IPv6 address to an internet-facing load balancer.

References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/add-origin-custom-headers.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html

Check out this Amazon CloudFront Cheat Sheet:
https://tutorialsdojo.com/amazon-cloudfront/

Question 6

A company has recently adopted a hybrid cloud infrastructure. They plan to establish a dedicated connection between their on-premises network and their Amazon VPC. In the next couple of months, they will migrate their applications and move their data from their on-premises network to AWS, which is why they need a more consistent network experience than Internet-based connections.

Which of the following options should be implemented for this scenario?

  1. Set up a VPN Connection
  2. Set up a Direct Connect connection
  3. Set up a VPC peering
  4. Set up an AWS VPN CloudHub

Correct Answer: 2

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

The requirement in the scenario is a dedicated connection between their on-premises network and their Amazon VPC. Among the options given, only AWS Direct Connect can satisfy this requirement. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations.

Hence, the correct answer is: Set up a Direct Connect connection.

The options that say: Set up a VPN Connection and Set up an AWS VPN CloudHub are incorrect because a VPN is an Internet-based connection, unlike Direct Connect which provides a dedicated connection. An Internet-based connection means that the traffic from the VPC and to the on-premises network traverses the public Internet, which is why it is slow. You should use Direct Connect instead.

The option that says: Set up a VPC peering is incorrect because VPC Peering is mainly used to connect two or more VPCs.

References:
https://aws.amazon.com/directconnect/
https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

Check out this AWS Direct Connect Cheat Sheet:
https://tutorialsdojo.com/aws-direct-connect/

Question 7

A financial company is launching an online web portal that will be hosted in an Auto Scaling group of Amazon EC2 instances across multiple Availability Zones behind an Application Load Balancer (ALB). To allow HTTP and HTTPS traffic, the SysOps Administrator configured the Network ACL and the Security Group of both the ALB and EC2 instances to allow inbound traffic on ports 80 and 443. However, the online portal is still unreachable over the public internet after the deployment.

How can the Administrator fix this issue?

  1. In the Security Group, add a new rule to allow outbound traffic on port 80 and port 443.
  2. Allow ephemeral ports in the Security Group by adding a new rule to allow outbound traffic on ports 1024 – 65535.
  3. Allow ephemeral ports in the Network ACL by adding a new rule to allow outbound traffic on ports 1024 – 65535.
  4. In the Network ACL, add a new rule to allow inbound traffic on ports 1024 – 65535.

Correct Answer: 3

To enable the connection to a service running on an instance, the associated network ACL must allow both inbound traffic on the port that the service is listening on and outbound traffic to ephemeral ports. When a client connects to a server, a random port from the ephemeral port range (1024-65535) becomes the client’s source port.

The designated ephemeral port then becomes the destination port for return traffic from the service, so outbound traffic to that ephemeral port must be allowed in the network ACL. By default, network ACLs allow all inbound and outbound traffic. If your network ACL is more restrictive, then you need to explicitly allow traffic to the ephemeral port range.

You might want to use a different range for your network ACLs depending on the type of client that you’re using or with which you’re communicating. The client that initiates the request chooses the ephemeral port range. The range varies depending on the client’s operating system.

– Many Linux kernels (including the Amazon Linux kernel) use ports 32768-61000.

– Requests originating from Elastic Load Balancing use ports 1024-65535.

– Windows operating systems through Windows Server 2003 use ports 1025-5000.

– Windows Server 2008 and later versions use ports 49152-65535.

– A NAT gateway uses ports 1024-65535.

– AWS Lambda functions use ports 1024-65535.

For example, if a request comes into a web server in your VPC from a Windows XP client on the Internet, your network ACL must have an outbound rule to enable traffic destined for ports 1025-5000.

If an instance in your VPC is the client initiating a request, your network ACL must have an inbound rule to enable traffic destined for the ephemeral ports specific to the type of instance (Amazon Linux, Windows Server 2008, and so on).

In practice, to cover the different types of clients that might initiate traffic to public-facing instances in your VPC, you can open ephemeral ports 1024-65535. However, you can also add rules to the ACL to deny traffic on any malicious ports within that range. Ensure that you place the deny rules earlier in the table than the allow rules that open the wide range of ephemeral ports.
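The ordered, first-match evaluation of stateless NACL rules can be sketched with a small simulator. This is an illustration of the evaluation order only, not an AWS API; the rule numbers and the blocked port are made-up examples.

```python
def nacl_allows(rules, port):
    """Evaluate stateless NACL rules the way AWS does: ascending rule-number
    order, first rule whose port range matches wins; unmatched traffic hits
    the implicit '*' rule and is denied."""
    for _rule_number, action, low, high in sorted(rules):
        if low <= port <= high:
            return action == "allow"
    return False  # implicit '*' deny

# Outbound rules per the advice above: a deny for one malicious port placed
# BEFORE the broad allow for the ephemeral range.
outbound = [
    (90, "deny", 3389, 3389),     # hypothetical: block a known-bad port first
    (100, "allow", 1024, 65535),  # ephemeral ports for return traffic
]

print(nacl_allows(outbound, 50000))  # True: return traffic flows out
print(nacl_allows(outbound, 3389))   # False: the earlier deny rule matches
```

Swapping the two rule numbers would defeat the deny, since the broad allow at a lower number would match first; this is why the deny rules must come earlier in the table.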

Since NACLs are stateless, you will also need to explicitly add outbound allow rules for ports 80 and 443.

Hence, the correct answer is: Allow ephemeral ports in the Network ACL by adding a new rule to allow outbound traffic on ports 1024 – 65535.

The option that says: In the Security Group, add a new rule to allow outbound traffic on port 80 and port 443 is incorrect because security groups are stateful, which means that if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Conversely, responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.

The option that says: In the Network ACL, add a new rule to allow inbound traffic on ports 1024 – 65535 is incorrect because these ports should be added to the outbound rules, not the inbound rules, of the NACL. Ephemeral ports can be added to both the inbound and outbound rules of your NACL; however, in this scenario, the clients are accessing the web portal from the public Internet. Only if the clients or the requests originate from the same VPC would you have to add the ephemeral ports to the inbound rules.

The option that says: Allow ephemeral ports in the Security Group by adding a new rule to allow outbound traffic on ports 1024 – 65535 is incorrect as you don’t have to manually allow the ephemeral ports to your security groups. This should be done in your Network ACL.

References: 
https://aws.amazon.com/premiumsupport/knowledge-center/resolve-connection-sg-acl-inbound/
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports
https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/

Security Group vs. NACL:
https://tutorialsdojo.com/security-group-vs-nacl/

Question 8

A leading national bank migrated its on-premises infrastructure to AWS. The SysOps Administrator noticed that the cache hit ratio of the CloudFront web distribution is less than 15%.

Which combination of actions should he do to increase the cache hit ratio for the distribution? (Select TWO.)

  1. In the Cache Behavior settings of your distribution, configure to forward only the query string parameters for which your origin will return unique objects.
  2. Set the Viewer Protocol Policy of your web distribution to only use HTTPS to serve media content.
  3. Use Signed URLs to your CloudFront web distribution.
  4. Always add the Accept-Encoding header to compress all the content for each and every request.
  5. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your TTL.

Correct Answer: 1,5

One of the purposes of using CloudFront is to reduce the number of requests that your origin server must respond to directly. This reduces the load on your origin server and also reduces latency because more objects are served from CloudFront edge locations, which are closer to your users.

The more requests that CloudFront is able to serve from edge caches as a proportion of all requests (that is, the greater the cache hit ratio), the fewer viewer requests that CloudFront needs to forward to your origin to get the latest version or a unique version of an object. You can view the percentage of viewer requests that are hits, misses, and errors in the CloudFront console.

You can improve performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content; that is, by improving the cache hit ratio for your distribution.
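The ratio itself is a simple proportion of requests served from edge caches. The helper and the request counts below are illustrative, not CloudFront console output.

```python
def cache_hit_ratio(hits, misses, errors=0):
    """Proportion of viewer requests served from CloudFront edge caches."""
    total = hits + misses + errors
    return hits / total if total else 0.0

# Hypothetical month of traffic matching the scenario's symptom:
# 120 cache hits out of 800 viewer requests.
ratio = cache_hit_ratio(hits=120, misses=680)
print(f"{ratio:.0%}")  # 15%
```

Raising the TTL and trimming the forwarded query strings/headers increases `hits` relative to `misses`, which is exactly how the two correct options lift this ratio.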

This can be done by doing any of the following:

1. Increase the TTL of your objects

2. Configure the distribution to forward only the required query string parameters, cookies, or request headers for which your origin will return unique objects.

3. Remove Accept-Encoding header when compression is not needed

4. Serve media content by using HTTP

Hence, the correct answers are: 

– In the Cache Behavior settings of your distribution, configure to forward only the query string parameters for which your origin will return unique objects.

– Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your TTL.

The option that says: Set the Viewer Protocol Policy of your web distribution to only use HTTPS to serve media content is incorrect because it is actually recommended to use HTTP instead.

The option that says: Use Signed URLs to your CloudFront web distribution is incorrect because this is primarily used to secure your content and not for improving the cache hit ratio.

The option that says: Always add the Accept-Encoding header to compress all the content for each and every request is incorrect because you should actually remove the compression if it is not needed in order to improve your cache hit ratio.

References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cache-hit-ratio.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cache-hit-ratio-explained.html

Check out this Amazon CloudFront Cheat Sheet:
https://tutorialsdojo.com/amazon-cloudfront/

Question 9

A company has several applications and workloads running on AWS that are managed by various teams. The SysOps Administrator has been instructed to configure alerts to notify the teams in the event that the resource utilization exceeded the defined threshold.

Which of the following is the MOST suitable AWS service that the Administrator should use?

  1. AWS Trusted Advisor
  2. AWS Budgets
  3. Amazon CloudWatch Billing Alarm
  4. AWS Cost Explorer

Correct Answer: 2

AWS Budgets give you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.

You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon OpenSearch Service reservations.

The AWS Budgets Dashboard is your hub for creating, tracking and inspecting your budgets. From the AWS Budgets Dashboard, you can create, edit, and manage your budgets, as well as view the status of each of your budgets. You can also view additional details about your budgets, such as a high-level variance analysis and a budget criteria summary.

Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic.
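A usage budget with an alert can be sketched in the shape of the AWS Budgets CreateBudget API (the `Budget` and `Notification` structures used by boto3's `budgets` client). The budget name, amount, and 80% threshold are illustrative assumptions.

```python
# Budget structure for a monthly usage budget (hypothetical values).
budget = {
    "BudgetName": "ec2-usage-alert",
    "BudgetLimit": {"Amount": "1000", "Unit": "Hrs"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "USAGE",
}

# Notification fired when ACTUAL utilization exceeds 80% of the budgeted amount;
# it would be paired with email or SNS subscribers when the budget is created.
notification = {
    "NotificationType": "ACTUAL",
    "ComparisonOperator": "GREATER_THAN",
    "Threshold": 80.0,
    "ThresholdType": "PERCENTAGE",
}

print(budget["BudgetType"], notification["Threshold"])
```

With boto3 this pair would be passed to `budgets_client.create_budget(...)` as the `Budget` and part of `NotificationsWithSubscribers`, giving each team its own threshold-based alert.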

Hence, the correct answer is AWS Budgets.

Amazon CloudWatch Billing Alarm is incorrect. Although you can use this to monitor your estimated AWS charges, this service still does not allow you to set coverage targets and receive alerts when your utilization drops below the threshold you define.

AWS Cost Explorer is incorrect because it only lets you visualize, understand, and manage your AWS costs and usage over time. You cannot define any threshold using this service, unlike AWS Budgets.

AWS Trusted Advisor is incorrect because this is just an online tool that provides you real-time guidance to help you provision your resources following AWS best practices.

References:
https://aws.amazon.com/aws-cost-management/aws-budgets/
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-what-is.html

Check out this AWS Billing and Cost Management Cheat Sheet:
https://tutorialsdojo.com/aws-billing-and-cost-management/

Question 10

A government organization has implemented a file gateway to keep copies of the home drives of their employees in a separate S3 bucket. As the SysOps Administrator, you noticed that most files are rarely accessed after 60 days but it is required that the files should still be available immediately in the event of a surprise audit.

In this scenario, what can you do to reduce the storage costs while continuing to provide access to the files for the employees?

  1. Enable versioning on the S3 bucket.
  2. Set up a lifecycle policy that moves the employee files older than 60 days to Infrequent Access storage class.
  3. Create a lifecycle policy to move files older than 60 days to Glacier Deep Archive storage class.
  4. Set up an S3 bucket policy to limit user access to only newer files that are created in less than 60 days.

Correct Answer: 2

You can add rules in a lifecycle configuration to tell Amazon S3 to transition objects to another Amazon S3 storage class. For example:

– When you know objects are infrequently accessed, you might transition them to the STANDARD_IA storage class.

– You might want to archive objects that you don’t need to access in real-time to the GLACIER storage class.

In a lifecycle configuration, you can define rules to transition objects from one storage class to another to save on storage costs. When you don’t know the access patterns of your objects or your access patterns are changing over time, you can transition the objects to the INTELLIGENT_TIERING storage class for automatic cost savings.
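A transition rule like the one the correct answer describes can be expressed as the lifecycle configuration document Amazon S3 accepts (applied, for example, with the CLI's `put-bucket-lifecycle-configuration` or the equivalent boto3 call). The rule ID below is an illustrative placeholder:

```python
import json

# Lifecycle rule: transition all objects to STANDARD_IA (Infrequent Access)
# 60 days after creation, matching the scenario's access pattern.
lifecycle_config = {
    "Rules": [
        {
            "ID": "move-to-ia-after-60-days",  # placeholder rule ID
            "Filter": {"Prefix": ""},          # empty prefix = apply to all objects
            "Status": "Enabled",
            "Transitions": [
                {"Days": 60, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}

print(json.dumps(lifecycle_config, indent=2))
```

With boto3, the configuration would be applied via `boto3.client('s3').put_bucket_lifecycle_configuration(Bucket='employee-home-drives', LifecycleConfiguration=lifecycle_config)`, where the bucket name is a placeholder.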

Hence, the correct answer is: Set up a lifecycle policy that moves the employee files older than 60 days to Infrequent Access storage class.

The option that says: Enable versioning on the S3 bucket is incorrect because versioning keeps additional copies of objects, so it will not lower your S3 storage costs and can even increase them.

The option that says: Create a lifecycle policy to move files older than 60 days to Glacier Deep Archive storage class is incorrect. Although Glacier Deep Archive is cheaper than S3 Standard-IA, retrievals from this storage class take hours to complete, so the files would not be available immediately in the event of a surprise audit.

The option that says: Set up an S3 bucket policy to limit user access to only newer files that are created in less than 60 days is incorrect because an S3 bucket policy is used for access management, not storage lifecycle management, and restricting access would not reduce storage costs.

References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html
https://aws.amazon.com/s3/storage-classes/

Check out this Amazon S3 Cheat Sheet:
https://tutorialsdojo.com/amazon-s3/

For more practice questions like these and to further prepare you for the actual AWS Certified SysOps Administrator Associate SOA-C02 exam, we recommend that you take our top-notch AWS Certified SysOps Administrator Associate Practice Exams, which have been regarded as the best in the market. 

Also check out our AWS Certified SysOps Administrator Associate SOA-C02 Exam Study Guide here.


Written by: Jon Bonso

Jon Bonso is the co-founder of Tutorials Dojo, an EdTech startup and an AWS Digital Training Partner that provides high-quality educational materials in the cloud computing space. He graduated from Mapúa Institute of Technology in 2007 with a bachelor's degree in Information Technology. Jon holds 10 AWS Certifications and is also an active AWS Community Builder since 2020.
