

AWS Certified Developer Associate DVA-C02 Sample Exam Questions


Last updated on February 16, 2024

Here are 10 AWS Certified Developer Associate DVA-C02 practice exam questions to help you gauge your readiness for the actual exam.

Question 1

A programmer is developing a Node.js application that will be run on a Linux server in their on-premises data center. The application will access various AWS services such as S3, DynamoDB, and ElastiCache using the AWS SDK.

Which of the following is the MOST suitable way to provide access for the developer to accomplish the specified task?

  1. Create an IAM role with the appropriate permissions to access the required AWS services. Assign the role to the on-premises Linux server.
  2. Go to the AWS Console and create a new IAM user with programmatic access. In the application server, create the credentials file at ~/.aws/credentials with the access keys of the IAM user.
  3. Create an IAM role with the appropriate permissions to access the required AWS services and assign the role to the on-premises Linux server. Whenever the application needs to access any AWS services, request temporary security credentials from STS using the AssumeRole API.
  4. Go to the AWS Console and create a new IAM User with the appropriate permissions. In the application server, create the credentials file at ~/.aws/credentials with the username and the hashed password of the IAM User.

     

Correct Answer: 2

If you have resources that are running inside AWS that need programmatic access to various AWS services, then the best practice is always to use IAM roles. However, applications running outside of an AWS environment will need access keys for programmatic access to AWS resources. For example, monitoring tools running on-premises and third-party automation tools will need access keys.

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).

In order to use the AWS SDK in your application, you first have to create a credentials file at ~/.aws/credentials on Linux or macOS, or at C:\Users\USER_NAME\.aws\credentials on Windows, and then save your access keys in it.
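For example, a minimal ~/.aws/credentials file looks like the following (the key values are the standard placeholder examples from the AWS documentation, not real credentials):

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFXEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

The AWS SDKs pick up the `default` profile automatically; no code changes are needed.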

Hence, the correct answer is: Go to the AWS Console and create a new IAM user with programmatic access. In the application server, create the credentials file at ~/.aws/credentials with the access keys of the IAM user.

The option that says: Create an IAM role with the appropriate permissions to access the required AWS services and assign the role to the on-premises Linux server. Whenever the application needs to access any AWS services, request temporary security credentials from STS using the AssumeRole API is incorrect because the scenario says that the application is running on a Linux server on-premises and not on an EC2 instance. You cannot directly assign an IAM Role to a server in your on-premises data center. Although it may be possible to use a combination of STS and an IAM Role, the use of access keys for the AWS SDK is still preferred, especially if the application server is on-premises.

The option that says: Create an IAM role with the appropriate permissions to access the required AWS services. Assign the role to the on-premises Linux server is also incorrect because, just as mentioned above, the use of an IAM Role is not a suitable solution for this scenario.

The option that says: Go to the AWS Console and create a new IAM User with the appropriate permissions. In the application server, create the credentials file at ~/.aws/credentials with the username and the hashed password of the IAM User is incorrect. An IAM user’s username and password can only be used to interact with AWS via its Management Console. These credentials are intended for human use and are not suitable for use in automated systems, such as applications and scripts that make programmatic calls to AWS services.

References:
https://aws.amazon.com/developers/getting-started/nodejs/
https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys
https://aws.amazon.com/blogs/security/guidelines-for-protecting-your-aws-account-while-using-programmatic-access/

Check out this AWS IAM Cheat Sheet:

https://tutorialsdojo.com/aws-identity-and-access-management-iam/

Question 2

A developer is moving a legacy web application from their on-premises data center to AWS. The application is used simultaneously by thousands of users, and their session states are stored in memory. The on-premises server usually reaches 100% CPU Utilization every time there is a surge in the number of people accessing the application.

Which of the following is the best way to re-factor the performance and availability of the application’s session management once it is migrated to AWS?

  1. Use an ElastiCache for Redis cluster to store the user session state of the application.
  2. Store the user session state of the application using CloudFront.
  3. Use an ElastiCache for Memcached cluster to store the user session state of the application.
  4. Use Sticky Sessions with Local Session Caching.

Correct Answer: 1

Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Built on open-source Redis and compatible with the Redis APIs, ElastiCache for Redis works with your Redis clients and uses the open Redis data format to store your data. Your self-managed Redis applications can work seamlessly with ElastiCache for Redis without any code changes. ElastiCache for Redis combines the speed, simplicity, and versatility of open-source Redis with manageability, security, and scalability from Amazon to power the most demanding real-time applications in Gaming, Ad-Tech, E-Commerce, Healthcare, Financial Services, and IoT.

In order to address scalability and provide shared data storage for sessions that is accessible from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached. While key/value data stores are extremely fast and provide sub-millisecond latency, the added network latency and cost are the drawbacks. An added benefit of leveraging key/value stores is that they can also be utilized to cache any data, not just HTTP sessions, which can help boost the overall performance of your applications.

With Redis, you can keep your data on disk with a point in time snapshot which can be used for archiving or recovery. Redis also lets you create multiple replicas of a Redis primary. This allows you to scale database reads and to have highly available clusters. Hence, the correct answer for this scenario is to use an ElastiCache for Redis cluster to store the user session state of the application.
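As a sketch of what this looks like in application code, the snippet below stores session state in Redis with a TTL, using redis-py style setex/get calls. The "session:" key prefix and the 30-minute TTL are illustrative assumptions, and the client is passed in so any Redis-compatible client works:

```python
import json

SESSION_TTL_SECONDS = 1800  # assumed 30-minute session lifetime

def save_session(redis_client, session_id, state):
    # setex writes the value with a TTL, so abandoned sessions expire on their own
    redis_client.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(state))

def load_session(redis_client, session_id):
    # Returns the stored session dict, or None if it expired or never existed
    raw = redis_client.get(f"session:{session_id}")
    return json.loads(raw) if raw is not None else None
```

Because the state lives in the cluster rather than in any one web server's memory, any server behind the load balancer can serve any user's next request.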

The option that says: Store the user session state of the application using CloudFront is incorrect because CloudFront is not suitable for storing user session data. It is primarily used as a content delivery network.

The option that says: Use an ElastiCache for Memcached cluster to store the user session state of the application is incorrect. Although using ElastiCache is a viable answer, Memcached is not as highly available as Redis.

The option that says: Use Sticky Sessions with Local Session Caching is incorrect. Although this is also a viable solution, it doesn’t offer durability and high availability compared to a distributed session management solution. The best solution for this scenario is to use an ElastiCache for Redis cluster.

References:
https://aws.amazon.com/caching/session-management
https://aws.amazon.com/elasticache/redis-vs-memcached/
https://aws.amazon.com/elasticache/redis/

Check out this Amazon ElastiCache Cheat Sheet:
https://tutorialsdojo.com/amazon-elasticache/

Question 3

A developer is working with an AWS Serverless Application Model (AWS SAM) application composed of several AWS Lambda functions. The developer runs the application locally on his laptop using sam local commands. While testing, one of the functions returns Access denied errors. Upon investigation, the developer discovered that the Lambda function is using the AWS SDK to make API calls within a sandbox AWS account.

Which combination of steps must the developer do to resolve the issue? (Select TWO)

  1. Use the aws configure command with the --profile parameter to add a named profile with the sandbox AWS account’s credentials.

  2. Create an AWS SAM CLI configuration file at the root of the SAM project folder. Add the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to it.

  3. Add the AWS credentials of the sandbox AWS account to the Globals section of the template.yaml file and reference them in the AWS::Serverless::Function properties section of the Lambda function.

  4. Run the function using sam local invoke with the --profile parameter.

  5. Run the function using sam local invoke with the --parameter-overrides parameter.

Correct Answer: 1,4

AWS Lambda functions have an associated execution role that provides permissions to interact with other AWS services. However, when you run AWS Lambda functions locally using the SAM CLI, you’re simulating the execution environment of the Lambda function but not replicating the AWS execution context, including the IAM execution role. This means that the function won’t automatically assume any IAM execution role and instead will rely on the credentials stored in the ~/.aws/credentials file.

When testing locally with AWS SAM, you can specify a named profile from your AWS CLI configuration using the --profile parameter with the sam local invoke command. This will instruct the SAM CLI to use the credentials from the specified profile when invoking the Lambda function. You can run the aws configure command with the --profile option to set the credentials for a named profile.

In the scenario, the developer must first set up the sandbox AWS account’s credentials using aws configure --profile sandbox. This creates a named profile ‘sandbox’ (note that you can use any name for the profile). For local testing with the SAM CLI, the developer can then specify this profile using the command sam local invoke --profile sandbox. This ensures that the locally executed Lambda function utilizes the correct credentials to access resources in the sandbox AWS account.
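In command form, the two steps look like this (the profile name `sandbox` is arbitrary, and `MyFunction` stands in for the logical ID of the Lambda function in the SAM template):

```shell
# Store the sandbox account's access keys in a named profile
aws configure --profile sandbox

# Invoke the function locally using that profile's credentials
sam local invoke MyFunction --profile sandbox
```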

Hence, the correct answers are:

– Use the aws configure command with the --profile parameter to add a named profile with the sandbox AWS account’s credentials.

– Run the function using sam local invoke with the --profile parameter.

The option that says: Create an AWS SAM CLI configuration file at the root of the SAM project folder. Add the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to it is incorrect. The SAM CLI relies on the AWS credentials stored in the ~/.aws/credentials file, which can be set through the aws configure command. While it’s technically possible to place application credentials in a configuration file, SAM CLI doesn’t support sourcing AWS credentials from it for authentication.

The option that says: Add the AWS credentials of the sandbox AWS account to the Globals section of the template.yaml file and reference them in the AWS::Serverless::Function properties section of the Lambda function is incorrect. The Globals section in a SAM template.yaml is primarily used for setting properties that apply to all AWS resources of a certain type. It’s not a storage location for AWS credentials. Moreover, the AWS::Serverless::Function resource property does not have fields for AWS credentials. Even if you were to add the credentials as environment variables, it still wouldn’t grant the locally running function the permissions associated with those credentials.

The option that says: Run the function using sam local invoke with the --parameter-overrides parameter is incorrect. The --parameter-overrides option is typically used to change template parameters during local testing. For instance, if you had a parameter in your SAM template for setting an environment variable, the --parameter-overrides option would allow you to test with different values for those parameters. Still, it does not interact with nor modify AWS credentials.

References:

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-invoke.html
https://aws.amazon.com/blogs/aws/aws-serverless-application-model-sam-command-line-interface-build-test-and-debug-serverless-apps-locally/
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html

Check out this AWS SAM Cheat Sheet:
https://tutorialsdojo.com/aws-serverless-application-model-sam/

Question 4

A web application uploads large files, over 4 GB in size, to an S3 bucket named data.tutorialsdojo.com every 30 minutes. Which of the following should you do to minimize the time required to upload each file?

  1. Use the Multipart upload API.
  2. Enable Transfer Acceleration in the bucket.
  3. Use the BatchWriteItem API.
  4. Use the PutItem API.

Correct Answer: 1

Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.

Using multipart upload provides the following advantages:

– Improved throughput – You can upload parts in parallel to improve throughput.

– Quick recovery from any network issues – Smaller part size minimizes the impact of restarting a failed upload due to a network error.

– Pause and resume object uploads – You can upload object parts over time. Once you initiate a multipart upload, there is no expiry; you must explicitly complete or abort the multipart upload.

– Begin an upload before you know the final object size – You can upload an object as you are creating it.

Hence, using the Multipart Upload API is the correct answer in this scenario.
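To make the flow concrete, here is a minimal sketch of the low-level multipart calls (create, upload parts, complete). In practice the high-level transfer utilities in the AWS SDKs do this for you; the client is injected here so the logic can be followed without real credentials:

```python
def multipart_upload(s3_client, bucket, key, data, part_size=8 * 1024 * 1024):
    """Upload `data` (bytes) as a multipart upload.

    Except for the last part, each part must be at least 5 MiB.
    """
    upload = s3_client.create_multipart_upload(Bucket=bucket, Key=key)
    parts = []
    for offset in range(0, len(data), part_size):
        part_number = offset // part_size + 1  # part numbers start at 1
        response = s3_client.upload_part(
            Bucket=bucket,
            Key=key,
            UploadId=upload["UploadId"],
            PartNumber=part_number,
            Body=data[offset:offset + part_size],
        )
        parts.append({"ETag": response["ETag"], "PartNumber": part_number})
    # S3 assembles the parts into a single object only after this call
    s3_client.complete_multipart_upload(
        Bucket=bucket,
        Key=key,
        UploadId=upload["UploadId"],
        MultipartUpload={"Parts": parts},
    )
    return parts
```

The loop is sequential for clarity; uploading the parts from a thread pool is what actually delivers the throughput gain.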

The options: Use the BatchWriteItem API and Use the PutItem API are incorrect because these are DynamoDB APIs, not S3 APIs.

The option that says: Enable Transfer Acceleration in the bucket is incorrect because although Transfer Acceleration will significantly reduce the upload time to S3, the bucket in the scenario won’t be able to turn on this feature. Take note that the name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods (“.”).

Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html

Check out this Amazon S3 Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-s3/

S3 Transfer Acceleration vs Direct Connect vs VPN vs Snowball vs Snowmobile:
https://tutorialsdojo.com/s3-transfer-acceleration-vs-direct-connect-vs-vpn-vs-snowball-vs-snowmobile/

 

Question 5

A web application is using an ElastiCache cluster that is suffering from cache churn. A developer needs to reconfigure the application so that data are retrieved from the database only in the event that there is a cache miss.

Which pseudocode illustrates the caching strategy that the developer needs to implement?

Option 1

<pre>
get_item(item_id):
    item_value = cache.get(item_id)
    if item_value is None:
        item_value = database.query("SELECT * FROM Items WHERE id = ?", item_id)
        cache.add(item_id, item_value)
    return item_value
</pre>

Option 2

<pre>
get_item(item_id):
    item_value = database.query("SELECT * FROM Items WHERE id = ?", item_id)
    if item_value is None:
        item_value = cache.set(item_id, item_value)
        cache.add(item_id, item_value)
    return item_value
</pre>

Option 3

<pre>
get_item(item_id):
    item_value = cache.get(item_id)
    if item_value is not None:
        item_value = database.query("SELECT * FROM Items WHERE id = ?", item_id)
        cache.add(item_id, item_value)
        return item_value
    else:
        return item_value
</pre>

Option 4

<pre>
get_item(item_id, item_value):
    item_value = database.query("UPDATE Items WHERE id = ?", item_id, item_value)
    cache.add(item_id, item_value)
    return 'ok'
</pre>

Correct Answer: 1

Lazy Loading is a caching strategy that loads data into the cache only when necessary. Here is how it works:

– If the data exists in the cache and is current, ElastiCache returns the data to your application. This event is also called a "cache hit".

– If there is a "cache miss" – in other words, the data does not exist in the cache or the data in the cache has expired – your application requests the data from your data store, which returns the data to your application. Your application then writes the data received from the store to the cache so it can be retrieved more quickly the next time it is requested.

In the scenario, to implement lazy loading, you must first check if the item is already in the cache using the cache.get(item_id) method. If the item is not in the cache (i.e., item_value is None), the code then queries the database for the item and stores it in the cache using the cache.add(item_id, item_value) method so that it can be retrieved faster next time. This way, the application does not query the database every time the item is needed and instead uses the cached version of the item if it's available.

Hence, the correct answer is the option that says: 

get_item(item_id):
    item_value = cache.get(item_id)
    if item_value is None:
        item_value = database.query("SELECT * FROM Items WHERE id = ?", item_id)
        cache.add(item_id, item_value)
    return item_value
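A runnable version of this pattern, with a plain dict standing in for the ElastiCache cluster and a stub lookup standing in for the database query, might look like this:

```python
cache = {}  # stands in for the ElastiCache cluster

def query_database(item_id):
    # Stand-in for database.query("SELECT * FROM Items WHERE id = ?", item_id)
    return {"id": item_id, "name": f"item-{item_id}"}

def get_item(item_id):
    item_value = cache.get(item_id)   # 1. check the cache first
    if item_value is None:            # 2. cache miss: fall back to the database
        item_value = query_database(item_id)
        cache[item_id] = item_value   # 3. populate the cache for next time
    return item_value
```

The first call for a given id hits the database and fills the cache; every subsequent call is served from the cache alone.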

The option that says:

get_item(item_id):
    item_value = database.query("SELECT * FROM Items WHERE id = ?", item_id)
    if item_value is None:
        item_value = cache.set(item_id, item_value)
        cache.add(item_id, item_value)
    return item_value

is an incorrect implementation of lazy loading because it queries the database first and only then checks whether the result is empty. The cache is never consulted before the database, so every request hits the database.

The option that says:

get_item(item_id):
    item_value = cache.get(item_id)
    if item_value is not None:
        item_value = database.query("SELECT * FROM Items WHERE id = ?", item_id)
        cache.add(item_id, item_value)
        return item_value
    else:
        return item_value

does not implement lazy loading because the condition is inverted: the database is queried only when the item is already in the cache, while a cache miss simply returns None. The application would hit the database precisely when it doesn't need to, making it slow and inefficient.

The option that says:

get_item(item_id, item_value):
    item_value = database.query("UPDATE Items WHERE id = ?", item_id, item_value)
    cache.add(item_id, item_value)
    return 'ok'

is incorrect because this is an implementation of a write-through caching strategy where data is written to both the cache and the primary storage (such as a database).

References:
https://aws.amazon.com/caching/best-practices/
https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

Check out this Amazon ElastiCache Cheat Sheet:
 https://tutorialsdojo.com/amazon-elasticache/

Question 6

A Software Engineer is developing an application that will be hosted on an EC2 instance and read messages from a standard SQS queue. The average time that it takes for the producers to send a new message to the queue is 10 seconds.

Which of the following is the MOST efficient way for the application to query the new messages from the queue?

  1. Configure the SQS queue to use Long Polling.
  2. Configure each message in the SQS queue to have a custom visibility timeout of 10 seconds.
  3. Configure the SQS queue to use Short Polling.
  4. Configure an SQS Delay Queue with a value of 10 seconds.

Correct Answer: 1

Amazon SQS long polling is a way to retrieve messages from your Amazon SQS queues. While the regular short polling returns immediately, even if the message queue being polled is empty, long polling doesn’t return a response until a message arrives in the message queue or the long poll times out.

Long polling helps reduce the cost of using Amazon SQS by reducing the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren’t included in a response). This type of polling is suitable if the new messages that are being added to the SQS queue arrive less frequently.

You can configure long polling to your SQS queue by simply setting the “Receive Message Wait Time” field to a value greater than 0. Hence, the correct answer for this scenario is to configure the SQS queue to use Long Polling.
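On the consumer side, long polling is simply a matter of passing a nonzero WaitTimeSeconds to ReceiveMessage; a minimal sketch follows, with the client injected so the call shape can be shown without live credentials:

```python
def receive_with_long_polling(sqs_client, queue_url, wait_seconds=20):
    # With WaitTimeSeconds > 0 the call blocks until a message arrives
    # or the wait elapses, instead of returning an empty response immediately.
    response = sqs_client.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=wait_seconds,  # 20 is the maximum SQS allows
    )
    return response.get("Messages", [])
```

Setting the queue-level "Receive Message Wait Time" attribute has the same effect for every consumer that doesn't override it per request.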

The option that says: Configure each message in the SQS queue to have a custom visibility timeout of 10 seconds is incorrect because a visibility timeout is primarily used to prevent other consumers from processing the message again for a period of time. This is normally used if your application takes a long time to process and delete a message from the SQS queue.

The option that says: Configure the SQS queue to use Short Polling is incorrect because it is inefficient to poll the queue continuously when the average time it takes for the producers to send a new message to the queue is 10 seconds. It is better to use Long Polling, which can wait up to 20 seconds for a message to arrive, considering that new messages are not being added every second.

The option that says: Configure an SQS Delay Queue with a value of 10 seconds is incorrect because this is primarily configured if you want to postpone the delivery of new messages to the SQS queue for a number of seconds. Having this SQS configuration which sets the new messages to remain invisible to the consumers for a duration of the delay period is not helpful in the given scenario. It is still better to use Long Polling instead of setting up a delay queue.

References:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
https://aws.amazon.com/sqs/faqs/

Check out this Amazon SQS Cheat Sheet:
https://tutorialsdojo.com/amazon-sqs/

Amazon Simple Workflow (SWF) vs. AWS Step Functions vs. Amazon SQS:
https://tutorialsdojo.com/amazon-simple-workflow-swf-vs-aws-step-functions-vs-amazon-sqs/

Comparison of AWS Services Cheat Sheets:
https://tutorialsdojo.com/comparison-of-aws-services/

Question 7

A developer has deployed a Lambda function that runs in DEV, UAT, and PROD environments. The function uses different parameters that vary based on the environment it is running in. The parameters are currently hardcoded in the function.

Which action should the developer do to reference the appropriate parameters without modifying the code every time the environment changes?

  1. Create a stage variable called ENV and invoke the Lambda function by its alias name.
  2. Create individual Lambda Layers for each environment
  3. Publish three versions of the Lambda function. Assign the aliases DEV, UAT, and PROD to each version.
  4. Use environment variables to set the parameters per environment.

Correct Answer: 4

Environment variables for Lambda functions enable you to dynamically pass settings to your function code and libraries without making changes to your code. Environment variables are key-value pairs that you create and modify as part of your function configuration using either the AWS Lambda Console, the AWS Lambda CLI, or the AWS Lambda SDK. AWS Lambda then makes these key-value pairs available to your Lambda function code using standard APIs supported by the language, like process.env for Node.js functions.

You can use environment variables to help libraries know what directory to install files in, where to store outputs, store connection and logging settings, and more. By separating these settings from the application logic, you don’t need to update your function code when changing the function behavior based on different settings.
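For instance, the function can read its settings at invocation time. The variable names DB_HOST and LOG_LEVEL below are hypothetical and would be set per environment in each function's configuration:

```python
import os

def lambda_handler(event, context):
    # These values come from the function's configuration, not from code,
    # so the same deployment package works in DEV, UAT, and PROD.
    db_host = os.environ.get("DB_HOST", "localhost")
    log_level = os.environ.get("LOG_LEVEL", "INFO")
    return {"db_host": db_host, "log_level": log_level}
```

Switching environments then only requires changing the function configuration, with no code change or redeploy of the package itself.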

Hence, the correct answer is: Use environment variables to set the parameters per environment.


The option that says: Create a stage variable called ENV and invoke the Lambda function by its alias name is incorrect because the stage variable is a feature of API Gateway, not AWS Lambda.

The option that says: Create individual Lambda Layers for each environment is incorrect because this feature is only used to pull in additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies.

The option that says: Publish three versions of the Lambda function. Assign the aliases DEV, UAT, and PROD to each version is incorrect because an alias is simply a pointer to a specific Lambda function version. Aliases let callers invoke the function without knowing which specific version the alias points to, but they do not externalize the parameters hardcoded in the function.

References:
https://docs.aws.amazon.com/lambda/latest/dg/env_variables.html
https://docs.aws.amazon.com/lambda/latest/dg/lambda-configuration.html

Check out this AWS Lambda Cheat Sheet:
https://tutorialsdojo.com/aws-lambda/

Question 8

A developer has recently completed a new version of a serverless application that is ready to be deployed using AWS SAM. There is a requirement that the traffic should shift from the previous Lambda function to the new version in the shortest time possible, but the developer still doesn’t want to shift all traffic at once immediately.

Which deployment configuration is the MOST suitable one to use in this scenario?

  1. CodeDeployDefault.HalfAtATime
  2. CodeDeployDefault.LambdaLinear10PercentEvery1Minute
  3. CodeDeployDefault.LambdaLinear10PercentEvery2Minutes
  4. CodeDeployDefault.LambdaCanary10Percent5Minutes

Correct Answer: 4

If you use AWS SAM to create your serverless application, it comes built-in with CodeDeploy to help ensure safe Lambda deployments. There are various deployment preference types that you can choose from.

For example:

If you choose Canary10Percent10Minutes then 10 percent of your customer traffic is immediately shifted to your new version. After 10 minutes, all traffic is shifted to the new version.

However, if your pre-hook/post-hook tests fail, or if a CloudWatch alarm is triggered, CodeDeploy rolls back your deployment. The following table outlines other traffic-shifting options that are available:

– Canary: Traffic is shifted in two increments. You can choose from predefined canary options. The options specify the percentage of traffic that’s shifted to your updated Lambda function version in the first increment, and the interval, in minutes, before the remaining traffic is shifted in the second increment.

– Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic that’s shifted in each increment and the number of minutes between each increment.

– All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version at once.

 

Hence, the CodeDeployDefault.LambdaCanary10Percent5Minutes option is correct because 10 percent of your customer traffic is immediately shifted to your new version. After 5 minutes, all traffic is shifted to the new version. This means that the entire deployment will take only 5 minutes.
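In a SAM template, this preference is declared on the function itself; a minimal sketch (the function name, handler, and runtime below are placeholders):

```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      AutoPublishAlias: live   # required for gradual traffic shifting
      DeploymentPreference:
        Type: Canary10Percent5Minutes
```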

 

CodeDeployDefault.HalfAtATime is incorrect because this is only applicable for EC2/On-premises compute platform and not for Lambda.

CodeDeployDefault.LambdaLinear10PercentEvery1Minute is incorrect because it will add 10 percent of the traffic linearly to the new version every minute. Hence, all traffic will be shifted to the new version only after 10 minutes.

CodeDeployDefault.LambdaLinear10PercentEvery2Minutes is incorrect because it will add 10 percent of the traffic linearly to the new version every 2 minutes. Hence, all traffic will be shifted to the new version only after 20 minutes.

References:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html

 

Question 9

A serverless application is composed of several Lambda functions that read data from RDS. These functions must share the same connection string, which should be encrypted to improve data security.

Which of the following is the MOST secure way to meet the above requirement?

  1. Create a Secure String Parameter using the AWS Systems Manager Parameter Store.
  2. Use AWS Lambda environment variables encrypted with KMS which will be shared by the Lambda functions.
  3. Create an IAM Execution Role that has access to RDS and attach it to the Lambda functions.
  4. Use AWS Lambda environment variables encrypted with CloudHSM.

Correct Answer: 1

AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter.

Parameter Store offers the following benefits and features:

– Use a secure, scalable, hosted secrets management service (No servers to manage).

– Improve your security posture by separating your data from your code.

– Store configuration data and secure strings in hierarchies and track versions.

– Control and audit access at granular levels.

– Configure change notifications and trigger automated actions.

– Tag parameters individually, and then secure access from different levels, including operational, parameter, Amazon EC2 tag, or path levels.

– Reference AWS Secrets Manager secrets by using Parameter Store parameters.

 

Hence, creating a Secure String Parameter using the AWS Systems Manager Parameter Store is the correct solution for this scenario.
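As a sketch, retrieving the shared connection string from each Lambda function is a single GetParameter call with decryption enabled. The parameter name below is hypothetical, and the client is injected so the call shape can be shown without live credentials:

```python
def fetch_connection_string(ssm_client, name="/myapp/db/connection-string"):
    # WithDecryption=True makes SSM decrypt the SecureString via KMS
    # before returning it, so the function receives plain text.
    response = ssm_client.get_parameter(Name=name, WithDecryption=True)
    return response["Parameter"]["Value"]

# Inside each Lambda function you would typically create the client once,
# outside the handler, e.g.:
#   import boto3
#   ssm = boto3.client("ssm")
#   conn_str = fetch_connection_string(ssm)
```

Because every function reads the same parameter, rotating the connection string means updating one parameter rather than redeploying each function.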

The option that says: Use AWS Lambda environment variables encrypted with KMS which will be shared by the Lambda functions is incorrect. Even though the credentials will be encrypted, environment variables are scoped to an individual Lambda function and cannot be shared across functions.

The option that says: Create an IAM Execution Role that has access to RDS and attach it to the Lambda functions is incorrect because this solution will not encrypt the database credentials for RDS.

The option that says: Use AWS Lambda environment variables encrypted with CloudHSM is incorrect because Lambda primarily uses KMS for encryption and not CloudHSM.

References:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html 
https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/

Check out this AWS Systems Manager Cheat Sheet:
https://tutorialsdojo.com/aws-systems-manager/

Question 10

To improve their information security management system (ISMS), a company recently released a new policy that requires all database credentials to be encrypted and automatically rotated to avoid unauthorized access.

Which of the following is the MOST appropriate solution to secure the credentials?

  1. Create a parameter in the Systems Manager Parameter Store using the PutParameter API with a type of SecureString.
  2. Enable IAM DB authentication which rotates the credentials by default.
  3. Create an IAM Role which has full access to the database. Attach the role to the services which require access.
  4. Create a secret in AWS Secrets Manager and enable automatic rotation of the database credentials.

Correct Answer: 4

AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface (CLI), or the Secrets Manager API and SDKs.

In the past, when you created a custom application that retrieves information from a database, you typically had to embed the credentials (the secret) for accessing the database directly in the application. When it came time to rotate the credentials, you had to do much more than just create new credentials. You had to invest time to update the application to use the new credentials. Then you had to distribute the updated application. If you had multiple applications that shared credentials and you missed updating one of them, the application would break. Because of this risk, many customers have chosen not to regularly rotate their credentials, which effectively substitutes one risk for another.

Secrets Manager enables you to replace hardcoded credentials in your code (including passwords), with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure that the secret can’t be compromised by someone examining your code, because the secret simply isn’t there. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a schedule that you specify. This enables you to replace long-term secrets with short-term ones, which helps to significantly reduce the risk of compromise.

Hence, creating a secret in AWS Secrets Manager and enabling automatic rotation of the database credentials is the most appropriate solution for this scenario.
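A minimal sketch of the runtime retrieval described above, using the AWS SDK for Python (the secret's JSON layout shown here follows the convention used for RDS secrets; the secret ID is a hypothetical example):

```python
import json


def parse_db_secret(secret_string):
    # Secrets Manager stores RDS credentials as a JSON document; the
    # rotation function rewrites this document on schedule, so callers
    # always receive the current username/password pair.
    secret = json.loads(secret_string)
    return secret["username"], secret["password"]


def get_db_credentials(secret_id):
    # Requires secretsmanager:GetSecretValue on the caller's identity.
    import boto3  # AWS SDK for Python

    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId=secret_id)
    return parse_db_secret(resp["SecretString"])
```

Because the application fetches the secret by name at runtime (e.g. `get_db_credentials("prod/myapp/db")`, a hypothetical secret ID), rotating the credentials never requires redeploying the code.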

The option that says: Create a parameter in the Systems Manager Parameter Store using the PutParameter API with a type of SecureString is incorrect because, by default, Systems Manager Parameter Store doesn’t rotate its parameters.

The option that says: Enable IAM DB authentication which rotates the credentials by default is incorrect because this feature only allows a service to connect to Amazon RDS using IAM credentials. It does not rotate the database credentials the way AWS Secrets Manager rotates its secrets.

The option that says: Create an IAM Role which has full access to the database. Attach the role to the services which require access is incorrect because although an IAM role is the preferred way to grant services access, this solution does not rotate the keys/credentials.

References:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/

Check out this AWS Secrets Manager Cheat Sheet:
https://tutorialsdojo.com/aws-secrets-manager/

AWS Security Services Overview – Secrets Manager, ACM, Macie:

For more practice questions like these and to further prepare you for the actual AWS Certified Developer Associate DVA-C02 exam, we recommend that you take our top-notch AWS Certified Developer Associate Practice Exams, which have been regarded as the best in the market. 

Also check out our AWS Certified Developer Associate DVA-C02 Exam Study Guide here.



Written by: Jon Bonso

Jon Bonso is the co-founder of Tutorials Dojo, an EdTech startup and an AWS Digital Training Partner that provides high-quality educational materials in the cloud computing space. He graduated from Mapúa Institute of Technology in 2007 with a bachelor's degree in Information Technology. Jon holds 10 AWS Certifications and is also an active AWS Community Builder since 2020.
