AWS OpsWorks

  • A configuration management service that helps you configure and operate applications in a cloud enterprise by using Puppet or Chef.
  • AWS OpsWorks Stacks and AWS OpsWorks for Chef Automate (1 and 2) let you use Chef cookbooks and solutions for configuration management, while OpsWorks for Puppet Enterprise lets you configure a Puppet Enterprise master server in AWS.
  • With AWS OpsWorks, you can automate how nodes are configured, deployed, and managed, whether they are Amazon EC2 instances or on-premises devices.


OpsWorks for Puppet Enterprise

  • Provides a fully-managed Puppet master, a suite of automation tools that enable you to inspect, deliver, operate, and future-proof your applications, and access to a user interface that lets you view information about your nodes and Puppet activities.
  • Not available in all AWS Regions.
  • Uses puppet-agent software.
  • Features
    • AWS manages the Puppet master server running on an EC2 instance. You retain control over the underlying resources running your Puppet master.
    • You can choose the weekly maintenance window during which OpsWorks for Puppet Enterprise will automatically install updates.
    • Monitors the health of your Puppet master during update windows and automatically rolls back changes if issues are detected.
    • You can configure automatic backups for your Puppet master and store them in an S3 bucket in your account (see the boto3 sketch after this list).
    • You can register new nodes to your Puppet master by inserting a user-data script, provided in the OpsWorks for Puppet Enterprise StarterKit, into your Auto Scaling groups.
    • Puppet uses SSL and a certification approval process when communicating to ensure that the Puppet master responds only to requests made by trusted users.
  • Deleting a server also deletes its events, logs, and any modules that were stored on the server. Supporting resources are also deleted, along with all automated backups.
  • Pricing
    • You are charged an hourly rate based on the number of nodes (servers running the Puppet agent) connected to your Puppet master and the time those nodes are running. You also pay for the underlying EC2 instance that runs your Puppet master.
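
The status, backup, and maintenance features listed above are exposed through the OpsWorks CM API. Below is a minimal boto3 sketch, assuming a hypothetical server named my-puppet-master, that lists the managed servers in an account and starts a manual backup on top of the automated ones.

```python
# A minimal boto3 sketch (not from the cheat sheet itself) that lists the
# OpsWorks CM servers in an account and starts a manual backup.
# "my-puppet-master" is a hypothetical server name used for illustration.
import boto3

opsworks_cm = boto3.client("opsworks-cm", region_name="us-east-1")

# List managed Puppet/Chef servers and show their status and endpoint.
for server in opsworks_cm.describe_servers()["Servers"]:
    print(server["ServerName"], server["Status"], server.get("Endpoint"))

# Start an on-demand backup in addition to the automated ones;
# backups are stored in an S3 bucket in your account.
backup = opsworks_cm.create_backup(
    ServerName="my-puppet-master",
    Description="pre-change manual backup",
)["Backup"]
print("Backup started:", backup["BackupId"], backup["Status"])
```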

OpsWorks for Chef Automate

  • Lets you create AWS-managed Chef servers that include Chef Automate premium features, and use the Chef DK and other Chef tooling to manage them.
  • AWS OpsWorks for Chef Automate supports Chef Automate 2.
  • Uses chef-client.
  • Features
    • You can use Chef to manage both Amazon EC2 instances and on-premises servers running Linux or Windows.
    • You receive the full Chef Automate platform which includes premium features that you can use with Chef server, like Chef Workflow, Chef Visibility, and Chef Compliance.
    • You provision a managed Chef server running on an EC2 instance in your account. You retain control over the underlying resources running your Chef server and you can use Knife to SSH into your Chef server instance at any time.
    • You can set a weekly maintenance window during which OpsWorks for Chef Automate will automatically install updates.
    • You can configure automatic backups for your Chef server, which are stored in an S3 bucket in your account.
    • You can register new nodes to your Chef server by inserting user-data code snippets provided by OpsWorks for Chef Automate into your Auto Scaling groups (see the sketch after this list).
    • Chef uses SSL to ensure that the Chef server responds only to requests made by trusted users. The Chef server and Chef client use bidirectional validation of identity when communicating with each other.
  • Deleting a server also deletes its events, logs, and any cookbooks that were stored on the server. Supporting resources are also deleted, along with all automated backups.
  • Pricing
    • You are charged an hourly rate based on the number of nodes connected to your Chef server and the time those nodes are running. You also pay for the underlying EC2 instance that runs your Chef server.
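
New nodes are normally registered by the user-data snippet from the Starter Kit, but the underlying operation is the OpsWorks CM associate_node API call. The boto3 sketch below illustrates that call; the server name, node name, and public-key file path are placeholders for this example.

```python
# A hedged boto3 sketch of node registration via the OpsWorks CM
# associate_node API -- the same operation the Starter Kit user-data
# snippet performs. Server name, node name, and key file are placeholders.
import boto3

opsworks_cm = boto3.client("opsworks-cm", region_name="us-east-1")

with open("node_public_key.pem") as key_file:
    node_public_key = key_file.read()

response = opsworks_cm.associate_node(
    ServerName="my-chef-automate-server",
    NodeName="app-server-01",
    EngineAttributes=[
        {"Name": "CHEF_ORGANIZATION", "Value": "default"},
        {"Name": "CHEF_NODE_PUBLIC_KEY", "Value": node_public_key},
    ],
)

# The call is asynchronous; poll the association status with the token.
status = opsworks_cm.describe_node_association_status(
    NodeAssociationStatusToken=response["NodeAssociationStatusToken"],
    ServerName="my-chef-automate-server",
)
print("Association status:", status["NodeAssociationStatus"])
```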

OpsWorks Stacks

  • Provides a simple and flexible way to create and manage stacks and applications.
  • A stack is a group of AWS resources that constitute a full-stack application. By default, you can create up to 40 stacks, and each stack can hold up to 40 layers, 40 instances, and 40 apps.
  • You can create stacks that help you manage cloud resources in specialized groups called layers. A layer represents a set of EC2 instances that serve a particular purpose, such as serving applications or hosting a database server. Layers depend on Chef recipes to handle tasks such as installing packages on instances, deploying apps, and running scripts (see the boto3 sketch at the end of this section).

  • OpsWorks Stacks does NOT require or create Chef servers.
  • Features
    • You can deploy EC2 instances from template configurations, including EBS volume creation.
    • You can configure the software on your instances on-demand or automatically based on lifecycle events, from bootstrapping the base OS image into a working server to modifying running services to reflect changes.
    • OpsWorks Stacks can auto heal your stack. If an instance fails in your stack, OpsWorks Stacks can replace it with a new one.
    • You can adapt the number of running instances to match your load, with time-based or load-based auto scaling.
    • You can use OpsWorks Stacks to configure and manage both Linux and Windows EC2 instances.
    • You can use AWS OpsWorks Stacks to deploy, manage, and scale your application on any Linux server such as EC2 instances or servers running in your own data center.
  • Instance Types
    • 24/7 instances are started manually and run until you stop them.
    • Time-based instances are run by OpsWorks Stacks on a specified daily and weekly schedule. They allow your stack to automatically adjust the number of instances to accommodate predictable usage patterns.
    • Load-based instances are automatically started and stopped by OpsWorks Stacks, based on specified load metrics, such as CPU utilization. They allow your stack to automatically adjust the number of instances to accommodate variations in incoming traffic.
      • Load-based instances are available only for Linux-based stacks.
  • Lifecycle Events
    • You can run recipes manually, but OpsWorks Stacks also lets you automate the process by supporting a set of five lifecycle events:
      • Setup occurs on a new instance after it successfully boots.
      • Configure occurs on all of the stack’s instances when an instance enters or leaves the online state.
      • Deploy occurs when you deploy an app.
      • Undeploy occurs when you delete an app.
      • Shutdown occurs when you stop an instance.
  • Monitoring
    • OpsWorks Stacks sends all of your resource metrics to CloudWatch.
    • Logs are available for each action performed on your instances.
    • CloudTrail logs all API calls made to OpsWorks.
  • Security
    • Grant IAM users access to specific stacks, making management of multi-user environments easier.
    • You can also set user-specific permissions for actions on each stack, allowing you to decide who can deploy new application versions or create new resources.
    • Each EC2 instance has one or more associated security groups that govern the instance’s network traffic. A security group has one or more rules, each of which specifies a particular category of allowed traffic.
  • Pricing
    • You pay for AWS resources created using OpsWorks Stacks in the same manner as if you created them manually.
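
As a rough end-to-end illustration of the stack, layer, and instance model described in this section, the boto3 sketch below creates a stack, adds a custom layer, and launches a 24/7 instance. The IAM role and instance profile ARNs are placeholders.

```python
# A rough boto3 sketch of the stack/layer/instance model. The IAM role
# and instance profile ARNs are placeholders; most optional settings are
# left at their defaults.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# A stack is the container for all resources of one application.
stack_id = opsworks.create_stack(
    Name="td-demo-stack",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
    ConfigurationManager={"Name": "Chef", "Version": "12"},
)["StackId"]

# A layer groups instances that serve the same purpose (e.g. app servers).
layer_id = opsworks.create_layer(
    StackId=stack_id,
    Type="custom",
    Name="App Servers",
    Shortname="app",
)["LayerId"]

# Add a 24/7 instance to the layer and start it; its Setup recipes run
# automatically once the instance boots.
instance_id = opsworks.create_instance(
    StackId=stack_id,
    LayerIds=[layer_id],
    InstanceType="t2.micro",
)["InstanceId"]
opsworks.start_instance(InstanceId=instance_id)
```

Deploying an app (the Deploy lifecycle event) is a separate create_deployment call that takes the app's ID and Command={"Name": "deploy"}; time-based and load-based scaling are configured with set_time_based_auto_scaling (per instance) and set_load_based_auto_scaling (per layer).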

Validate Your Knowledge

Question 1

A company manually runs its custom scripts when deploying a new version of its application that is hosted on a fleet of Amazon EC2 instances. This method is prone to human errors such as accidentally running the wrong script or deploying the wrong artifact. The company wants to automate its deployment procedure and leverage its team’s knowledge in Chef deployment tools. The new version of the application must first be deployed on a staging environment for verification and testing. After passing the tests, it can then be deployed to the production environment. If errors are encountered after the deployment, the company wants to roll back to the older application version within five minutes.

Which of the following options should the Solutions Architect implement to meet the requirements?

  1. Create an environment on AWS Elastic Beanstalk and deploy the application. For succeeding deployments, choose a “rolling update” strategy for fast deployment and easy rollback procedure in case of errors.
  2. Create a new pipeline on AWS CodePipeline and add a stage that will deploy the application on the AWS EC2 instances. Choose a “rolling update with an additional batch” deployment strategy, to allow a quick rollback to the older version in case of errors.
  3. Utilize AWS CodeBuild and add a job with the Chef recipes for the new application version. Use a “canary” deployment strategy to the new version on a new instance. Delete the canary instance if errors are found on the new version.
  4. Create a stack on AWS OpsWorks and deploy the application. Clone this stack and deploy the new application version on it. Use a “blue/green” deployment strategy to shift traffic to the newer stack.

Correct Answer: 4

AWS OpsWorks for Chef Automate provides a fully managed Chef Automate server and suite of automation tools that give you workflow automation for continuous deployment, automated testing for compliance and security, and a user interface that gives you visibility into your nodes and their status. The Chef Automate platform gives you full stack automation by handling operational tasks such as software and operating system configurations, continuous compliance, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands of nodes. OpsWorks for Chef Automate is completely compatible with tooling and cookbooks from the Chef community and automatically registers new nodes with your Chef server.

You can implement a blue/green deployment strategy with OpsWorks Stacks, which lets you run two copies of your application side by side in separate environments: blue and green.

A blue/green deployment strategy is one common way to efficiently use separate stacks to deploy an application update to production. In a nutshell, you clone your current OpsWorks stack, deploy the new version on the cloned stack, and then use Amazon Route 53 to point users to the new stack's URL. Here's the setup for a blue/green deployment on OpsWorks Stacks (a boto3 sketch follows the list):

  • The blue environment is the production stack, which hosts the current application.
  • The green environment is the staging stack, which hosts the updated application.
  • When you are ready to deploy the updated app to production, you switch user traffic from the blue stack to the green stack, which becomes the new production stack. You then retire the old blue stack.
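
To make the mechanics concrete, here is a rough boto3 sketch of the clone-and-switch flow, assuming a Route 53 weighted record set in front of the two environments; the stack ID, role ARN, hosted zone ID, domain name, and endpoints are all placeholders.

```python
# A rough boto3 sketch of the clone-and-switch flow, assuming a Route 53
# weighted record set sits in front of the two environments. Stack ID,
# role ARN, hosted zone ID, domain, and endpoints are all placeholders.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")
route53 = boto3.client("route53")

# 1. Clone the blue (production) stack to create the green (staging) stack.
green_stack_id = opsworks.clone_stack(
    SourceStackId="11111111-2222-3333-4444-555555555555",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    Name="td-app-green",
)["StackId"]

# ... deploy and verify the new application version on the green stack ...

# 2. Shift user traffic by re-weighting the DNS records (all at once here;
#    the weights could also be shifted gradually).
def set_weight(set_identifier, endpoint, weight):
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com.",
                    "Type": "CNAME",
                    "SetIdentifier": set_identifier,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": endpoint}],
                },
            }]
        },
    )

set_weight("blue", "blue-elb.example.com", 0)
set_weight("green", "green-elb.example.com", 100)
```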

Therefore, the correct answer is: Create a stack on AWS OpsWorks and deploy the application. Clone this stack and deploy the new application version on it. Use a “blue/green” deployment strategy to shift traffic to the newer stack.

The option that says: Create an environment on AWS Elastic Beanstalk and deploy the application. For succeeding deployments, choose a “rolling update” strategy for fast deployment and easy rollback procedure in case of errors is incorrect. Although this results in a fast deployment, the rollback procedure will take approximately the same amount of time as the deployment because you will have to trigger a re-deployment of the old version. This approach also doesn’t leverage the team’s knowledge of Chef deployment tools.

The option that says: Create a new pipeline on AWS CodePipeline and add a stage that will deploy the application on the AWS EC2 instances. Choose a “rolling update with an additional batch” deployment strategy, to allow a quick rollback to the older version in case of errors is incorrect. Although the pipeline can deploy the new version on the EC2 instances, rollback for this strategy takes time. You will have to re-deploy the older version if you want to do a rollback.

The option that says: Utilize AWS CodeBuild and add a job with the Chef recipes for the new application version. Use a “canary” deployment strategy to the new version on a new instance. Delete the canary instance if errors are found on the new version is incorrect. Although you can detect errors on a canary deployment, AWS CodeBuild cannot deploy the new application version on the EC2 instances. You have to use AWS CodeDeploy if you want to go this route. It’s also easier to set up Chef deployments using AWS OpsWorks rather than in AWS CodeBuild.

References:
https://aws.amazon.com/opsworks/chefautomate/
https://docs.aws.amazon.com/opsworks/latest/userguide/best-deploy.html#best-deploy-environments-blue-green
https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-deploying.html

Note: This question was extracted from our AWS Certified Solutions Architect Professional Practice Exams.

Question 2

You are working as a DevOps engineer for a leading telecommunications company that is planning to host a distributed system in AWS. The system must be hosted on multiple Linux-based application servers that must use the same configuration file, which tracks any changes in the cluster such as adding or removing a server. The configuration file is named tdojo-nodes.config and contains the list of private IP addresses of the servers in the cluster and other metadata.

Which of the following is the MOST automated way to meet the above requirements?

  1. Layer the application server nodes of the cluster using AWS OpsWorks Stacks and add a Chef recipe associated with the Configure Lifecycle Event which populates the tdojo-nodes.config file. Set up a configuration which runs each layer’s Configure recipes that updates the configuration file when a cluster change is detected.
  2. Use AWS OpsWorks Stacks which layers the application server nodes of the cluster using a Chef recipe associated with the Deploy Lifecycle Event. Set up a configuration which populates the tdojo-nodes.config file and runs each layer’s Deploy recipes that updates the configuration file when a cluster change is detected.
  3. Store the tdojo-nodes.config configuration file in CodeCommit and set up a CodeDeploy deployment group based on the tags of each application server nodes of the cluster. Integrate CodeDeploy with Amazon CloudWatch Events to automatically update the configuration file of each server if a new node is added or removed in the cluster then persist the changes in CodeCommit.
  4. Store the tdojo-nodes.config configuration file in Amazon S3 and develop a crontab script that will periodically poll any changes to the file and download it if there is any. Use a Node.js based process manager such as PM2 to restart the application servers in the cluster in the event that the configuration file is modified. Use CloudWatch Events to monitor the changes in your cluster and update the configuration file in S3 for any changes.

Correct Answer: 1

In AWS OpsWorks Stacks Lifecycle Events, each layer has a set of five lifecycle events, each of which has an associated set of recipes that are specific to the layer. When an event occurs on a layer’s instance, AWS OpsWorks Stacks automatically runs the appropriate set of recipes. To provide a custom response to these events, implement custom recipes and assign them to the appropriate events for each layer. AWS OpsWorks Stacks runs those recipes after the event’s built-in recipes.

When AWS OpsWorks Stacks runs a command on an instance—for example, a deploy command in response to a Deploy lifecycle event—it adds a set of attributes to the instance’s node object that describes the stack’s current configuration. For Deploy events and Execute Recipes stack commands, AWS OpsWorks Stacks installs deploy attributes, which provide some additional deployment information.

There are five lifecycle events, namely: Setup, Configure, Deploy, Undeploy, and Shutdown. The Configure event occurs on all of the stack’s instances when one of the following occurs:

– An instance enters or leaves the online state.

– You associate an Elastic IP address with an instance or disassociate one from an instance.

– You attach an Elastic Load Balancing load balancer to a layer, or detach one from a layer.

For example, suppose that your stack has instances A, B, and C, and you start a new instance, D. After D has finished running its setup recipes, AWS OpsWorks Stacks triggers the Configure event on A, B, C, and D. If you subsequently stop A, AWS OpsWorks Stacks triggers the Configure event on B, C, and D. AWS OpsWorks Stacks responds to the Configure event by running each layer’s Configure recipes, which update the instances’ configuration to reflect the current set of online instances. The Configure event is therefore a good time to regenerate configuration files. For example, the HAProxy Configure recipes reconfigure the load balancer to accommodate any changes in the set of online application server instances.
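
A custom Configure recipe is written in Chef and reads the list of online instances from the instance's node object. Purely as an illustration of the same logic, the Python sketch below rebuilds tdojo-nodes.config from the OpsWorks API; the stack ID and output path are placeholders.

```python
# The real Configure recipe would be written in Chef and read the stack
# state from the instance's node object; purely to illustrate the same
# logic, this Python sketch rebuilds tdojo-nodes.config from the current
# set of online instances via the OpsWorks API. Stack ID and output path
# are placeholders.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

instances = opsworks.describe_instances(
    StackId="11111111-2222-3333-4444-555555555555"
)["Instances"]

# Keep only instances that are currently online and have a private IP.
online_ips = sorted(
    i["PrivateIp"] for i in instances
    if i["Status"] == "online" and "PrivateIp" in i
)

# Rewrite the shared configuration file with the current cluster members.
with open("/etc/tdojo/tdojo-nodes.config", "w") as config_file:
    config_file.write("\n".join(online_ips) + "\n")
```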

Hence, the correct solution for this scenario is: Layer the application server nodes of the cluster using AWS OpsWorks Stacks and add a Chef recipe associated with the Configure Lifecycle Event which populates the tdojo-nodes.config file. Set up a configuration which runs each layer’s Configure recipes that updates the configuration file when a cluster change is detected.

The option that says: Use AWS OpsWorks Stacks which layers the application server nodes of the cluster using a Chef recipe associated with the Deploy Lifecycle Event. Set up a configuration which populates the tdojo-nodes.config file and runs each layer’s Deploy recipes that updates the configuration file when a cluster change is detected is incorrect. Although this is properly using the AWS OpsWorks Stacks Lifecycle Events to track the configuration file, the type of Lifecycle event being used is wrong. You should use the Configure Lifecycle Event instead.

The option that says: Store the tdojo-nodes.config configuration file in CodeCommit and set up a CodeDeploy deployment group based on the tags of each application server nodes of the cluster. Integrate CodeDeploy with Amazon CloudWatch Events to automatically update the configuration file of each server if a new node is added or removed in the cluster then persist the changes in CodeCommit is incorrect because CodeCommit is not a suitable service to use to store your dynamic configuration files. Moreover, the integration of CodeDeploy and CloudWatch Events is only applicable when the actual deployment is being executed and not suitable for monitoring your cluster.

The option that says: Store the tdojo-nodes.config configuration file in Amazon S3 and develop a crontab script that will periodically poll any changes to the file and download it if there is any. Use a Node.js based process manager such as PM2 to restart the application servers in the cluster in the event that the configuration file is modified. Use CloudWatch Events to monitor the changes in your cluster and update the configuration file in S3 for any changes is incorrect because using Amazon S3 to store the configuration file entails a lot of management overhead. A better solution is to use AWS OpsWorks Stacks instead of CloudWatch Events and S3.

References:
https://docs.aws.amazon.com/opsworks/latest/userguide/welcome_classic.html#welcome-classic-lifecycle
https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html

Note: This question was extracted from our AWS Certified DevOps Engineer Professional Practice Exams.


References:
https://aws.amazon.com/opsworks/chefautomate/features
https://aws.amazon.com/opsworks/chefautomate/pricing
https://aws.amazon.com/opsworks/chefautomate/faqs
https://aws.amazon.com/opsworks/puppetenterprise/feature
https://aws.amazon.com/opsworks/puppetenterprise/pricing
https://aws.amazon.com/opsworks/puppetenterprise/faqs
https://aws.amazon.com/opsworks/stacks/features
https://aws.amazon.com/opsworks/stacks/pricing
https://aws.amazon.com/opsworks/stacks/faqs
