
Release with a Pipeline: Continuous Delivery to AWS with GitHub Actions

2024-01-24T01:07:44+00:00

This is the final part of a three-part series on a web application project: from building a private infrastructure, to building a deployment pipeline with AWS' cloud-native continuous delivery service AWS CodePipeline, and now to making the infrastructure accessible on a public domain and building a continuous deployment pipeline with a third-party CD tool, GitHub Actions. Starting from the private infrastructure built previously, we will update the S3 bucket policy with a statement that allows the CloudFront resource to access the bucket. As a best practice, this statement will be added to the Terraform script of the infrastructure to make it [...]
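
The article applies this change through Terraform; purely as an illustration of the same idea, here is a minimal boto3 sketch of a bucket policy statement that lets a CloudFront distribution read objects from the bucket via the CloudFront service principal. The bucket name and distribution ARN are hypothetical placeholders, not values from the article.

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical names; substitute your own bucket and distribution ARN.
bucket_name = "my-react-app-bucket"
distribution_arn = "arn:aws:cloudfront::123456789012:distribution/EXAMPLE123"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontRead",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
            # Limit access to requests coming from this specific distribution.
            "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
        }
    ],
}

# Note: this overwrites any existing bucket policy, so merge statements first if needed.
s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))
```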

Distributed Data Parallel Training with TensorFlow and Amazon SageMaker Distributed Training Library

2024-01-22T00:58:08+00:00

In the realm of machine learning, the ability to train models effectively and efficiently stands as a cornerstone of success. As datasets grow exponentially and models become more complex, traditional single-node training methods increasingly fall short. This is where distributed training enters the picture, offering a scalable solution to this growing challenge. Distributed training is a technique used to train machine learning models on large datasets more efficiently. By splitting the workload across multiple compute nodes, it significantly reduces training time. There are two main strategies in distributed training: data parallelism, where the dataset is partitioned [...]
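
As a rough sketch of what launching such a job can look like (not code from the article), the SageMaker Python SDK's TensorFlow estimator exposes a distribution option for the SageMaker data parallel library. The entry point, role, instance type, and framework version below are assumptions; the library only supports specific GPU instance types and framework versions, so verify them against the SageMaker documentation.

```python
import sagemaker
from sagemaker.tensorflow import TensorFlow

# Placeholder values; adjust to your own training script, role, and supported versions.
estimator = TensorFlow(
    entry_point="train.py",               # your TensorFlow training script
    role=sagemaker.get_execution_role(),   # or an explicit IAM role ARN
    instance_count=2,                      # the dataset is sharded across these nodes
    instance_type="ml.p4d.24xlarge",       # SMDDP requires specific GPU instance types
    framework_version="2.12",              # verify against currently supported versions
    py_version="py310",
    # Enables SageMaker's distributed data parallel (SMDDP) library.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit("s3://my-bucket/training-data/")  # hypothetical dataset location
```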

Securing Machine Learning Pipelines: Best Practices in Amazon SageMaker

2024-01-17T00:45:41+00:00

In today's digital era, the importance of security in machine learning (ML) pipelines cannot be overstated. As ML systems increasingly become integral to business operations and decision-making, ensuring the integrity and security of these systems is paramount. A breach or a flaw in an ML pipeline can lead to compromised data, erroneous decision-making, and potentially catastrophic consequences for businesses and individuals alike. This section will delve into why securing ML pipelines is crucial, highlighting the potential risks and impacts of security lapses. Amazon SageMaker is a fully managed service that provides every developer and [...]

Building a Deployment Pipeline for a React Application with AWS CodePipeline

2024-01-07T02:48:10+00:00

This is the second part of a series of blogs about managing the platform of a React application infrastructure, this time adding a continuous deployment component to the infrastructure built earlier. In the previous article, I wrote about how a private React application infrastructure can be deployed with Terraform code. Now, we will explore this further by building a deployment pipeline using AWS CodePipeline. Let's assume that the source code of the React web application is hosted on GitHub. Using the GitHub connections feature of AWS CodePipeline, we can authorize the third-party provider to work with AWS resources to establish integration between [...]
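
As a hedged illustration of that connection-based source stage (not code from the article), a pipeline's source action can reference an AWS CodeStar Connections connection to GitHub. The connection ARN, repository, and branch below are placeholders, and the connection itself still needs to be authorized once against your GitHub account.

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Placeholder identifier; the connection must already exist and be authorized.
connection_arn = "arn:aws:codestar-connections:us-east-1:123456789012:connection/example"

source_stage = {
    "name": "Source",
    "actions": [
        {
            "name": "GitHubSource",
            "actionTypeId": {
                "category": "Source",
                "owner": "AWS",
                "provider": "CodeStarSourceConnection",
                "version": "1",
            },
            "configuration": {
                "ConnectionArn": connection_arn,
                "FullRepositoryId": "my-org/my-react-app",  # hypothetical repository
                "BranchName": "main",
            },
            "outputArtifacts": [{"name": "SourceOutput"}],
        }
    ],
}
# This stage dictionary would form part of the pipeline definition passed to
# codepipeline.create_pipeline(pipeline={...}).
```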

Securing LLMs with Guardrails for Amazon Bedrock

2024-01-03T00:32:13+00:00

One of the pillars of the AWS Well-Architected Framework is security. When running your workloads in the cloud, it is foundational to think about privacy, access limits, compliance with regulatory requirements, and data protection; and this includes Amazon Bedrock. Among the several AI announcements made during AWS CEO Adam Selipsky's keynote at AWS re:Invent 2023 was Guardrails for Amazon Bedrock. As AI technology evolves and becomes more mature, it makes sense to also reinvent the way usage is handled by security safeguards. Guardrails for Amazon Bedrock allow security policies to be applied across foundation models, to fulfill [...]
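
As a minimal sketch of the idea (placeholder names, and only one of the several policy types that guardrails support), a guardrail can be defined through the Bedrock control-plane API and later referenced when invoking a model; verify the exact fields against the current boto3 documentation.

```python
import boto3

bedrock = boto3.client("bedrock")

# Hypothetical guardrail that blocks high-severity hateful content in both
# prompts and model responses.
response = bedrock.create_guardrail(
    name="demo-guardrail",
    description="Blocks hateful content in inputs and outputs",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"}
        ]
    },
    blockedInputMessaging="Sorry, this request cannot be processed.",
    blockedOutputsMessaging="Sorry, this response was blocked.",
)

guardrail_id = response["guardrailId"]
# The guardrail id and a version can then be passed to the Bedrock runtime call,
# e.g. invoke_model(..., guardrailIdentifier=guardrail_id, guardrailVersion="DRAFT").
```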

AWS Device Farm

2024-04-12T13:48:53+00:00

AWS Device Farm Cheat Sheet. AWS Device Farm allows you to examine and interact with your Android, iOS, and web applications on actual, physical devices maintained by Amazon Web Services (AWS). Key features include automated app testing: Device Farm lets you either upload your own custom tests or use the built-in, script-free compatibility tests. Tests run concurrently, which enables tests on various devices to commence within minutes. Upon the completion [...]
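
As a rough sketch of starting such a run with boto3 (not taken from the cheat sheet), the built-in fuzz test is one of the script-free options; the project, app, and device pool ARNs below are placeholders, and the Device Farm API is served from us-west-2.

```python
import boto3

# Device Farm's API endpoint lives in us-west-2.
devicefarm = boto3.client("devicefarm", region_name="us-west-2")

# Placeholder ARNs for a project, an uploaded app package, and a device pool.
run = devicefarm.schedule_run(
    projectArn="arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE",
    appArn="arn:aws:devicefarm:us-west-2:123456789012:upload:EXAMPLE",
    devicePoolArn="arn:aws:devicefarm:us-west-2:123456789012:devicepool:EXAMPLE",
    name="smoke-test-run",
    test={"type": "BUILTIN_FUZZ"},  # script-free, built-in fuzz/compatibility test
)

print(run["run"]["arn"], run["run"]["status"])
```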

AWS Well-Architected Tool

2024-03-23T07:02:34+00:00

AWS Well-Architected Tool Cheat Sheet. The AWS Well-Architected Tool is a service that helps you review your workloads and compares them to the latest AWS architectural best practices. The tool provides recommendations for making your workloads more reliable, secure, efficient, and cost-effective. A workload is a term used to describe a collection of components that collectively contribute to business value; this could range from marketing websites, e-commerce platforms, and backends for mobile applications to analytics platforms. The complexity of a workload [...]
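
As an illustrative sketch only (names, region, and owner are placeholders; check the boto3 wellarchitected reference for the exact required fields), a workload can also be registered for review programmatically.

```python
import boto3

wellarchitected = boto3.client("wellarchitected")

# Hypothetical workload reviewed against the standard Well-Architected lens.
workload = wellarchitected.create_workload(
    WorkloadName="ecommerce-platform",
    Description="Customer-facing e-commerce workload",
    Environment="PRODUCTION",
    AwsRegions=["us-east-1"],
    ReviewOwner="platform-team@example.com",
    Lenses=["wellarchitected"],
)

workload_id = workload["WorkloadId"]
# Lens reviews and improvement recommendations can then be retrieved, e.g. with
# wellarchitected.get_lens_review(WorkloadId=workload_id, LensAlias="wellarchitected").
```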

Batch Data Ingestion Simplified in AWS

2023-12-12T00:40:18+00:00

Today's tech industry is dominated by Big Data and Cloud Computing, and it is crucial for companies and organizations to manage large volumes of data efficiently. To address this need, AWS offers robust solutions for handling such large volumes of data, particularly through batch data ingestion. This process involves collecting and importing bulk data into storage or other processing systems at regular intervals or on specific events. Batch data ingestion is crucial for scenarios where immediate or real-time processing is not necessary, allowing for efficient resource utilization. Batch data ingestion in AWS is not only efficient and cost-effective but [...]
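
To make the pattern concrete, here is a hedged, minimal sketch of one common batch ingestion flow, not necessarily the services the full article covers: files accumulated locally are uploaded to an S3 landing prefix in one batch, and an AWS Glue job is then started to process them. The bucket, prefix, and job names are placeholders.

```python
from pathlib import Path
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# Placeholder names for the landing bucket, prefix, and Glue job.
bucket = "my-ingestion-landing-zone"
prefix = "daily-batch/2023-12-12/"
glue_job = "daily-batch-transform"

# Upload the accumulated batch of files in one pass.
for path in Path("/data/outgoing").glob("*.csv"):
    s3.upload_file(str(path), bucket, prefix + path.name)

# Kick off the batch processing job once the whole batch has landed.
run = glue.start_job_run(
    JobName=glue_job,
    Arguments={"--input_path": f"s3://{bucket}/{prefix}"},
)
print("Started Glue job run:", run["JobRunId"])
```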

Building Code-Free Generative AI Apps with PartyRock

2023-12-05T00:19:12+00:00

What is PartyRock? It has been two weeks since Amazon announced PartyRock, an Amazon Bedrock Playground. It comes with the tagline “Everyone can build AI apps”. According to Amazon President and CEO, Andy Jassy, it was just an internal tool created by AWS developers to experiment with Foundation Models from Amazon Bedrock. The name PartyRock was in reference to it being a fun and collaborative way to experience Amazon Bedrock. I joined the party and played around with PartyRock to see what all the fuss was about, and it literally took me less than five minutes to create my first [...]
