AWS Lambda

  • A serverless compute service.
  • Lambda executes your code only when needed and scales automatically.
  • Lambda functions are stateless – no affinity to the underlying infrastructure.
  • You choose the amount of memory you want to allocate to your functions and AWS Lambda allocates proportional CPU power, network bandwidth, and disk I/O.
  • AWS Lambda is compliant with SOC, HIPAA, PCI, and ISO standards.
  • Natively supports the following languages:
    • Node.js
    • Java
    • C#
    • Go
    • Python
    • Ruby
    • PowerShell
  • You can also provide your own custom runtime (a minimal Python handler in one of the native runtimes is sketched below).
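
A minimal sketch of a function in one of these runtimes, using Python (the handler name and payload field are illustrative):

# handler.py - a minimal Lambda handler.
# Lambda invokes this function with the invocation event and a context object.
def lambda_handler(event, context):
    # The event's shape depends on the event source; here we assume a simple
    # JSON object with an optional "name" field.
    name = event.get("name", "world")
    # The return value becomes the response for synchronous invocations.
    return {"statusCode": 200, "body": f"Hello, {name}!"}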


Components of a Lambda Application

  • Function – a script or program that runs in Lambda. Lambda passes invocation events to your function. The function processes an event and returns a response.
  • Runtimes – Lambda runtimes allow functions in different languages to run in the same base execution environment. The runtime sits in-between the Lambda service and your function code, relaying invocation events, context information, and responses between the two.
  • Layers – Lambda layers are a distribution mechanism for libraries, custom runtimes, and other function dependencies. Layers let you manage your in-development function code independently from the unchanging code and resources that it uses.
  • Event source – an AWS service or a custom service that triggers your function and executes its logic.
  • Downstream resources – an AWS service that your Lambda function calls once it is triggered.
  • Log streams – While Lambda automatically monitors your function invocations and reports metrics to CloudWatch, you can annotate your function code with custom logging statements that let you analyze the execution flow and performance of your Lambda function (a logging sketch follows this list).
  • AWS Serverless Application Model (AWS SAM) – an open-source framework that you can use to build serverless applications on AWS, providing shorthand syntax for functions, APIs, databases, and event source mappings.
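
A sketch of the custom logging mentioned above, assuming the Python runtime, where output from the standard logging module is written to the function's CloudWatch Logs log stream:

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Both statements below end up in the function's CloudWatch log stream,
    # where they can be used to trace execution flow and performance.
    logger.info("Received event: %s", json.dumps(event))
    logger.info("Time remaining (ms): %d", context.get_remaining_time_in_millis())
    return {"status": "ok"}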

Lambda Functions

  • You upload your application code in the form of one or more Lambda functions. Lambda stores code in Amazon S3 and encrypts it at rest.
  • To create a Lambda function, you first package your code and dependencies in a deployment package. Then, you upload the deployment package to create your Lambda function.
  • After your Lambda function is in production, Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch.
  • Configure basic function settings, including the description, memory usage, execution timeout, and the role that the function uses to execute your code (several of these appear in the sketch after this list).
  • Environment variables are always encrypted at rest, and can be encrypted in transit as well.
  • Versions and aliases are secondary resources that you can create to manage function deployment and invocation.
  • A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. Use layers to manage your function’s dependencies independently and keep your deployment package small.
  • You can configure a function to mount an Amazon EFS file system to a local directory. With Amazon EFS, your function code can access and modify shared resources securely and at high concurrency.
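
A sketch of these steps using the AWS SDK for Python (boto3); the function name, role ARN, and package file are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Create the function from a local ZIP deployment package.
with open("function.zip", "rb") as f:  # placeholder package
    lambda_client.create_function(
        FunctionName="my-function",    # placeholder name
        Runtime="python3.9",
        Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder
        Handler="handler.lambda_handler",
        Code={"ZipFile": f.read()},
        Timeout=30,      # execution timeout in seconds
        MemorySize=256,  # memory in MB; CPU scales proportionally
        Environment={"Variables": {"STAGE": "prod"}},  # encrypted at rest
    )

# Publish an immutable version and point an alias at it.
version = lambda_client.publish_version(FunctionName="my-function")
lambda_client.create_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion=version["Version"],
)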

Invoking Functions

  • Lambda supports synchronous and asynchronous invocation of a Lambda function. You can control the invocation type only when you invoke a Lambda function (referred to as on-demand invocation); both types appear in the sketch after this list.
  • An event source is the entity that publishes events, and a Lambda function is the custom code that processes the events.
  • Event source mapping maps an event source to a Lambda function. It enables automatic invocation of your Lambda function when events occur. 
  • Lambda provides event source mappings for the following services:
    • Amazon Kinesis
    • Amazon DynamoDB
    • Amazon Simple Queue Service
  • Your function’s concurrency is the number of instances that serve requests at a given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function’s concurrency.
  • To ensure that a function can always reach a certain level of concurrency, you can configure the function with reserved concurrency. When a function has reserved concurrency, no other function can use that concurrency. Reserved concurrency also limits the maximum concurrency for the function.
  • To enable your function to scale without fluctuations in latency, use provisioned concurrency. By allocating provisioned concurrency before an increase in invocations, you can ensure that all requests are served by initialized instances with very low latency.
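
A boto3 sketch of the two on-demand invocation types, an event source mapping, and reserved concurrency; the function name, queue ARN, and limit are placeholders:

import json
import boto3

lambda_client = boto3.client("lambda")
payload = json.dumps({"name": "demo"}).encode()

# Synchronous invocation: Lambda runs the function and returns its response.
resp = lambda_client.invoke(
    FunctionName="my-function",  # placeholder
    InvocationType="RequestResponse",
    Payload=payload,
)
print(json.load(resp["Payload"]))

# Asynchronous invocation: Lambda queues the event and returns immediately.
lambda_client.invoke(
    FunctionName="my-function",
    InvocationType="Event",
    Payload=payload,
)

# Event source mapping: poll an SQS queue and invoke the function with batches.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",  # placeholder
    FunctionName="my-function",
    BatchSize=10,
)

# Reserved concurrency: guarantees capacity and caps maximum concurrency.
lambda_client.put_function_concurrency(
    FunctionName="my-function",
    ReservedConcurrentExecutions=50,
)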

Configuring a Lambda Function to Access Resources in a VPC

In AWS Lambda, you can set up your function to establish a connection to your virtual private cloud (VPC). With this connection, your function can access the private resources of your VPC during execution, such as EC2 instances, RDS databases, and many others.

By default, AWS executes your Lambda function code securely within a VPC. Alternatively, you can enable your Lambda function to access resources inside your private VPC by providing additional VPC-specific configuration information, such as VPC subnet IDs and security group IDs. Lambda uses this information to set up elastic network interfaces, which enable your Lambda function to connect securely to other resources within your VPC.
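
A boto3 sketch of the VPC-specific configuration described above; the subnet and security group IDs are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Lambda uses these IDs to create elastic network interfaces in the given
# subnets, through which the function reaches resources inside the VPC.
lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],  # placeholders
        "SecurityGroupIds": ["sg-0123abcd"],                  # placeholder
    },
)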

Lambda@Edge

  • Lets you run Lambda functions to customize content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers.
  • You can use Lambda functions to change CloudFront requests and responses at the following points (a viewer-request sketch follows this list):
    • After CloudFront receives a request from a viewer (viewer request)
    • Before CloudFront forwards the request to the origin (origin request)
    • After CloudFront receives the response from the origin (origin response)
    • Before CloudFront forwards the response to the viewer (viewer response)
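
A minimal viewer-request sketch in Python; the injected header is illustrative. The handler receives the CloudFront request record and returns it so that processing continues:

def lambda_handler(event, context):
    # CloudFront delivers the request inside the event record.
    request = event["Records"][0]["cf"]["request"]
    # Example: add a custom header before CloudFront processes the request.
    request["headers"]["x-demo-header"] = [
        {"key": "X-Demo-Header", "value": "1"}
    ]
    # Returning the request tells CloudFront to continue processing it.
    return request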

Deployment and Monitoring

  • You can automate your serverless application’s release process using AWS CodePipeline and AWS CodeDeploy.
  • Lambda automatically tracks the behavior of your function invocations and provides feedback that you can monitor. In addition, it provides metrics that allow you to analyze the full function invocation spectrum, including event source integration and whether downstream resources perform as expected (a metrics sketch follows this list).
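
For example, a boto3 sketch that reads one of these metrics back from CloudWatch; the function name and time window are placeholders:

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# Total invocations of the function over the last 24 hours, per hour.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,  # one datapoint per hour
    Statistics=["Sum"],
)
print(stats["Datapoints"])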

Pricing

  • You are charged based on the total number of requests for your functions and on the duration, that is, the time it takes for your code to execute. Duration charges depend on the amount of memory you allocate to your function.
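
A worked sketch of the pricing model, using illustrative rates of $0.20 per million requests and $0.0000166667 per GB-second (always check the current pricing page):

# Illustrative monthly cost: 5 million requests at 512 MB memory,
# each running for an average of 120 ms.
requests = 5_000_000
avg_duration_s = 0.120
memory_gb = 512 / 1024

request_cost = requests / 1_000_000 * 0.20          # request charge
gb_seconds = requests * avg_duration_s * memory_gb  # duration charge unit
duration_cost = gb_seconds * 0.0000166667

print(f"Requests: ${request_cost:.2f}, Duration: ${duration_cost:.2f}")
# Prints: Requests: $1.00, Duration: $5.00 (before any free tier)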


Validate Your Knowledge

Question 1

A company is deploying the package of its Lambda function, which is compressed as a ZIP file, to AWS. However, they are getting an error in the deployment process because the package is too large. The manager instructed the developer to keep the deployment package small to make the development process much easier and more modularized. This should also help prevent errors that may occur when dependencies are installed and packaged with the function code.

Which of the following options is the MOST suitable solution that the developer should implement?

  1. Upload the deployment package to S3.
  2. Zip the deployment package again to further compress the zip file.
  3. Upload the other dependencies of your function as a separate Lambda Layer instead.
  4. Compress the deployment package as a TAR file instead.

Correct Answer: 3

You can configure your Lambda function to pull in additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in your deployment package.

Layers let you keep your deployment package small, which makes development easier. You can avoid errors that can occur when you install and package dependencies with your function code. For Node.js, Python, and Ruby functions, you can develop your function code in the Lambda console as long as you keep your deployment package under 3 MB.

A function can use up to 5 layers at a time. The total unzipped size of the function and all layers can’t exceed the unzipped deployment package size limit of 250 MB.

You can create layers, or use layers published by AWS and other AWS customers. Layers support resource-based policies for granting layer usage permissions to specific AWS accounts, AWS Organizations, or all accounts. Layers are extracted to the /opt directory in the function execution environment. Each runtime looks for libraries in a different location under /opt, depending on the language. Structure your layer so that function code can access libraries without additional configuration.
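
A boto3 sketch of publishing dependencies as a layer and attaching it to a function; the bucket, key, and names are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Publish the shared dependencies as a layer. For Python, libraries are
# typically packaged under python/ in the ZIP so the runtime finds them
# under /opt/python.
layer = lambda_client.publish_layer_version(
    LayerName="my-deps",                                      # placeholder
    Content={"S3Bucket": "my-bucket", "S3Key": "layer.zip"},  # placeholders
    CompatibleRuntimes=["python3.9"],
)

# Attach the layer so the function's own deployment package stays small.
lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder
    Layers=[layer["LayerVersionArn"]],
)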

Hence, the correct answer is to upload the other dependencies of your function as a separate Lambda Layer instead.

Uploading the deployment package to S3 is incorrect. Although you can upload large deployment packages of over 50 MB in size via S3, your function will still be in a single layer. This doesn’t meet the requirement of making the deployment package small and modularized. You have to use Lambda Layers instead.

Zipping the deployment package again to further compress the zip file is incorrect because doing this will not significantly make the ZIP file smaller.


Compressing the deployment package as a TAR file instead is incorrect. Although it may decrease the size of the deployment package, it is still not enough to totally solve the issue. A compressed TAR file is not significantly smaller than a ZIP file.

References:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
https://docs.aws.amazon.com/lambda/latest/dg/limits.html

Note: This question was extracted from our AWS Certified Developer Associate Practice Exams.

Question 2

A sports technology company plans to build the latest version of its kneepads, which can collect data from the athletes wearing them. The product owner wants to develop them with wearable medical sensors that ingest near-real-time data securely at scale and store it in durable storage. Furthermore, the solution should only keep the non-confidential information from the streaming data and exclude records classified as sensitive.

Which solution achieves these requirements with the least operational overhead?

  1. Using Amazon Kinesis Data Firehose, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. Schedule a separate job that invokes the Lambda function once the data is stored in Amazon S3.
  2. Using Amazon Kinesis Data Firehose, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. During the creation of the Kinesis Data Firehose delivery stream, enable record transformation and use the Lambda function.
  3. Using Amazon Kinesis Data Streams, ingest the streaming data, and use an Amazon EC2 instance for durable storage. Write an Amazon Kinesis Data Analytics application that removes sensitive data.
  4. Using Amazon Kinesis Data Streams, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. Schedule a separate job that invokes the Lambda function once the data is stored in Amazon S3.

Correct Answer: 2

Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. It is useful for rapidly moving data off data producers and then continuously processing the data, whether used to transform the data before emitting to a data store, run real-time metrics and analytics, or derive more complex data streams for further processing.

Within seconds, the data will be available for your Amazon Kinesis applications to read and process from the stream. The throughput of an Amazon Kinesis data stream is determined by the number of shards within it; to change the throughput, adjust the shard count using the UpdateShardCount API or the AWS Management Console.

Meanwhile, Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match your data’s throughput and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the storage used at the destination and increasing security. With the Firehose data transformation feature, you can now specify a Lambda function that can perform transformations directly on the stream when you create a delivery stream.
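
A sketch of such a transformation function, with hypothetical field names: Firehose hands the Lambda function base64-encoded records and expects each one back with a recordId, a result, and the transformed data:

import base64
import json

SENSITIVE_FIELDS = {"patient_name", "ssn"}  # hypothetical field names

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Keep only the non-confidential fields.
        cleaned = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # use "Dropped" to exclude a record entirely
            "data": base64.b64encode(json.dumps(cleaned).encode()).decode(),
        })
    return {"records": output}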

Among the choices, the solution that meets the requirements with the least operational overhead is the option that says: Using Amazon Kinesis Data Firehose, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. During the creation of the Kinesis Data Firehose delivery stream, enable record transformation and use the Lambda function.

The option that says: Using Amazon Kinesis Data Firehose, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. Schedule a separate job that invokes the Lambda function once the data is stored in Amazon S3 is incorrect. You do not need to schedule a different job for the AWS Lambda function since you can directly enable and set up a data transformation process in the Kinesis Data Firehose stream. With the Firehose data transformation feature, a Lambda function can be specified to perform transformations directly on the data stream.

The option that says: Using Amazon Kinesis Data Streams, ingest the streaming data, and use an Amazon EC2 instance for durable storage. Write an Amazon Kinesis Data Analytics application that removes sensitive data is incorrect. Writing a custom Kinesis Data Analytics application entails additional effort. In addition, Amazon EC2 does not provide durable storage.

The option that says: Using Amazon Kinesis Data Streams, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. Schedule a separate job that invokes the Lambda function once the data is stored in Amazon S3 is incorrect. Amazon Kinesis Data Streams does not support direct transfer to Amazon S3 without using another service. The data transformation is also not done in near real-time. A better solution is to use Amazon Kinesis Firehose with its data transformation feature enabled.

References:
https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html
https://aws.amazon.com/blogs/big-data/persist-streaming-data-to-amazon-s3-using-amazon-kinesis-firehose-and-aws-lambda/
https://aws.amazon.com/blogs/compute/amazon-kinesis-firehose-data-transformation-with-aws-lambda/

Note: This question was extracted from our AWS Certified Data Analytics Specialty Practice Exams.


Additional Training Materials: AWS Lambda Video Courses on Udemy

  1. AWS Serverless APIs & Apps – A Complete Introduction by Maximilian Schwarzmüller
  2. AWS Lambda & Serverless Architecture Bootcamp (Build 5 Apps) by Riyaz Sayyad
  3. Build a Serverless App with AWS Lambda – Hands On! by Sundog Education
  4. AWS Lambda and the Serverless Framework – Hands On Learning! by Stephane Maarek

References:
https://docs.aws.amazon.com/lambda/latest/dg
https://aws.amazon.com/lambda/features/
https://aws.amazon.com/lambda/pricing/
https://aws.amazon.com/lambda/faqs/
