AWS Lambda


Last updated on January 18, 2024

AWS Lambda Cheat Sheet

  • A serverless compute service.
  • Lambda executes your code only when needed and scales automatically.
  • Lambda functions are stateless – no affinity to the underlying infrastructure.
  • You choose the amount of memory you want to allocate to your functions and AWS Lambda allocates proportional CPU power, network bandwidth, and disk I/O.
  • AWS Lambda is compliant with SOC, HIPAA, PCI DSS, and ISO standards.
  • Natively supports the following languages:
    • Node.js
    • Java
    • C#
    • Go
    • Python
    • Ruby
    • PowerShell
  • You can also provide your own custom runtime.
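
Since Lambda simply calls a handler in whichever runtime you choose, a minimal Python handler illustrates the programming model. This is only a sketch; the event fields used below are hypothetical.

```python
import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes with the event and a context object."""
    # 'event' carries the trigger payload; 'context' exposes metadata such as
    # the request ID and the remaining execution time.
    name = event.get("name", "world")
    print(f"Remaining time (ms): {context.get_remaining_time_in_millis()}")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```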

 

Components of a Lambda Application

  • Function – a script or program that runs in Lambda. Lambda passes invocation events to your function. The function processes an event and returns a response.
  • Execution environment – a secure, isolated micro virtual machine where a Lambda function is executed.
  • Runtimes – Lambda runtimes allow functions in different languages to run in the same base execution environment. The runtime sits in-between the Lambda service and your function code, relaying invocation events, context information, and responses between the two.
  • Environment variables – key-value pairs that you can use to store configuration settings for your function. They can be used to pass dynamic parameters to your function at runtime, such as database connection strings, API keys, and other sensitive information.
  • Layers – Lambda layers are a distribution mechanism for libraries, custom runtimes, and other function dependencies. Layers let you manage your in-development function code independently from the unchanging code and resources that it uses.
  • Event source – an AWS service or a custom service that triggers your function and executes its logic.
  • Downstream resources – an AWS service that your Lambda function calls once it is triggered.
  • Log streams – While Lambda automatically monitors your function invocations and reports metrics to CloudWatch, you can annotate your function code with custom logging statements that allow you to analyze the execution flow and performance of your Lambda function.
  • AWS Serverless Application Model (AWS SAM) – an open-source framework for defining and deploying serverless applications, including Lambda functions and their event sources, using shorthand templates that extend AWS CloudFormation.
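
Two of the components above – environment variables and log streams – show up directly in function code. A minimal sketch (the TABLE_NAME variable and its default value are assumptions for illustration):

```python
import os
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Environment variables configured on the function are exposed through os.environ.
TABLE_NAME = os.environ.get("TABLE_NAME", "my-table")

def lambda_handler(event, context):
    # Anything written to stdout/stderr or via the logging module ends up in the
    # function's CloudWatch log stream.
    logger.info("Processing event for table %s", TABLE_NAME)
    return {"table": TABLE_NAME, "received_keys": list(event.keys())}
```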

Lambda Functions

  • You can upload your application code as a ZIP file or a container image hosted on Amazon Elastic Container Registry (Amazon ECR).
  • To create a Lambda function, you first package your code and dependencies in a deployment package. Then, you upload the deployment package to create your Lambda function.
  • After your Lambda function is in production, Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch.
  • Configure basic function settings, including the description, memory, ephemeral storage (512 MB – 10 GB), execution timeout (15 minutes max), and the execution role that the function uses to run your code.
  • Environment variables are always encrypted at rest and can be encrypted in transit as well.
  • Versions – a snapshot of your function’s state at a given time. When you publish a new version, a :version-number is appended to your function’s ARN:
    • arn:aws:lambda:us-east-2:123456789123:function:my-function:1
  • Aliases – a pointer to a specific Lambda function version. An alias gives the version a human-readable name, so callers can invoke the function without tracking version numbers (see the sketch after this list). An alias ARN follows this format:
    • arn:aws:lambda:us-east-2:123456789123:function:my-function:MyAlias
  • A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. Use layers to manage your function’s dependencies independently and keep your deployment package small.
  • You can configure a function to mount an Amazon EFS file system to a local directory. With Amazon EFS, your function code can access and modify shared resources securely and at high concurrency.
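
As a rough illustration of versions and aliases using the boto3 SDK (the function name and alias are hypothetical), publishing a version and pointing an alias at it might look like this:

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish an immutable snapshot of the function's current code and configuration.
version = lambda_client.publish_version(
    FunctionName="my-function",
    Description="Release candidate",
)

# Point a human-readable alias at that version; clients can invoke the alias ARN.
alias = lambda_client.create_alias(
    FunctionName="my-function",
    Name="MyAlias",
    FunctionVersion=version["Version"],
)

print(alias["AliasArn"])
# e.g. arn:aws:lambda:us-east-2:123456789123:function:my-function:MyAlias
```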

Invoking Lambda Functions

  • Lambda supports synchronous and asynchronous invocation of a Lambda function.
  • Synchronous invocation
    • when a function is invoked synchronously, AWS Lambda waits until the function is done processing, then returns the result.
    • examples of AWS services that invoke Lambda functions synchronously include Amazon API Gateway, Elastic Load Balancing (Application Load Balancer), Amazon Cognito, Amazon CloudFront (Lambda@Edge), and Amazon Kinesis Data Firehose.
  • Asynchronous invocation
    • when a function is invoked asynchronously, AWS Lambda places the event in an internal queue and a separate process runs the function; the caller does not wait for the result.
    • Lambda returns a 202 (Accepted) status code immediately after the event is queued, and processing continues in the background. The 202 only confirms that the event was queued; it does not indicate whether the function ran successfully.
    • typically used for long-latency processes that run in the background, such as batch operations, video encoding, and order processing.
    • can only accept a payload of up to 256 KB.
    • examples of AWS services that invoke Lambda functions asynchronously include Amazon S3, Amazon SNS, Amazon EventBridge (CloudWatch Events), AWS CodeCommit, and Amazon SES.
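
A rough sketch of both invocation modes with the boto3 SDK (the function name and payload are hypothetical): InvocationType="RequestResponse" invokes synchronously, while "Event" queues the request and returns 202 immediately.

```python
import json
import boto3

lambda_client = boto3.client("lambda")
payload = json.dumps({"orderId": "12345"}).encode("utf-8")

# Synchronous: Lambda runs the function and returns its result in the response.
sync_resp = lambda_client.invoke(
    FunctionName="my-function",
    InvocationType="RequestResponse",
    Payload=payload,
)
print(sync_resp["StatusCode"], sync_resp["Payload"].read())  # 200 plus the function's output

# Asynchronous: Lambda queues the event and responds right away.
async_resp = lambda_client.invoke(
    FunctionName="my-function",
    InvocationType="Event",
    Payload=payload,
)
print(async_resp["StatusCode"])  # 202 (Accepted); processing continues in the background
```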

Event Source Mapping

  • Event source mapping is a Lambda resource that reads from a queue or stream and synchronously invokes a Lambda function.
  • You can apply an event-filtering pattern to process events that are only relevant to your application. This allows you to save money by reducing the number of function invocations.
  • Event source mapping invokes a function if one of the following conditions is met:
    • The batch size is reached
    • The maximum batching window is reached
    • The total payload reaches 6 MB (the synchronous invocation payload limit)
  • Lambda provides event source mappings for services such as Amazon DynamoDB (Streams), Amazon Kinesis Data Streams, Amazon SQS, Amazon MSK, self-managed Apache Kafka, Amazon MQ, and Amazon DocumentDB (see the sketch below).
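
A hedged boto3 sketch of creating an event source mapping (the queue ARN, batch settings, and filter pattern are illustrative assumptions, not values from this article):

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Map an SQS queue to the function; Lambda polls the queue and invokes the
# function synchronously with batches of records.
mapping = lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-2:123456789123:orders-queue",
    FunctionName="my-function",
    BatchSize=10,                       # invoke once 10 records are gathered...
    MaximumBatchingWindowInSeconds=30,  # ...or once 30 seconds have elapsed
    FilterCriteria={                    # only forward records matching this pattern
        "Filters": [
            {"Pattern": json.dumps({"body": {"type": ["order_placed"]}})}
        ]
    },
)
print(mapping["UUID"])
```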

Deploying Code with External Dependencies

  • AWS Lambda includes a number of pre-built dependencies for specific runtimes. These dependencies can be used to run your code without having to include them in your deployment package.
  • If you’re using an external library/SDK/module in your Lambda code, do the following steps:
    1. Place all external dependencies locally in your application’s folder.
    2. Create a ZIP deployment package of your Lambda function.
    3. Upload the deployment package to AWS Lambda. You can upload the file directly in the AWS Lambda console (or via the API/CLI), or store it first in Amazon S3 and deploy it from there. A sketch of these steps follows.
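
A rough Python sketch of steps 2 and 3 (it assumes step 1 was already done, e.g. with pip install <package> -t my-function/, and the folder and function names are placeholders):

```python
import io
import os
import zipfile
import boto3

# Step 2: build the ZIP deployment package in memory from the application folder,
# which already contains the handler and its locally installed dependencies.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk("my-function"):
        for name in files:
            path = os.path.join(root, name)
            zf.write(path, arcname=os.path.relpath(path, "my-function"))

# Step 3: upload the package. Direct uploads suit small packages; larger ones
# should be staged in Amazon S3 and deployed from there.
boto3.client("lambda").update_function_code(
    FunctionName="my-function",
    ZipFile=buffer.getvalue(),
)
```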

Concurrency Management

  • Concurrency is the number of instances that serve requests at a given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function’s concurrency.
  • To ensure that a function can always reach a certain level of concurrency, you can configure the function with reserved concurrency. When a function has reserved concurrency, no other function can use that concurrency. Reserved concurrency also limits the maximum concurrency for the function.
  • To enable your function to scale without fluctuations in latency, use provisioned concurrency. By allocating provisioned concurrency before an increase in invocations, you can ensure that all requests are served by initialized instances with very low latency.
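
As a sketch of both settings with boto3 (the function name, alias, and concurrency numbers are illustrative):

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserved concurrency: guarantee (and cap) this function at 100 concurrent
# executions; no other function can use the reserved portion of the account pool.
lambda_client.put_function_concurrency(
    FunctionName="my-function",
    ReservedConcurrentExecutions=100,
)

# Provisioned concurrency: keep 20 execution environments initialized for the
# "MyAlias" alias so bursts are served without cold-start latency.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="MyAlias",
    ProvisionedConcurrentExecutions=20,
)
```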

Lambda Function URL

  • With the function URL feature of the AWS Lambda service, you can launch a secure HTTPS endpoint dedicated to your custom Lambda function.
  • You no longer need an intermediary service such as Amazon API Gateway to expose your function over HTTP, which was previously required. Just send an HTTP request to the unique URL of your Lambda function to invoke it.
  • Function URL endpoints are publicly accessible by default and have the following format:
    • https://<url-id>.lambda-url.<region>.on.aws
  • A Lambda Function URL can be created and configured via the AWS Lambda console or through the Lambda API.
  • Upon creating a function URL, AWS Lambda automatically generates a unique URL endpoint for you that you can immediately use.
  • This URL endpoint is static and doesn’t change once created.
  • Lambda function URLs are dual-stack enabled, supporting both IPv4 and IPv6.
  • The URL can be invoked via a web browser, cURL, Postman, or any HTTP client.
  • There are 2 authentication types for controlling access to a Lambda function URL:
    • AWS_IAM – uses IAM to authenticate and authorize users. Only IAM users or roles that have been granted permission to invoke the function through IAM policies will be able to do so.
    • NONE – allows anyone who has the function URL to execute the Lambda function whether they have an AWS account or not.
  • You can access your function URL through the public Internet only and not via AWS PrivateLink (e.g., VPC Endpoints)
  • Uses resource-based policies for security and access control. You can further secure your function URL by enabling cross-origin resource sharing (CORS) to whitelist origins permitted to invoke it.
  • A function URL can be applied to any Lambda function alias or to the $LATEST unpublished function version, but not to any other function version.
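
A hedged boto3 sketch of creating a function URL (the auth type choice and CORS origins are assumptions):

```python
import boto3

lambda_client = boto3.client("lambda")

url_config = lambda_client.create_function_url_config(
    FunctionName="my-function",
    AuthType="AWS_IAM",  # or "NONE" to allow unauthenticated invocations
    Cors={
        "AllowOrigins": ["https://www.example.com"],
        "AllowMethods": ["GET", "POST"],
        "MaxAge": 300,
    },
)

# The generated endpoint is static and follows https://<url-id>.lambda-url.<region>.on.aws
print(url_config["FunctionUrl"])
```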

Configuring a Lambda Function to Access Resources in a VPC

In AWS Lambda, you can configure your function to connect to your virtual private cloud (VPC). With this connection, your function can access private resources inside your VPC during execution, such as Amazon EC2 instances and Amazon RDS databases.

By default, Lambda runs your function code securely within a VPC that is owned and managed by the Lambda service, which has no access to resources inside your own VPC. Alternatively, you can enable your Lambda function to access resources inside your private VPC by providing additional VPC-specific configuration information, such as VPC subnet IDs and security group IDs. Lambda uses this information to set up elastic network interfaces that enable your function to connect securely to other resources within your VPC.
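
A sketch of attaching a function to a VPC with boto3 (the subnet and security group IDs are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach the function to private subnets; Lambda creates elastic network
# interfaces in them so the function can reach resources such as RDS or EC2.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234de56f7890", "subnet-0fed9876cb54a3210"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```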

 

Lambda@Edge

  • Lets you run Lambda functions to customize content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers.
  • You can use Lambda functions to change CloudFront requests and responses at the following points:
    • After CloudFront receives a request from a viewer (viewer request)
    • Before CloudFront forwards the request to the origin (origin request)
    • After CloudFront receives the response from the origin (origin response)
    • Before CloudFront forwards the response to the viewer (viewer response)
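
A minimal Python sketch of a viewer-request function (the URI rewrite is purely illustrative); Lambda@Edge passes the CloudFront request in the event and expects the request (or a generated response) back:

```python
def lambda_handler(event, context):
    # Lambda@Edge receives the CloudFront request under Records[0].cf.request.
    request = event["Records"][0]["cf"]["request"]

    # Example customization: rewrite a legacy path before CloudFront checks its
    # cache or forwards the request to the origin.
    if request["uri"].startswith("/old-path/"):
        request["uri"] = request["uri"].replace("/old-path/", "/new-path/", 1)

    # Returning the (possibly modified) request lets processing continue.
    return request
```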

Deploying and Monitoring Lambda Functions

  • You can automate your serverless application’s release process using AWS CodePipeline and AWS CodeDeploy.
  • Lambda will automatically track the behavior of your Lambda function invocations and provide feedback that you can monitor. In addition, it provides metrics that allow you to analyze the full function invocation spectrum, including event source integration and whether downstream resources perform as expected.

 

AWS Lambda SnapStart

  • Lambda SnapStart speeds up your Java applications by reusing a single initialized snapshot to quickly resume multiple execution environments.
  • You can use the Lambda SnapStart for Java feature to reduce cold start times without provisioning additional resources. It also removes the burden of implementing complex performance optimizations for your Java application.
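
A rough boto3 sketch of enabling SnapStart (the function name is a placeholder; SnapStart applies to published versions, so a version is published afterwards):

```python
import boto3

lambda_client = boto3.client("lambda")

# Turn on SnapStart for versions published from now on.
lambda_client.update_function_configuration(
    FunctionName="my-java-function",
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Publish a version; Lambda initializes it once, snapshots the execution
# environment, and resumes new environments from that snapshot.
lambda_client.publish_version(FunctionName="my-java-function")
```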

AWS Lambda Pricing

  • You are charged based on the total number of requests for your functions and on duration – the time it takes for your code to execute, metered as GB-seconds (allocated memory × execution time). A back-of-the-envelope example follows.
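
A back-of-the-envelope sketch of the pricing model (the workload numbers and per-request/per-GB-second rates are illustrative placeholders, not quoted AWS prices, and the free tier is ignored):

```python
# Hypothetical workload: 5 million requests per month, 512 MB memory, 200 ms average duration.
requests = 5_000_000
memory_gb = 512 / 1024          # 0.5 GB
avg_duration_s = 0.2

# Illustrative rates only (assumptions; check the AWS pricing page for current values).
price_per_million_requests = 0.20   # USD
price_per_gb_second = 0.0000166667  # USD

request_cost = requests / 1_000_000 * price_per_million_requests
gb_seconds = requests * memory_gb * avg_duration_s
duration_cost = gb_seconds * price_per_gb_second

print(f"GB-seconds: {gb_seconds:,.0f}")                        # 500,000
print(f"Monthly cost ~ ${request_cost + duration_cost:,.2f}")  # about $9.33 at these rates
```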

 


Validate Your AWS Lambda Knowledge

Question 1


A company is deploying the package of its Lambda function, which is compressed as a ZIP file, to AWS. However, they are getting an error in the deployment process because the package is too large. The manager instructed the developer to keep the deployment package small to make the development process much easier and more modularized. This should also help prevent errors that may occur when dependencies are installed and packaged with the function code.

Which of the following options is the MOST suitable solution that the developer should implement?

  1. Upload the deployment package to S3.
  2. Zip the deployment package again to further compress the zip file.
  3. Upload the other dependencies of your function as a separate Lambda Layer instead.
  4. Compress the deployment package as a TAR file instead.

Correct Answer: 3

You can configure your Lambda function to pull in additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in your deployment package.

Layers let you keep your deployment package small, which makes development easier. You can avoid errors that can occur when you install and package dependencies with your function code. For Node.js, Python, and Ruby functions, you can develop your function code in the Lambda console as long as you keep your deployment package under 3 MB.

A function can use up to 5 layers at a time. The total unzipped size of the function and all layers can’t exceed the unzipped deployment package size limit of 250 MB.

You can create layers, or use layers published by AWS and other AWS customers. Layers support resource-based policies for granting layer usage permissions to specific AWS accounts, AWS Organizations, or all accounts. Layers are extracted to the /opt directory in the function execution environment. Each runtime looks for libraries in a different location under /opt, depending on the language. Structure your layer so that function code can access libraries without additional configuration.
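
To make this concrete, a hedged boto3 sketch of publishing a layer and attaching it to a function (the bucket, key, and names are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish the dependencies (zipped separately from the function code) as a layer.
layer = lambda_client.publish_layer_version(
    LayerName="my-function-deps",
    Content={"S3Bucket": "my-artifacts-bucket", "S3Key": "layers/deps.zip"},
    CompatibleRuntimes=["python3.12"],
)

# Attach the layer; the function's own deployment package now only needs its code.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=[layer["LayerVersionArn"]],
)
```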

Hence, the correct answer is to upload the other dependencies of your function as a separate Lambda Layer instead.

Uploading the deployment package to S3 is incorrect. Although you can upload deployment packages larger than 50 MB via S3, the function code and its dependencies would still be bundled in a single package. This doesn’t meet the requirement of keeping the deployment package small and modularized. You have to use Lambda Layers instead.

Zipping the deployment package again to further compress the zip file is incorrect because doing this will not significantly make the ZIP file smaller.

Compressing the deployment package as a TAR file instead is incorrect. Although it may decrease the size of the deployment package, it is not enough to solve the issue: a compressed TAR file is not significantly smaller than a ZIP file.

References:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
https://docs.aws.amazon.com/lambda/latest/dg/limits.html

Note: This question was extracted from our AWS Certified Developer Associate Practice Exams.

Question 2

A sports technology company plans to build the latest kneepads version that can collect data from athletes wearing them. The product owner is looking to develop them with wearable medical sensors to ingest near-real-time data securely at scale and store it in durable storage. Furthermore, it should only collect non-confidential information from the streaming data and exclude those classified as sensitive data.

Which solution achieves these requirements with the least operational overhead?

  1. Using Amazon Kinesis Data Firehose, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. Schedule a separate job that invokes the Lambda function once the data is stored in Amazon S3.
  2. Using Amazon Kinesis Data Firehose, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. During the creation of the Kinesis Data Firehose delivery stream, enable record transformation and use the Lambda function.
  3. Using Amazon Kinesis Data Streams, ingest the streaming data, and use an Amazon EC2 instance for durable storage. Write an Amazon Kinesis Data Analytics application that removes sensitive data.
  4. Using Amazon Kinesis Data Streams, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. Schedule a separate job that invokes the Lambda function once the data is stored in Amazon S3.

Correct Answer: 2

Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. It is useful for rapidly moving data off data producers and then continuously processing the data, whether used to transform the data before emitting to a data store, run real-time metrics and analytics, or derive more complex data streams for further processing.

Within seconds, the data will be available for your Amazon Kinesis applications to read and process from the stream. The throughput of an Amazon Kinesis data stream is determined by the number of shards within it; you can scale the shard count (and therefore the throughput) using the UpdateShardCount API or the AWS Management Console.

Meanwhile, Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match your data’s throughput and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the storage used at the destination and increasing security. With the Firehose data transformation feature, you can now specify a Lambda function that can perform transformations directly on the stream when you create a delivery stream.
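
As a rough sketch of such a transformation function in Python (the field treated as sensitive is an assumption), Kinesis Data Firehose passes base64-encoded records to the Lambda function and expects each record back with a result status:

```python
import base64
import json

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))

        # Remove fields the application classifies as sensitive (illustrative field name).
        payload.pop("athlete_medical_id", None)

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(json.dumps(payload).encode("utf-8")).decode("utf-8"),
        })

    # Firehose delivers the transformed records to the configured destination (e.g., Amazon S3).
    return {"records": output}
```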

Among the choices, the solution that meets the requirements with the least operational overhead is the option that says: Using Amazon Kinesis Data Firehose, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. During the creation of the Kinesis Data Firehose delivery stream, enable record transformation and use the Lambda function.

The option that says: Using Amazon Kinesis Data Firehose, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. Schedule a separate job that invokes the Lambda function once the data is stored in Amazon S3 is incorrect. You do not need to schedule a different job for the AWS Lambda function since you can directly enable and set up a data transformation process in the Kinesis Data Firehose stream. With the Firehose data transformation feature, a Lambda function can be specified to perform transformations directly on the data stream.

The option that says: Using Amazon Kinesis Data Streams, ingest the streaming data, and use an Amazon EC2 instance for durable storage. Write an Amazon Kinesis Data Analytics application that removes sensitive data is incorrect. Writing a custom Kinesis Data Analytics application entails additional effort. In addition, Amazon EC2 does not provide durable storage.

The option that says: Using Amazon Kinesis Data Streams, ingest the streaming data, and use Amazon S3 for durable storage. Write an AWS Lambda function that removes sensitive data. Schedule a separate job that invokes the Lambda function once the data is stored in Amazon S3 is incorrect. Amazon Kinesis Data Streams does not support direct transfer to Amazon S3 without using another service. The data transformation is also not done in near real-time. A better solution is to use Amazon Kinesis Firehose with its data transformation feature enabled.

References:

https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html
https://aws.amazon.com/blogs/big-data/persist-streaming-data-to-amazon-s3-using-amazon-kinesis-firehose-and-aws-lambda/
https://aws.amazon.com/blogs/compute/amazon-kinesis-firehose-data-transformation-with-aws-lambda/

Note: This question was extracted from our AWS Certified Data Analytics Specialty Practice Exams.

For more AWS practice exam questions with detailed explanations, check out the Tutorials Dojo Portal: Tutorials Dojo AWS Practice Tests

AWS Lambda Cheat Sheet References:

https://docs.aws.amazon.com/lambda/latest/dg
https://aws.amazon.com/lambda/features/
https://aws.amazon.com/lambda/pricing/
https://aws.amazon.com/lambda/faqs/


Written by: Jon Bonso

Jon Bonso is the co-founder of Tutorials Dojo, an EdTech startup and an AWS Digital Training Partner that provides high-quality educational materials in the cloud computing space. He graduated from Mapúa Institute of Technology in 2007 with a bachelor's degree in Information Technology. Jon holds 10 AWS Certifications and has been an active AWS Community Builder since 2020.

