Last updated on September 9, 2024
Amazon Bedrock Cheat Sheet
Amazon Bedrock enables you to build and scale applications powered by generative AI. These applications can produce text, images, audio, and synthetic data in response to prompts.
Key Features
- Model Choice: Amazon Bedrock provides access to a variety of high-performing foundation models from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. You can easily experiment with and evaluate these models for your use case.
- Customization: You can privately customize the models with your data using techniques like fine-tuning and Retrieval Augmented Generation (RAG).
- Agents: You can build agents that execute tasks using your enterprise systems and data sources.
- Serverless: Since Amazon Bedrock is serverless, you don’t have to manage any infrastructure.
- Integration: You can securely integrate and deploy generative AI capabilities into your applications using the AWS services you already know (see the sketch below).
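The snippet below is a minimal sketch, not an official AWS example, of what that integration can look like in application code: it calls a foundation model through the Bedrock Converse API using the AWS SDK for Python (boto3). The Region, model ID, and prompt are assumptions; substitute a model you have been granted access to.

```python
# Minimal sketch: call a Bedrock foundation model with the Converse API (boto3).
# The Region and model ID are assumptions -- use a model you have access to.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize Amazon Bedrock in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)

# The Converse API returns the generated message under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```

Because the Converse API uses a single request and response shape across models, trying a different foundation model is usually just a matter of changing the modelId.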
Additional Capabilities
- Text, Image, and Chat playgrounds: Amazon Bedrock provides playgrounds for text, chat, and image models. In these playgrounds, you can experiment with models before deciding to use them in an application.
- Examples library: Amazon Bedrock provides a code example library that includes AWS SDK examples available in the AWS Doc SDK Examples GitHub repo.
- Amazon Bedrock API: Amazon Bedrock provides a detailed API that includes actions and their parameters. It can be accessed using various AWS SDKs.
- Embeddings: Amazon Bedrock provides text and image embedding models that turn unstructured content, such as documents, paragraphs, and sentences, into meaningful vector representations (see the embeddings sketch after this list).
- Agents for Amazon Bedrock: AI-powered assistants built on the foundation models available in Amazon Bedrock. They can carry out multi-step tasks across your organization's systems and data sources.
- Knowledge Bases for Amazon Bedrock: Knowledge bases let you aggregate data sources into a repository of information. With knowledge bases, you can easily build applications that take advantage of Retrieval Augmented Generation (RAG), a technique in which information retrieved from your data sources augments the generation of model responses (see the RAG sketch after this list).
- Provisioned Throughput: When you configure Provisioned Throughput for a model in Amazon Bedrock, you receive a level of throughput at a fixed cost. You can use Provisioned Throughput with Amazon and third-party base models, as well as with customized models.
- Fine-tuning and Continued Pre-training: Amazon Bedrock lets you customize foundation models in a secure and managed environment. Fine-tuning adapts a model using your own labeled data, while Continued Pre-training lets you further train models such as Amazon Titan Text Express and Amazon Titan Text Lite using your own unlabeled data.
- Model invocation logging: An optional feature that collects invocation logs, model input data, and model output data for all model invocations in your AWS account (see the logging sketch after this list).
- Model versioning: Amazon Bedrock supports model versioning, allowing you to manage and use different versions of a model.
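The sketch below, which assumes an Amazon Titan Text Embeddings model and its request format, shows how an embedding can be generated through the InvokeModel API; treat the model ID and field names as assumptions and check the model's documentation for the exact schema.

```python
# Minimal sketch: generate a text embedding with an Amazon Titan embeddings
# model via InvokeModel. Model ID and request/response fields are assumptions.
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",  # assumed model ID
    body=json.dumps({"inputText": "Amazon Bedrock provides text and image embeddings."}),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
embedding = result["embedding"]        # the vector representation of the input text
print(len(embedding), embedding[:5])   # dimension and the first few values
```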
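For knowledge bases, a RAG query can be issued in a single call with the RetrieveAndGenerate API, as in the minimal sketch below; the knowledge base ID and model ARN are placeholders, and the call assumes a knowledge base has already been created and synced.

```python
# Minimal sketch: RAG query against a Bedrock knowledge base using
# RetrieveAndGenerate. Knowledge base ID and model ARN are placeholders.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/"
                        "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model
        },
    },
)

# The answer is generated from the retrieved documents; citations are also returned.
print(response["output"]["text"])
```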
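Model invocation logging is enabled at the account level. The sketch below assumes the PutModelInvocationLoggingConfiguration API and uses placeholder resource names, so confirm the exact configuration keys and the required IAM permissions in the Bedrock documentation.

```python
# Minimal sketch: turn on model invocation logging for the account.
# Log group, bucket, and role ARN are placeholders; the role must allow
# Bedrock to write to the chosen destinations.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocation-logs",  # placeholder
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole",  # placeholder
        },
        "s3Config": {
            "bucketName": "my-bedrock-invocation-logs",  # placeholder
            "keyPrefix": "invocations/",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)
```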
Pricing
- On-Demand: You pay only for the resources you use, with charges based on the number of tokens processed or images generated. Text-generation models are billed per input and output token, embedding models are billed per input token, and image-generation models are billed per generated image. Cross-region inference carries no additional fee; the cost is based on the source Region of your request. A worked estimate appears after this list.
- Batch: Lets you submit multiple prompts at once and receive the responses in bulk, with the output stored in your Amazon S3 bucket. Batch inference is offered at a 50% discount compared to On-Demand pricing for select foundation models, which makes it cost-effective for large-scale predictions.
- Provisioned Throughput: Designed for high-volume, consistent inference workloads that need guaranteed throughput. You purchase model units, each of which delivers a defined throughput measured in tokens processed per minute, with a choice of 1-month or 6-month commitment terms, and you are billed an hourly rate per model unit. Custom models can only be served through Provisioned Throughput.
- Model Customization: Charges are based on the number of tokens processed during training (the tokens in your training data multiplied by the number of epochs) plus monthly model storage. Inference on customized models is charged under the Provisioned Throughput plan; the first model unit is available without a commitment, while additional units require a commitment term.
- Model Evaluation: Pricing includes charges for model inference in automatic evaluations, with no extra cost for algorithmic scores. For human-based evaluations, you pay $0.21 per completed human task, where each task evaluates a single prompt and its responses. AWS-managed evaluations have customized pricing based on your specific needs.
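As a rough illustration of how On-Demand token pricing adds up, the sketch below estimates a monthly bill; the per-1,000-token prices are hypothetical placeholders, not actual Bedrock rates, so use the pricing page for the model you run.

```python
# Back-of-the-envelope On-Demand estimate. Prices are hypothetical placeholders.
PRICE_PER_1K_INPUT_TOKENS = 0.0003   # USD, hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # USD, hypothetical

input_tokens = 2_000_000   # tokens sent to the model in a month
output_tokens = 500_000    # tokens generated by the model in a month

monthly_cost = (
    input_tokens / 1_000 * PRICE_PER_1K_INPUT_TOKENS
    + output_tokens / 1_000 * PRICE_PER_1K_OUTPUT_TOKENS
)
print(f"Estimated monthly On-Demand cost: ${monthly_cost:.2f}")  # $1.35 with these numbers
```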
Amazon Bedrock Cheat Sheet References:
https://aws.amazon.com/bedrock/
https://aws.amazon.com/bedrock/pricing/