Amazon Bedrock Prompt Management Cheat Sheet
- Amazon Bedrock Prompt Management is a centralized service for creating, testing, versioning, and deploying structured prompts for generative AI applications, keeping prompt engineering separate from application code.
- It offers key capabilities to streamline the GenAI workflow:
- Structured prompts: Define system instructions, tools, and user messages in a standardized format.
- Converse and InvokeModel API integration: Invoke cataloged prompts directly in your code using their Amazon Resource Names (ARNs), eliminating the need to hardcode prompts in source files.
Core Concepts
- Prompt:
- The text or instruction provided to a model to control its output.
- Variable:
- A dynamic placeholder (e.g., {{variable_name}}) in a prompt, filled with custom values during testing or runtime.
- Prompt Variant:
- A different version of a prompt (such as using another model or instruction), used for side-by-side testing and optimization.
- Prompt Builder:
- A visual tool in the Bedrock console for creating, editing, and testing prompts and their variants, with no manual JSON editing required.
- Prompt management is supported in the following AWS Regions:
- ap-northeast-1, ap-northeast-2, ap-northeast-3, ap-south-1, ap-south-2, ap-southeast-1, ap-southeast-2, ca-central-1, eu-central-1, eu-central-2, eu-north-1, eu-south-1, eu-south-2, eu-west-1, eu-west-2, eu-west-3, sa-east-1, us-east-1, us-east-2, us-gov-east-1, us-gov-west-1, us-west-2
- Prompt management is supported for all text models available through the Converse API.
Prerequisites
- Permissions: Your IAM user or role must have the necessary permissions to access Amazon Bedrock Prompt Management (e.g., bedrock:CreatePrompt, bedrock:GetPrompt).
- Model Access: You must enable access to the foundation models you intend to use. This is configured on the Model access page of the Amazon Bedrock console.
Creating a Prompt
- Prompt Message:
- The textual input that serves as the instruction for the Foundation Model (FM) to generate output.
- Variables:
- Dynamic placeholders defined using double curly braces (e.g., {{customer_name}}) that are populated when the prompt is invoked.
- Model Selection:
- You can associate a prompt with a specific model to configure inference parameters, or leave it unspecified for use with agents.
- Inference Parameters:
- maxTokens: The hard limit on the number of tokens the model generates in its response.
- stopSequences: A list of character sequences that, when generated, force the model to immediately stop generating further text.
- temperature: Controls the randomness ("creativity") of the output. Higher values increase the likelihood of selecting lower-probability tokens.
- topP: Nucleus sampling; limits the model's token choices to the top percentage of most likely candidates.
- Additional Fields: JSON objects used to specify model-specific parameters not covered by the base list.
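For orientation, the sketch below shows one way these parameters can be expressed as a prompt variant's inference configuration in the bedrock-agent API; the values are illustrative, not recommendations.

```python
# Illustrative inference configuration for a prompt variant (values are examples only).
inference_configuration = {
    "text": {
        "maxTokens": 512,                  # hard cap on generated tokens
        "stopSequences": ["\n\nHuman:"],   # stop generating when this sequence appears
        "temperature": 0.7,                # higher = more random / creative output
        "topP": 0.9,                       # nucleus sampling cutoff
    }
}
```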
Console Steps
- Navigate to Prompt management in the Amazon Bedrock console.
- Select Create prompt.
- Enter a name and optional description, then choose Create.
- The prompt is created and ready for editing in the Prompt Builder.
API Steps
- Send a CreatePrompt request using the build-time endpoint.
- Include the required fields such as name and optional fields like description or variants.
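A minimal boto3 sketch of CreatePrompt using the bedrock-agent (build-time) client; the prompt name, region, model ID, and variable are placeholders to adapt for your account.

```python
import boto3

# Build-time (control plane) client for prompt management.
bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

response = bedrock_agent.create_prompt(
    name="customer-greeting",                      # required
    description="Greets a customer by name.",      # optional
    defaultVariant="variantOne",
    variants=[
        {
            "name": "variantOne",
            "templateType": "TEXT",
            "templateConfiguration": {
                "text": {
                    "text": "Write a short greeting for {{customer_name}}.",
                    "inputVariables": [{"name": "customer_name"}],
                }
            },
            # Optional: associate a model and inference parameters with the variant.
            "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
            "inferenceConfiguration": {"text": {"temperature": 0.5, "maxTokens": 256}},
        }
    ],
)

print(response["id"], response["arn"])  # prompt ID and ARN of the DRAFT
```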
Viewing Prompt Information
- Access details regarding your prompt’s metadata, drafts, and version history.
Console Steps
- Open the Amazon Bedrock console and select Prompt management.
- Choose a prompt from the Prompts list.
- Review the Overview (creation/update dates), Prompt draft (current configuration), and Prompt versions (deployment history) sections.
API Steps
- Get Details: Send a GetPrompt request specifying the prompt's ARN or ID as the promptIdentifier. To see a specific version, populate the promptVersion field.
- List All: Send a ListPrompts request. Use maxResults to limit the return count and nextToken to paginate through results.
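A boto3 sketch of GetPrompt and ListPrompts; the prompt identifier and region are placeholders.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Get details for a single prompt (the DRAFT by default; set promptVersion for a snapshot).
prompt = bedrock_agent.get_prompt(
    promptIdentifier="PROMPT12345",   # prompt ID or ARN (placeholder)
    # promptVersion="1",
)
print(prompt["name"], prompt.get("version"))

# List prompts, paginating with nextToken.
kwargs = {"maxResults": 10}
while True:
    page = bedrock_agent.list_prompts(**kwargs)
    for summary in page["promptSummaries"]:
        print(summary["id"], summary["name"])
    if "nextToken" not in page:
        break
    kwargs["nextToken"] = page["nextToken"]
```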
Modifying a Prompt
- Update your prompt’s metadata, content, or inference configurations.
Console Steps
- In Prompt management, select the specific prompt you wish to edit.
- Edit Metadata: Choose Edit in the Overview section to change the Name or Description, then Save.
- Edit Content: Choose Edit in prompt builder to modify the message, variables, or model parameters.
API Steps
- Send an UpdatePrompt request to the build-time endpoint.
- Note: You must include all fields you wish to retain, as well as the fields you are changing.
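A boto3 sketch of UpdatePrompt; because the request replaces the stored draft, re-send every field you want to keep (all names and values below are placeholders).

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# UpdatePrompt overwrites the draft: include unchanged fields (e.g., variants) as well.
bedrock_agent.update_prompt(
    promptIdentifier="PROMPT12345",          # prompt ID or ARN (placeholder)
    name="customer-greeting",                # required even if unchanged
    description="Greets a customer by name, with a friendlier tone.",
    defaultVariant="variantOne",
    variants=[
        {
            "name": "variantOne",
            "templateType": "TEXT",
            "templateConfiguration": {
                "text": {
                    "text": "Write a warm, friendly greeting for {{customer_name}}.",
                    "inputVariables": [{"name": "customer_name"}],
                }
            },
        }
    ],
)
```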
Testing a Prompt
- Validate the behavior of your prompt using real-time inference before creating a version.
Console Steps
- Select a prompt and choose Edit in Prompt builder (for drafts) or select a specific Version.
- Configure Variables: If your prompt uses {{variables}}, enter temporary Test values in the Test variables pane (these values are not saved).
- Run Inference: Choose Run in the Test window to generate a response.
- Iterate: Modify configurations and re-run as needed. If satisfied, choose Create version to snapshot the prompt for production.
API Steps
- Run Inference: Send a request to InvokeModel, Converse, or ConverseStream using the prompt's ARN as the modelId.
- Restriction: You cannot override inferenceConfig, system, or toolConfig during this call.
- Restriction: Messages included in the call are appended after the prompt's defined messages.
- Test in Flow: Create a flow with a PromptNode pointing to the prompt ARN, then use InvokeFlow.
- Test with Agent: Use InvokeAgent and pass the prompt text into the inputText field.
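A minimal sketch of running a managed prompt through the Converse API, assuming the prompt defines a {{customer_name}} variable; the ARN, account ID, and variable name are placeholders.

```python
import boto3

# Runtime (data plane) client for inference.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Prompt ARN (placeholder); append ":1" to target a specific version instead of the draft.
prompt_arn = "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345"

response = bedrock_runtime.converse(
    modelId=prompt_arn,   # pass the prompt ARN in place of a model ID
    promptVariables={     # fill in values for the prompt's {{variables}}
        "customer_name": {"text": "Jane"}
    },
)
print(response["output"]["message"]["content"][0]["text"])
```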
Optimizing a Prompt
- Automatically rewrite prompts to improve performance and output quality for a specific model.
Console Steps
- In the Prompt builder or playground, write your initial prompt and select a model.
- Select the Optimize (wand icon) button.
- Compare: View the original and optimized prompts side-by-side.
- Select: Choose Replace original prompt (or “Use optimized prompt”) to accept the changes, or exit to keep your original.
API Steps
- Send an OptimizePrompt request to the runtime endpoint. Provide the input prompt object and the targetModelId. The response will stream an analyzePromptEvent followed by an optimizedPromptEvent containing the rewritten text.
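A rough boto3 sketch of OptimizePrompt via the bedrock-agent-runtime client; the event names follow the streamed response described above, but treat the exact field paths, prompt text, and model ID as placeholders to verify against the current SDK.

```python
import boto3

# Runtime endpoint that hosts OptimizePrompt.
agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.optimize_prompt(
    input={"textPrompt": {"text": "Summarize the following support ticket: {{ticket_text}}"}},
    targetModelId="anthropic.claude-3-haiku-20240307-v1:0",
)

# The result is streamed: an analyzePromptEvent followed by an optimizedPromptEvent.
for event in response["optimizedPrompt"]:
    if "analyzePromptEvent" in event:
        print("Analysis:", event["analyzePromptEvent"]["message"])
    elif "optimizedPromptEvent" in event:
        optimized = event["optimizedPromptEvent"]["optimizedPrompt"]
        print("Optimized:", optimized["textPrompt"]["text"])
```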
Application Deployment using Versions
- To deploy a prompt to production, you must create a version.
- A version is an immutable snapshot of your prompt taken at a specific point in time, allowing you to switch between configurations safely.
- Create a version:
- Save a snapshot of your current draft to stabilize it for use in applications (see the boto3 sketch after this list).
- View information about versions:
- Access the history of all created snapshots to track changes over time.
- Compare versions:
- Analyze the differences between two versions (or a version and a draft) to validate changes before deployment.
- Delete a version:
- Remove specific snapshots that are no longer needed (ensure they are not in production use).
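As a sketch, snapshotting the current draft as an immutable version with boto3; the prompt identifier is a placeholder.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Snapshot the current DRAFT as a numbered, immutable version.
version = bedrock_agent.create_prompt_version(
    promptIdentifier="PROMPT12345",            # prompt ID or ARN (placeholder)
    description="Stable version for production rollout.",
)
print(version["version"], version["arn"])      # e.g., "1" and the versioned ARN
```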
Deleting a Prompt
- Permanently remove a prompt and all its associated versions.
Console Steps
- In Prompt management, select the prompt you want to remove.
- Choose Delete.
- Type confirm in the warning dialog (acknowledging that runtime errors may occur if resources still use this prompt) and choose Delete.
API Steps
- Send a DeletePrompt request specifying the prompt ARN/ID. To delete only a specific version, populate the promptVersion field.
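A boto3 sketch of DeletePrompt for both cases; the prompt identifier and version are placeholders.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Delete only a specific version:
bedrock_agent.delete_prompt(promptIdentifier="PROMPT12345", promptVersion="1")

# Delete the prompt and all of its versions:
bedrock_agent.delete_prompt(promptIdentifier="PROMPT12345")
```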
Security
- IAM Permissions:
- Access to create, edit, and invoke prompts is controlled via AWS Identity and Access Management (IAM) policies.
- Encryption:
- Prompts and their versions are encrypted at rest using AWS KMS keys.
- Guardrails Integration:
- You can attach Amazon Bedrock Guardrails to your prompts to enforce safety policies, content filtering, and redaction of sensitive information during testing and inference.
Best Practices
- Use Variables:
- Always use {{variables}} for dynamic input. This helps reduce prompt injection risk and keeps prompts reusable across different contexts.
- Version Control:
- Never use the DRAFT version in production. Always create immutable numbered versions (e.g., v1, v2) to ensure stability.
- Model Specificity:
- Prompts are often model-specific. If switching from Claude to Llama, create a new variant tailored to the new model’s specific prompt engineering patterns.
Pricing
- Management:
- There is generally no additional charge for storing and organizing prompts in the management library.
- Testing & Optimization:
- You are charged standard On-Demand model inference fees (based on input/output tokens) whenever you run a prompt in the console for testing, comparison, or optimization.
- Prompt Flows:
- If you use prompts within Amazon Bedrock Flows, you are charged per node transition.
Amazon Bedrock Prompt Management References:
https://aws.amazon.com/bedrock/prompt-management/
https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-management.html