Security is one of the pillars of the AWS Well-Architected Framework. When running workloads in the cloud, thinking about privacy, access limits, regulatory compliance, and data protection is foundational, and this includes Amazon Bedrock.
Among the several AI announcements in AWS CEO Adam Selipsky's keynote at AWS re:Invent 2023 was Guardrails for Amazon Bedrock. As AI technology matures, it makes sense to also rethink how its usage is governed by security safeguards. Guardrails for Amazon Bedrock applies security policies across foundation models to meet application requirements and implement responsible AI policies.
Responsible AI in Amazon Bedrock
Generative AI is one of the technologies that has seen accelerated growth in the past few years. More and more companies are utilizing its capabilities for various use cases and are driving research and innovation. This adoption also increases the potential for misuse.
Responsible AI is the practice of pursuing the opportunities of AI applications while maintaining accountability and governance over their ethical use. Organizations face growing responsibility to define guidelines and comply with regulations.
Amazon Bedrock was built with security in mind. It keeps data secure and private with several commitments: none of a customer's data is used to train the underlying models, all data is encrypted both in transit and at rest, data can be kept within your VPC, and the service supports compliance standards including GDPR and HIPAA.
Guardrails are high-level rules that provide governance for your AWS environment and have been available for some time in other AWS services, including AWS Control Tower and Amazon SageMaker. To further support security and responsible AI, Guardrails for Amazon Bedrock was recently added. It limits the information that large language models (LLMs), including fine-tuned ones, can return.
Creating a Guardrail
The Guardrails feature is currently in preview and may not yet be available in all AWS accounts. Once available, it appears under a new Safeguards section in the management console.
Creating a guardrail opens a wizard that walks you through configuring the safeguards for Amazon Bedrock.
The key policy types that can be defined in Guardrails for Amazon Bedrock are listed below:
Denied Topics
You can restrict topics by adding them to a denied list. Each prohibited topic is defined with a natural-language description, and you can provide example phrases (up to 5 per topic) to help the guardrail classify it. When a prompt or a response matches a denied topic, it is blocked and the predefined message is returned instead. A sketch of what this policy could look like programmatically is shown below.
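While the preview is configured through the console wizard, the same policy can be expressed as a configuration object. The field names below are assumptions modeled on the shape of the Bedrock create-guardrail API rather than documented preview settings:

    # Hypothetical denied-topic policy; field names are assumed.
    topic_policy_config = {
        "topicsConfig": [
            {
                "name": "InvestmentAdvice",
                # Natural-language definition of the prohibited topic
                "definition": "Guidance or recommendations about specific "
                              "stocks, funds, or other financial products.",
                # Up to 5 example phrases to help classify the topic
                "examples": [
                    "Which stocks should I buy this year?",
                    "Is now a good time to move my savings into bonds?",
                ],
                "type": "DENY",
            }
        ]
    }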
Implement Content Filters
Content filters can be enabled for both prompts and responses. For each category (hate, insults, sexual, and violence) you can set a filter strength of none, low, medium, or high, as in the sketch below.
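As a rough sketch, again assuming the same API-style field names as above, a content filter policy could look like this:

    # Hypothetical content filter policy; field names are assumed.
    content_filter_config = {
        "filtersConfig": [
            # One entry per category; strengths of NONE, LOW, MEDIUM, or HIGH
            # can be set separately for prompts (input) and responses (output).
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    }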
Define Blocked Messaging
Guardrails for Amazon Bedrock also supports predefined messaging for blocked prompts and responses, letting you return canned messages specific to your application; see the sketch below.
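In configuration terms, these are simply two application-specific strings, one for blocked inputs and one for blocked outputs (names assumed, as before):

    # Hypothetical canned messages returned when the guardrail blocks content.
    blocked_input_messaging = "Sorry, this assistant cannot help with that request."
    blocked_outputs_messaging = "Sorry, this assistant cannot provide that information."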
Redact PII for User Privacy (coming soon)
Once available, this feature will let Guardrails for Amazon Bedrock detect personally identifiable information (PII) in prompts and responses, rejecting offending inputs or redacting the PII from responses. This will let companies govern their AI usage by limiting data exposure. The end-to-end sketch below includes an assumed configuration for it.
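Putting the pieces together, here is a minimal end-to-end sketch in Python using boto3, reusing the configuration fragments defined above. It assumes a create_guardrail operation on the bedrock client, a sensitiveInformationPolicyConfig field for the PII feature, and guardrailIdentifier/guardrailVersion parameters on the runtime invoke_model call; these names mirror the API shape rather than anything guaranteed for the preview, so check the current SDK documentation before relying on them:

    import json

    import boto3

    bedrock = boto3.client("bedrock")

    # Create the guardrail from the policy fragments defined above.
    created = bedrock.create_guardrail(
        name="support-bot-guardrail",
        description="Safeguards for a customer support assistant.",
        topicPolicyConfig=topic_policy_config,
        contentPolicyConfig=content_filter_config,
        # Assumed shape for PII handling ("coming soon" at the time of
        # writing): BLOCK rejects inputs, ANONYMIZE redacts responses.
        sensitiveInformationPolicyConfig={
            "piiEntitiesConfig": [
                {"type": "EMAIL", "action": "ANONYMIZE"},
                {"type": "PHONE", "action": "BLOCK"},
            ]
        },
        blockedInputMessaging=blocked_input_messaging,
        blockedOutputsMessaging=blocked_outputs_messaging,
    )

    # Apply the guardrail when invoking a foundation model.
    runtime = boto3.client("bedrock-runtime")
    result = runtime.invoke_model(
        modelId="anthropic.claude-v2",
        guardrailIdentifier=created["guardrailId"],
        guardrailVersion="DRAFT",  # the working draft, until a version is published
        body=json.dumps({
            "prompt": "\n\nHuman: Which stocks should I buy?\n\nAssistant:",
            "max_tokens_to_sample": 300,
        }),
    )
    print(json.loads(result["body"].read()))

If a prompt trips one of the configured policies, the call returns the canned blocked message instead of a model completion.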
Final Notes
The work being done to secure AI technology becomes more important as the technology continues to develop. Keep the foundations of system architecture in mind, with security as one of the pillars of every project. Guardrails for Amazon Bedrock is currently in limited preview and may not be available in all AWS accounts yet.
References:
AWS re:Invent 2023 – CEO Keynote with Adam Selipsky
AWS Responsible AI Policy: https://aws.amazon.com/machine-learning/responsible-ai/policy/