Amazon Makes Bedrock’s Guardrails Feature Even Stronger

In April 2024, Amazon introduced Guardrails for Amazon Bedrock, its platform for building generative AI applications, following White House recommendations for responsible AI use. The feature lets users block harmful content and assess model safety, adding customizable safeguards on top of the native protections built into foundation models. Amazon claims it can block up to 85% more harmful content and filter out over 75% of hallucinated responses for certain workloads.

During the AWS Summit in New York on July 10th, Amazon Web Services (AWS) introduced the ApplyGuardrail API, which lets customers apply guardrails to their GenAI applications across a range of models, including self-managed and third-party models. Because the API evaluates text independently of model invocation, user inputs and model responses can be checked at different stages of an application flow.
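
As a rough illustration, the call below sketches how the ApplyGuardrail API might be invoked from Python with boto3 to screen a user prompt before it reaches any model. The region, guardrail identifier, and version are placeholders, and the exact response fields should be checked against the AWS documentation.

```python
import boto3

# Bedrock runtime client; region and guardrail ID/version below are placeholders.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Evaluate a user prompt against an existing guardrail before it ever
# reaches a model -- the model itself can be self-managed or third party.
response = client.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",   # placeholder
    guardrailVersion="1",                      # placeholder
    source="INPUT",                            # use "OUTPUT" to check a model response
    content=[{"text": {"text": "How do I reset another user's password?"}}],
)

if response["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail blocked or masked the content; return its canned message instead.
    print(response["outputs"][0]["text"])
else:
    print("Input passed the guardrail; forward it to the model.")
```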

AWS also announced new Contextual Grounding capabilities, aimed at detecting AI hallucinations before responses reach the user. The check adds a safeguard that filters out over 75% of hallucinated responses, improving the reliability of GenAI applications across multiple use cases. It relies on two filtering parameters that can be tuned per use case: a grounding threshold, which scores whether a response is factually consistent with the supplied source material, and a relevance threshold, which scores whether it actually addresses the user's query.
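
A minimal sketch of how those two thresholds might be configured when creating a guardrail with boto3 is shown below; the guardrail name, threshold values, and blocked-response messages are illustrative assumptions, not values from the announcement.

```python
import boto3

# Control-plane Bedrock client; region, name, thresholds, and messaging are illustrative.
bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="grounded-assistant-guardrail",
    description="Blocks responses that are ungrounded or off-topic.",
    # Contextual grounding filters: GROUNDING scores factual consistency with the
    # provided source content, RELEVANCE scores alignment with the user's query.
    # Responses scoring below a threshold are filtered out.
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.75},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide a reliable answer to that.",
)
print(guardrail["guardrailId"], guardrail["version"])
```

Raising either threshold makes the filter stricter, trading a higher rate of blocked responses for fewer hallucinations slipping through.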

Andrés Hevia Vega, Deputy Director of Architecture at MAPFRE, praised Guardrails and Amazon Bedrock for strengthening security protocols and streamlining API selection, saying the tools have proven invaluable for efficient, innovative, secure, and responsible AI development.

Overall, Amazon’s introduction of Contextual Grounding and the ApplyGuardrail API showcases its commitment to safe and responsible GenAI development and deployment. As an industry leader, Amazon may inspire other technology companies to adopt similar responsible AI frameworks.

Article Source
https://www.datanami.com/2024/07/11/amazon-expands-guardrails-feature-for-bedrock/