
Amazon introduces new measures to reduce fake AI content

Tech Brew is a resource that aims to keep business leaders informed about the latest technologies, automation advancements, policy changes, and more to help them make informed decisions. Recently, Amazon Web Services (AWS) introduced a new tool called “contextual grounding checking” to address the issue of generative AI chatbots providing unreliable answers. The tool requires large language models (LLMs) to back up their output with reference text, increasing accuracy and reducing errors in tasks like retrieval-augmented generation (RAG) and summarization by up to 75%.

In addition to this new tool, AWS’s Bedrock generative AI platform already has customizable guardrails in place to filter out objectionable content such as offensive language, personally identifiable information, and irrelevant topics. These guardrails, initially made available to all users in April, will now also be offered as a standalone API. The goal of these protective measures is to boost confidence in the reliability of AI-generated output, especially for companies operating in highly regulated industries like banking and healthcare.
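As a rough sketch of what the standalone API could look like in practice, the snippet below uses the apply_guardrail operation from the AWS SDK for Python (boto3) on the Bedrock runtime client to screen a piece of model output independently of any model invocation. The region, guardrail ARN, version, and sample text are placeholder values, not details taken from the article.

import boto3

# Runtime client; the standalone guardrail call does not require a model invocation.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder guardrail identifier and version; substitute values from your own account.
response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="arn:aws:bedrock:us-east-1:123456789012:guardrail/example-id",
    guardrailVersion="1",
    source="OUTPUT",  # screen model output; use "INPUT" to screen user prompts instead
    content=[
        {"text": {"text": "Draft model answer to be screened before it reaches the user."}}
    ],
)

# "GUARDRAIL_INTERVENED" means the content was blocked or masked; "NONE" means it passed.
print(response["action"])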

Matt Wood, vice president of AI products at AWS, emphasized the importance of establishing guardrails to prevent incorrect or misleading AI-generated answers, particularly as businesses increasingly rely on LLM tools for applications such as customer service and summarization. The contextual grounding check lets users set confidence thresholds for the relevance and accuracy of the information provided by AI, giving organizations the flexibility to tailor their level of scrutiny to their specific needs.
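To illustrate how such thresholds might be configured, the following boto3 sketch creates a guardrail with a contextual grounding policy that sets separate grounding and relevance thresholds. The guardrail name, threshold values, and blocked-content messages are illustrative assumptions rather than settings described in the article.

import boto3

# Control-plane client used to create and manage guardrail configurations.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Illustrative guardrail with contextual grounding filters. Thresholds range from
# 0 to 1; higher values demand stronger support from the reference text before an
# answer is allowed through.
guardrail = bedrock.create_guardrail(
    name="example-grounding-guardrail",
    description="Blocks answers that are not grounded in the supplied source documents.",
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.85},  # answer must be supported by the source text
            {"type": "RELEVANCE", "threshold": 0.5},   # answer must actually address the query
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide a reliable answer to that.",
)

print(guardrail["guardrailId"], guardrail["version"])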

Diya Wynn, head of AI at AWS, highlighted the importance of building trust in AI accuracy to encourage widespread adoption of AI tools. Whether it’s in a classroom setting where inappropriate content needs to be filtered out or in a financial institution where sensitive investment information must be protected, organizations must have confidence that the information provided by AI is reliable and accurate. By offering tools like contextual grounding checking, AWS aims to establish itself as a safe and dependable platform for companies looking to leverage AI technologies effectively.

Article Source
https://www.emergingtechbrew.com/stories/2024/07/11/amazon-aws-new-safeguards-ai
