Microsoft strengthens defenses in Azure AI

Spread the love



Microsoft has introduced new capabilities in Azure AI Studio to enhance the trustworthiness and resilience of generative artificial intelligence (GenAI) applications against malicious manipulation and other threats. The platform enables organizations to create custom AI assistants, copilots, bots, and more based on their data sources. Other tech giants like Amazon and Google have also launched similar offerings in response to the growing interest in AI technologies.

The five new capabilities in Azure AI Studio are Prompt Shields, Grounding Detection, Safety System Messages, Safety Assessments, and Risk and Security Monitoring. Prompt Shields help developers distinguish between valid and potentially untrustworthy inputs, while Grounding Detection detects potentially disconnected GenAI outputs. The Safety System Messages feature allows developers to define the capabilities and limitations of their models, and the Risk and Security Monitoring feature helps detect problematic model inputs.

These features address challenges related to rapid injection attacks, malicious model manipulation, and model hallucinations in large language models (LLMs). Rapid engineering attacks involve using harmless cues to direct AI models to generate harmful responses, while model hallucinations lead to the generation of plausible but false results. By incorporating these new capabilities, developers can mitigate the risks associated with AI model vulnerabilities and ensure the reliability and integrity of their applications.

Microsoft’s commitment to enhancing the security and trustworthiness of AI technologies reflects the industry’s growing reliance on AI across various sectors. As organizations increasingly leverage AI capabilities, it becomes crucial to prioritize security measures that protect against emerging threats and ensure the responsible use of AI in applications. With the continuous advancements in AI technology, developers can harness the benefits of generative AI while mitigating potential risks and vulnerabilities in their models.

Article Source
https://www.darkreading.com/application-security/microsoft-adds-tools-for-protecting-against-prompt-injection-other-threats-in-azure-ai