Microsoft is expanding its generative AI tools for businesses, including Azure AI Content Safety, to help ensure secure deployment. With more than 350 people working on responsible AI, the company has been a vocal advocate for AI governance and standards. The tool can be customized for different use cases, helping companies such as Shell build generative AI platforms without compromising security. Azure AI Content Safety also plays a critical role in content moderation and governance, enabling broader adoption of AI that consumers can trust. Microsoft continues to improve the technology through research and customer feedback to stay ahead of emerging threats in online spaces. Boyd emphasized the importance of trust and security in AI innovation, reflecting on Microsoft's long-standing commitment to protecting its customers.
Article Source
https://news.microsoft.com/source/features/ai/azure-ai-content-safety-in-azure-ai-platform/