Microsoft has announced a series of new product capabilities aimed at strengthening the security, safety, and privacy of AI systems. The announcement, made on September 24 by Executive Vice President and Chief Marketing Officer Takeshi Numoto, builds on the company's existing commitments to Trustworthy AI.

Key security enhancements include evaluations in Azure AI Studio to support proactive risk assessments, along with transparency features in Microsoft 365 Copilot that help admins and users understand how web search enhances Copilot responses.
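For developers, such a proactive risk assessment typically runs against a query/response pair. The sketch below is illustrative only: the azure-ai-evaluation package, the ContentSafetyEvaluator class, and the project fields shown are assumptions based on Microsoft's evaluation SDK, not details confirmed in this announcement.

```python
# Minimal sketch: scoring a model response for safety risks before shipping it.
# Package name, class name, and project fields are assumptions (see note above).
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ContentSafetyEvaluator

azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<ai-studio-project>",
}

# The evaluator asks an Azure AI Studio project to rate harm categories
# (violence, sexual, self-harm, hate/unfairness) for the given pair.
evaluator = ContentSafetyEvaluator(
    credential=DefaultAzureCredential(),
    azure_ai_project=azure_ai_project,
)

result = evaluator(
    query="How do I reset my account password?",
    response="You can reset it from the sign-in page under 'Forgot password'.",
)
print(result)  # per-category severity labels and scores
```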

In the realm of AI safety, Microsoft introduced a Correction capability in Azure AI Content Safety to address hallucinations in real time, Embedded Content Safety for on-device scenarios, new evaluations in Azure AI Studio to assess output quality and relevancy, and Protected Material Detection for Code to help detect pre-existing content and code.
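In practice, the Correction capability sits on top of Content Safety's groundedness detection: a model response is checked against its source documents, and ungrounded statements can be rewritten rather than merely flagged. The following is a minimal sketch; the endpoint path, API version, and payload fields are assumptions drawn from the Azure AI Content Safety preview REST API rather than details given in the announcement.

```python
# Hypothetical groundedness check with correction enabled (fields are assumptions).
import requests

endpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"

payload = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "When was the contract signed?"},
    # The model output to verify.
    "text": "The contract was signed on March 3, 2021.",
    # Reference material the response should be grounded in.
    "groundingSources": ["The agreement was executed on March 5, 2021."],
    # Ask the service to rewrite ungrounded statements, not just flag them.
    "correction": True,
}

resp = requests.post(
    url,
    json=payload,
    headers={"Ocp-Apim-Subscription-Key": "<content-safety-key>"},
)
resp.raise_for_status()
print(resp.json())  # expected to include ungrounded spans and corrected text
```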

Privacy improvements include confidential inferencing in preview for the Azure OpenAI Service Whisper model, general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, and upcoming Azure OpenAI Data Zones for the EU and U.S. to enhance data residency options.
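Confidential inferencing is a deployment-side guarantee, so a client request to a Whisper deployment presumably looks like any other Azure OpenAI transcription call. A minimal sketch using the openai Python SDK, with placeholder endpoint, API version, and deployment names (all assumptions):

```python
# Transcribing audio against an Azure OpenAI Whisper deployment.
# Confidential inferencing is configured on the service side and does not
# appear in this client code; names below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-06-01",
)

with open("meeting.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="<whisper-deployment-name>",  # your Whisper deployment name
        file=audio_file,
    )

print(transcript.text)
```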

Numoto emphasised that these new capabilities are designed to help customers pursue the benefits of AI while mitigating risks. He cited examples of companies already using Microsoft's solutions to build more secure and trustworthy AI applications, including Unity, ASOS, and the New York City Public Schools.


