Microsoft Boosts AI Systems Security With Hallucination Correction, Confidential Inferencing
'We all need and expect AI we can trust,' Microsoft EVP and CMO Takeshi Numoto said.
Microsoft introduced a series of new product capabilities aimed at making artificial intelligence systems more secure, including a correction capability in Azure AI Content Safety for fixing hallucination issues in real time and a preview for confidential inferencing capability in the Azure OpenAI Service Whisper model.
The new capabilities are meant “to help ensure that AI systems are more secure, private and safe,” Takeshi Numoto, the Redmond, Wash.-based vendor’s executive vice president and chief marketing officer, said in a statement Tuesday.
“We all need and expect AI we can trust,” Numoto said. “With new capabilities that further security, privacy and safety, we empower organizations to create AI solutions that are trustworthy and continue to build a future where trust in AI is paramount.”
[RELATED: Microsoft: Buying Three Mile Island Nuclear Power Will Help ‘Carbon-Free Energy’ Goal]
Microsoft AI Security Enhancements
The product updates coincide with a progress report Microsoft released this week for its Secure Future Initiative and news that Microsoft’s long-term AI investment now includes buying power from the infamous Three Mile Island nuclear site in Pennsylvania.
The vendor—which has more than 400,000 partners worldwide—also detailed the second wave of its Copilot brand of AI tools earlier this month. During Salesforce’s annual Dreamforce conference last week, the AI rival’s co-founder and CEO, Marc Benioff, criticized Microsoft’s Copilot strategy and the hallucination rates of competing AI offerings.
As part of the new Microsoft updates, the vendor has made generally available (GA) Azure confidential virtual machines with Nvidia H100 Tensor Core graphics processing units (GPUs). Microsoft positions these VMs as giving users data security directly on the GPU. Confidential computing keeps customer data encrypted and protected in secure environments, and confidential inferencing brings that security and privacy to the process of trained AI models making predictions and decisions based on new data.
In preview is a confidential inferencing capability in the Azure OpenAI Service Whisper model. This is meant to allow users to develop generative AI apps that support verifiable end-to-end privacy, according to Microsoft.
Capabilities “coming soon” for users include transparency into web queries for Microsoft 365 Copilot administrators and users plus SharePoint Advanced Management updates around data oversharing, according to Microsoft.
Transparency into web queries is meant to help administrators and users understand how web search enhances Copilot responses.
New evaluations in Azure AI Studio are meant to support proactive risk assessments, according to Microsoft. The evaluations should help users assess output quality and relevancy, as well as how often an AI application outputs protected material.
Azure AI Content Safety’s Groundedness Detection feature gained a correction capability for fixing hallucination issues in real time before customers see them, according to Microsoft.
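As a rough illustration of how a developer might invoke such a check, the sketch below builds a request body for a groundedness check that also asks for a real-time correction. This is a hypothetical payload only: the endpoint placeholder, API version, and field names are assumptions based on Azure AI Content Safety's preview groundedness detection API and may differ from the shipped service.

```python
import json

# Placeholder endpoint and path; the real resource name and API version
# are assumptions and would come from your Azure AI Content Safety resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_PATH = "/contentsafety/text:detectGroundedness?api-version=<preview-version>"

def build_correction_request(answer: str, sources: list[str]) -> dict:
    """Build a groundedness-check payload that asks the service to also
    return a corrected rewrite of any ungrounded (hallucinated) text."""
    return {
        "domain": "Generic",
        "task": "Summarization",
        "text": answer,               # the model output to check
        "groundingSources": sources,  # documents the answer must be grounded in
        "correction": True,           # assumed flag for the new correction capability
    }

payload = build_correction_request(
    "The report says revenue grew 50 percent.",
    ["Q3 report: revenue grew 12 percent year over year."],
)
print(json.dumps(payload, indent=2))
```

In a real application, this payload would be POSTed to the service before the model's answer is shown, so an ungrounded claim can be replaced with the corrected text the service returns.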
Users also gained the ability to embed Azure AI Content Safety on devices that may have intermittent or no cloud connectivity. These capabilities are in public preview.
John Snyder, CEO of Durham, N.C.-based solution provider Net Friends, told CRN in an interview that he looks forward to Microsoft adding more AI-powered security to Microsoft Defender for Endpoint and the extended detection and response (XDR) space, where Microsoft partners including Snyder’s business participate.
“There’s so much potential to aggregate all the security alerts and actionable tasks” within Defender, Snyder said.