Microsoft AI PC Recall Feature Gets Security Update
'We've taken some important steps to secure your snapshots and then put you in complete control,' Microsoft Consumer CMO Yusuf Mehdi says.
Microsoft has a new security architecture for its controversial Recall feature powering improved search on artificial intelligence PCs, set to become available for Windows Insider Program members in October.
In a video announcing the changes to Recall, Yusuf Mehdi, Microsoft’s executive vice president and consumer chief marketing officer, said the changes include requiring users to opt in before Recall is turned on. If users don’t proactively turn Recall on, no snapshots are taken or saved. Windows settings will even include an option for removing Recall entirely.
“We've taken some important steps to secure your snapshots and then put you in complete control,” Mehdi said.
[RELATED: Microsoft’s Joy Chik On ‘Acceleration’ Of Internal Security Across Identity, Network, Supply Chain]
John Snyder, CEO of Durham, N.C.-based solution provider Net Friends, told CRN in a recent interview that he looks forward to Microsoft adding more AI-powered security to its Defender for Endpoint and extended detection and response (XDR) offerings, a space where Microsoft partners, including Snyder’s business, participate.
“There’s so much potential to aggregate all the security alerts and actionable tasks” within Defender, Snyder said.
New Recall Controls
Recall will work only on Copilot+ PCs that the feature verifies have BitLocker (Windows 11 Pro) or Device Encryption (Windows 11 Home) enabled along with a Trusted Platform Module (TPM), according to Microsoft. The vendor has previously said that hardware running its latest Windows 11 operating system (OS) needs TPM 2.0 security chips.
The devices also need virtualization-based security (VBS), Hypervisor-Protected Code Integrity (HVCI), Measured Boot, System Guard Secure Launch and Kernel Direct Memory Access (DMA) Protection, according to the vendor.
Microsoft said that it conducted security assessments of Recall before its launch, including months of design reviews and penetration testing by the Microsoft Offensive Research and Security Engineering (MORSE) team, an independent security design review and penetration test by an unnamed third-party security vendor, and a Responsible AI Impact (RAI) Assessment, which measured fairness, reliability, security, inclusion, accountability and other factors.
To increase security, Recall users will also need to biometrically authenticate themselves with Windows Hello every time they use Recall, according to Microsoft. Recall will detect sensitive data and personal information, including passwords and credit card numbers, and avoid capturing it. The feature leverages the same library that powers Microsoft Purview Information Protection.
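The kind of pre-capture filtering described above can be illustrated with a small sketch. This is not Microsoft's or Purview's actual detector; it is a hypothetical example showing how a snapshot pipeline might skip screens containing credit-card-like numbers, using the standard Luhn checksum to cut down on false positives from ordinary digit runs.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Return True if a digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text: str) -> bool:
    """Scan text for 13-19 digit runs that pass the Luhn check."""
    normalized = re.sub(r"[ -]", "", text)   # join spaced/hyphenated groups
    return any(luhn_valid(run) for run in re.findall(r"\d{13,19}", normalized))

def should_capture(screen_text: str) -> bool:
    """Hypothetical gate: skip the snapshot if card-like data is visible."""
    return not contains_card_number(screen_text)
```

In a real system the detection step would run on text extracted from the screen before any snapshot is written, so flagged content never reaches disk.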
Recall users will have privacy and security settings for opting in to saving snapshots and for deleting them. Users can delete individual snapshots, delete in bulk, or delete snapshots from specific periods of time or involving specific applications or websites, even after the screenshots are taken. Microsoft will use an icon in the system tray to notify users when snapshots are being saved and allow users to pause snapshots.
Recall doesn’t save snapshots when users leverage private browsing in supported browsers. Users can set how long Recall retains content and the amount of disk space allotted for snapshots, according to Microsoft.
Microsoft reiterated that Recall snapshots are stored locally on devices, without access by Microsoft, third parties or other Windows users on the same device.
In Recall, users can leverage the timeline or search box to find something they previously viewed, using a keyword even partially related to that content, and Recall will return related results. Recall snapshots and the associated information in the vector database are encrypted with keys protected by the TPM.
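The vector-database search described above can be sketched in a few lines. This is a toy illustration, not Microsoft's implementation: each snapshot is paired with an invented embedding vector, and a query matches on similarity rather than exact text, which is why partially related keywords still surface relevant snapshots.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Invented toy data: snapshot descriptions mapped to small embedding vectors.
snapshot_index = {
    "budget spreadsheet, Q3 travel costs": [0.9, 0.1, 0.2],
    "recipe blog, lasagna ingredients":    [0.1, 0.9, 0.3],
    "flight booking confirmation page":    [0.8, 0.2, 0.4],
}

def search(query_vec, index, top_k=2):
    """Rank snapshots by similarity to the query vector, keep top_k."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:top_k]]

# A query vector standing in for "trip expenses" lands nearest the
# two travel-related snapshots, even with no exact keyword overlap.
results = search([0.85, 0.15, 0.3], snapshot_index)
```

In Recall's case the index entries would additionally be encrypted at rest, with the TPM-protected keys only released to the secure environment that serves a user's active query.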
VBS Enclaves
Encryption keys are tied to a user’s Windows Hello Enhanced Sign-in Security identity and are used only by operations inside a secure environment: the VBS enclave. Enclaves use the same hypervisor as Azure to segment the computer’s memory into special protected areas where information is processed. No other user can access the keys.
Enclaves follow zero-trust principles, use cryptographic attestation protocols and are accessed through Windows Hello permission. They form an isolation boundary against both the kernel and administrative users, and authorization times out, requiring fresh authorization for future sessions so that latent malware can’t ride along.
The Recall services that operate on screenshots and associated data, and that perform decryption, run inside a secure VBS enclave. Information leaves the enclave only when a user actively using Recall requests it, according to Microsoft. Enclaves also use concurrency protection and monotonic counters to prevent system overload from too many requests.
The Recall feature leverages rate-limiting and anti-hammering measures to protect against malware. It supports users’ personal identification numbers (PINs) as a fallback after Recall is configured, to avoid data loss if a secure biometric sensor is damaged, according to the vendor.
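The anti-hammering idea can be sketched simply: repeated failed authorization attempts trigger growing lockout windows, making brute-force "hammering" of the authentication gate impractical. The class name, thresholds and backoff schedule below are invented for illustration; they do not describe Microsoft's actual parameters.

```python
import time

class AntiHammerGate:
    """Hypothetical sketch: lock out callers after repeated failures,
    doubling the lockout window each time (exponential backoff)."""

    def __init__(self, max_failures=3, base_lockout=2.0):
        self.max_failures = max_failures
        self.base_lockout = base_lockout  # seconds; doubles per lockout
        self.failures = 0
        self.locked_until = 0.0
        self.lockouts = 0

    def attempt(self, credential_ok, now=None):
        now = time.monotonic() if now is None else now
        if now < self.locked_until:
            return "locked"        # rejected without even checking credentials
        if credential_ok:
            self.failures = 0      # success resets the failure counter
            return "granted"
        self.failures += 1
        if self.failures >= self.max_failures:
            # exponential backoff: each lockout doubles the wait
            wait = self.base_lockout * (2 ** self.lockouts)
            self.locked_until = now + wait
            self.lockouts += 1
            self.failures = 0
        return "denied"
```

Passing `now` explicitly makes the policy testable with a simulated clock; a real gate would also persist its counters so a reboot can't reset the lockout.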
Recall was first introduced in May during an unveiling of the Redmond, Wash.-based vendor’s Copilot+ PC line of AI devices. Backlash from cybersecurity professionals over the devices constantly taking and storing screenshots of user activity resulted in Microsoft pushing Recall back from a preview experience for Copilot+ PCs available in June to a Windows Insider Program experience available in October.
Originally, Microsoft said Recall wouldn’t hide sensitive or confidential information captured in screenshots unless a user filtered out specific applications or websites or browsed privately in supported browsers such as Microsoft Edge, Firefox, Opera and Google Chrome. Microsoft had also confirmed that Recall would be on by default for Copilot+ PCs.
The Recall updates come days after Microsoft introduced a series of new product capabilities aimed at making AI systems more secure, including a correction capability in Azure AI Content Safety for fixing hallucination issues in real time and a preview for confidential inferencing capability in the Azure OpenAI Service Whisper model.