Microsoft's AI web protocol hit by critical security flaw

The Verge

A significant security vulnerability has been discovered in Microsoft’s new NLWeb protocol, a technology dubbed “HTML for the Agentic Web” that is designed to enable AI-powered search and interaction across websites and apps. The flaw emerged just months after Microsoft unveiled NLWeb at its Build conference and as the company began deploying it with early customers like Shopify, Snowflake, and TripAdvisor.

The vulnerability, identified as a “classic path traversal flaw,” allows unauthorized remote users to access sensitive files, including system configuration files and API keys for large language models (LLMs) such as OpenAI’s GPT models or Google’s Gemini. Its ease of exploitation, requiring nothing more than visiting a malformed URL, underscores the critical nature of the issue.
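To illustrate the class of bug described (not NLWeb's actual code), here is a minimal Python sketch of how a classic path traversal works and how it is typically blocked. The directory name and function names are hypothetical:

```python
import os

BASE_DIR = "/srv/nlweb/static"  # hypothetical web root, for illustration only

def resolve_unsafe(requested: str) -> str:
    # Vulnerable: user input is joined directly into the filesystem path,
    # so a request for "../../.env" walks out of the web root.
    return os.path.normpath(os.path.join(BASE_DIR, requested))

def resolve_safe(requested: str) -> str:
    # Mitigated: normalize the path, then verify it still sits inside the root.
    candidate = os.path.normpath(os.path.join(BASE_DIR, requested))
    if os.path.commonpath([BASE_DIR, candidate]) != BASE_DIR:
        raise PermissionError(f"path traversal blocked: {requested!r}")
    return candidate
```

With this sketch, a malformed URL path such as `../../.env` resolves to `/srv/.env` in the unsafe version, handing an attacker a configuration file (and any API keys it holds), while the safe version rejects it outright.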

For AI agents, the implications of this flaw are particularly severe. As security researcher Aonan Guan, who reported the vulnerability alongside Lei Wang, explained, these exposed API keys serve as the “cognitive engine” for AI agents. An attacker gaining access to these keys doesn’t just steal credentials; they “steal the agent’s ability to think, reason, and act,” potentially leading to substantial financial losses from API abuse or the creation of malicious clones.

Guan and Wang reported the flaw to Microsoft on May 28th, only weeks after NLWeb’s public unveiling. Microsoft responded by issuing a fix on July 1st. However, the company has not issued a Common Vulnerabilities and Exposures (CVE) identifier for the issue, an industry standard for classifying and tracking security vulnerabilities. The researchers have been advocating for a CVE, arguing it would increase awareness and allow for better tracking, even though NLWeb is not yet widely adopted.

In a statement, Microsoft spokesperson Ben Hope confirmed that the issue was “responsibly reported” and that the “open-source repository” has been updated. Hope added that “Microsoft does not use the impacted code in any of our products,” and customers utilizing the repository are “automatically protected.” Despite this, Guan emphasized that public-facing NLWeb deployments remain vulnerable to unauthorized reading of API key files unless users specifically “pull and vend a new build version.”

This incident highlights the complex challenges of maintaining robust security in the rapidly evolving landscape of artificial intelligence. It also raises questions about Microsoft’s renewed emphasis on security, especially as the company simultaneously pushes forward with other AI initiatives, such as native support for the Model Context Protocol (MCP) in Windows, which has drawn its own security warnings from researchers. The swift discovery of this fundamental flaw in NLWeb serves as a potent reminder for tech companies to prioritize comprehensive security measures alongside the rapid development and deployment of new AI capabilities.