Critical Flaw in Cursor AI Tool's MCP Poses Supply Chain Risk

Theregister

Cybersecurity researchers at Check Point have uncovered a significant remote code execution vulnerability in Cursor, a popular AI-powered coding tool. The flaw, dubbed “MCPoison,” could have allowed attackers to compromise developer environments by subtly altering previously approved configurations within the Model Context Protocol (MCP), enabling the silent execution of malicious commands without any user notification.

Promptly addressing the issue, Cursor released version 1.3 on July 29, which implements a crucial fix. This update now mandates explicit user approval every time an MCP Server entry is modified, significantly enhancing security. Users of the AI-powered code editor are strongly advised to update to the latest version to protect their systems from potential threats.

While Cursor has patched the vulnerability, Check Point views this incident as a stark illustration of emerging AI supply chain risks. “The flaw exposes a critical weakness in the trust model behind AI-assisted development environments, raising the stakes for teams integrating LLMs and automation into their workflows,” Check Point’s research team stated in a recent blog post.

The vulnerability specifically targets the Model Context Protocol (MCP), an open-source protocol introduced by Anthropic in November 2024. MCP facilitates connections between AI-based systems, such as agents and large language models (LLMs), and external data sources, enabling them to interact. While designed to streamline these processes, it also introduces new attack surfaces that security researchers have been actively investigating since its rollout.

Cursor, an AI-integrated development environment (IDE), leverages LLMs to assist with code writing and debugging. Such tools inherently rely on a degree of trust, particularly in collaborative settings involving shared code, configuration files, and AI-based plugins. Check Point researchers Andrey Charikov, Roman Zaikin, and Oded Vanunu explained their focus: “We set out to evaluate whether the trust and validation model for MCP execution in Cursor properly accounted for changes over time, especially in cases where a previously approved configuration is later modified.” They added that in collaborative development, such changes are common, and validation gaps could lead to command injection, code execution, or persistent compromise.

The researchers indeed identified such a validation flaw. They demonstrated how an attacker could exploit it by initially submitting a benign MCP server configuration to a shared repository. Once approved by a user, the attacker could then secretly modify this same entry to include a malicious command. Due to Cursor’s previous “one-time approval” mechanism, the malicious command would then execute silently on the victim’s machine every time the Cursor project was opened. Check Point successfully demonstrated persistent remote code execution by replacing an approved non-malicious command with a reverse-shell payload, thereby gaining access to the victim’s machine.

This disclosure is reportedly the first in a series of vulnerabilities that Check Point researchers have uncovered in developer-focused AI platforms. The firm plans to release further findings, aiming to highlight overlooked risks and elevate security standards within the rapidly evolving AI ecosystem.

Critical Flaw in Cursor AI Tool's MCP Poses Supply Chain Risk - OmegaNext AI News