Google Expands A2A Protocol Support Across Cloud Services
Google is significantly expanding the reach of its Agent2Agent (A2A) protocol, a communication standard designed to enable artificial intelligence agents to interact seamlessly with one another. Launched in early April, the A2A protocol was subsequently donated to the Linux Foundation and has since garnered support from over 150 companies, including major industry players like Amazon, Microsoft, Salesforce, and ServiceNow.
The latest development sees Google integrating A2A support directly into a wide array of its own agent-centric developer tools and services. This includes native A2A capabilities within its Agent Development Kit (ADK), a key toolkit for developers, and Agentspace, Google's no-code agent builder tailored for enterprise use.
To facilitate easier deployment, Google is introducing new options that allow A2A agents to be deployed to Cloud Run, its fully managed serverless platform, or to Google Kubernetes Engine (GKE) for users who want more control over their deployments. In addition, Agent Engine, the company's managed runtime for agents, now supports A2A agents as well.
A2A Protocol Specification Updates
Alongside these product integrations, the A2A team has released the latest version of the protocol specification, now at version 0.3. Rao Surapaneni, Google Cloud’s A2A and Business Platform VP, emphasized the initial focus on enterprise readiness. "One of the things we did is we wanted to start with being enterprise-ready when we launched," Surapaneni stated. "Security, identity, monitoring — we baked all of that into the spec. As people started using our A2A SDK, we got feedback that, okay, we need slight tweaks here. We need additional capabilities to apply it for high-performance scenarios."
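To make the "baked into the spec" point concrete, the sketch below shows roughly what an A2A Agent Card, the self-describing metadata an agent publishes, might look like with a declared security scheme. The field names approximate the public spec rather than the normative 0.3 schema, and the agent itself is a made-up example.

```python
# Rough sketch of an A2A Agent Card with a declared security scheme.
# Field names approximate the public spec; they may not match 0.3 exactly.
agent_card = {
    "name": "expense-approval-agent",             # hypothetical agent
    "description": "Reviews and approves expense reports.",
    "url": "https://agents.example.com/expense",  # where the agent is served
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "defaultInputModes": ["text/plain"],
    "defaultOutputModes": ["text/plain"],
    # Security is declared in the card so calling agents know how to authenticate.
    "securitySchemes": {
        "corp-oauth": {"type": "oauth2"}          # simplified for illustration
    },
    "skills": [
        {
            "id": "approve_expense",
            "name": "Approve expense",
            "description": "Evaluates an expense report against policy.",
            "tags": ["finance"],
        }
    ],
}
```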
To address these needs, the updated specification now includes support for gRPC, a high-performance framework widely used for connecting services. Surapaneni highlighted a customer currently piloting A2A with gRPC in a mobile environment involving a vast fleet of AI agents. On the security front, the specification has been enhanced with updates concerning the handling of both unauthenticated and authenticated agents, as well as agents operating with elevated or delegated privileges.
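As a rough illustration of the high-performance path, the sketch below opens a gRPC channel and calls a generated client stub. The module and stub names are assumptions standing in for bindings that would be generated from the spec's proto definitions, not a literal SDK API.

```python
import grpc

# Assumed generated bindings -- in practice these would come from running
# protoc against the proto files that accompany the A2A 0.3 spec.
from a2a_pb2 import SendMessageRequest       # assumed message type
from a2a_pb2_grpc import A2AServiceStub      # assumed service stub

# Plaintext channel for local testing; production traffic would use
# grpc.secure_channel() with proper credentials.
channel = grpc.insecure_channel("localhost:50051")
stub = A2AServiceStub(channel)

# A single request/response call to the remote agent (fields omitted for brevity).
response = stub.SendMessage(SendMessageRequest())
print(response)
```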
Streamlined Deployment and a New Marketplace
As more Google customers transition from experimenting with agents and A2A to deploying them in production, the demand for simpler deployment methods and robust monitoring tools has grown. "As customers started deploying into production, they’re looking for options," Surapaneni explained. "So we incorporated this into ADK, which is our Agent Development Kit. We made it super easy — like a couple of lines, or even one line with the defaults — to convert an agent into an A2A agent. Then once you build it, you want to deploy it somewhere."
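In ADK terms, that "one line with the defaults" looks roughly like the sketch below. The to_a2a helper and its import path reflect recent ADK releases as best as can be determined and may differ by version; the agent itself is a made-up example.

```python
from google.adk.agents import Agent                    # ADK agent class
from google.adk.a2a.utils.agent_to_a2a import to_a2a   # assumed import path

# An ordinary ADK agent definition.
root_agent = Agent(
    name="invoice_helper",                 # hypothetical example agent
    model="gemini-2.0-flash",
    instruction="Answer questions about invoices and payment status.",
)

# The "one line with the defaults": expose the agent over A2A.
# to_a2a() returns an ASGI app serving the agent's card and A2A endpoints.
a2a_app = to_a2a(root_agent)
```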
Customers now have three deployment choices: the managed Agent Engine, a container on Cloud Run, or direct deployment to GKE for those who need granular control. Google is also extending A2A agent deployment to Agentspace, enabling businesses to publish their agents within the service. This will allow enterprises to access and manage both their internally developed and third-party agents from a centralized location.
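Once converted, the resulting app can be packaged for Cloud Run or GKE like any other containerized service. The minimal sketch below assumes the app from the previous example lives in a hypothetical module called agent and serves it with uvicorn on the port Cloud Run injects; it is only one way to package the agent.

```python
import os

import uvicorn

from agent import a2a_app  # hypothetical module holding the app from the previous sketch

# Cloud Run injects the PORT environment variable; GKE pods can set it in
# the pod spec. Default to 8080 when running locally.
port = int(os.environ.get("PORT", "8080"))

if __name__ == "__main__":
    # Any ASGI server works; uvicorn is a common choice inside a container.
    uvicorn.run(a2a_app, host="0.0.0.0", port=port)
```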
In a related move, Google is launching an AI Agent Marketplace. This platform will allow Google Cloud customers to discover and acquire agents from Independent Software Vendors (ISVs), Global System Integrators (GSIs), and other providers. These agents will be required to run on the Google Cloud Platform and undergo Google's vetting process to ensure quality and compatibility. "Our approach of giving enterprise users the ability to access the right content, right actions and relevant agents all in one surface is received amazingly well," Surapaneni remarked.
Furthermore, Google's Vertex GenAI Evaluation Service, which benchmarks applications against developer-defined criteria, can now test these A2A agents thanks to its newly added protocol support.
A2A vs. Model Context Protocol (MCP)
Despite the growing adoption of A2A, some confusion persists regarding its relationship with Anthropic’s Model Context Protocol (MCP). Surapaneni, a key figure in the creation of the A2A protocol, shed light on its origins and the distinction between the two.
"The insight was that as customers and all these technology vendors build their own agents, you’re suddenly getting into, I would say, world intelligence that they’re providing," he explained. "But if you look at it from a customer perspective, I’m deploying Salesforce, ServiceNow, Google, and maybe something else. If these agents cannot talk to each other, they can only do what they do, and I cannot leverage them easily. That’s the key insight that led me to think of how I can make these agents talk to each other?"
Surapaneni clarified that a core difference lies in their approach to communication. While an MCP call is essentially an API call using code, A2A aims to replicate more nuanced, natural language interactions between agents. "You’re missing out on the natural language capability and the autonomous intelligence that these agents have," he noted regarding MCP. "So I wanted to bake that into the protocol. So just like a human is typing and chatting with an agent, another agent can do this ambiguous conversation and drive towards a goal. I did not want to lose a semantic exchange that is happening, and I wanted to bring that to the agents."
He concluded that while MCP excels at handling structured data and invoking tools, A2A is designed for the "much more nuanced, ambiguous" back-and-forth communication akin to human-to-human interaction, allowing agents to fill gaps and collaboratively achieve goals.
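The contrast can be made concrete with a toy comparison: an MCP-style tool call carries a fixed, structured payload, while an A2A exchange carries a free-form natural-language message that the receiving agent interprets. Both payloads below are simplified illustrations, not the literal wire formats of either protocol, and the invoice scenario is invented.

```python
# Toy illustration only -- neither payload is a literal wire format.

# MCP-style tool invocation: the caller fills in a rigid, structured schema.
mcp_style_call = {
    "tool": "get_invoice_status",
    "arguments": {"invoice_id": "INV-1042", "include_history": False},
}

# A2A-style message: one agent sends natural language and lets the other
# agent resolve the ambiguity and drive toward the goal.
a2a_style_message = {
    "role": "user",
    "parts": [
        {
            "text": (
                "A customer says invoice INV-1042 was paid twice. "
                "Can you check what happened and suggest how to fix it?"
            )
        }
    ],
}
```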