The USB‑C Moment for AI Agents: MCP and A2A Go Mainstream
In 2025, cross-vendor agent interoperability quietly became real. With Agent2Agent entering the Linux Foundation and the Model Context Protocol landing natively in Windows, Azure, and OpenAI’s stack, agents are about to plug into every app, cloud, and desktop like USB-C.

Breaking: The agent ports are standardizing
The generative AI story of 2025 is not another model. It is a plug. Over a few dense weeks, the industry snapped into a shared set of connectors for agents. Google’s Agent2Agent protocol moved to neutral governance. Microsoft shipped first-party Model Context Protocol support across Windows and Azure. OpenAI turned on computer use so agents can drive real desktops and added native support for remote MCP servers in its core API. In the same window, security researchers publicly demonstrated malicious MCP servers, and the specification responded with new hardening. The result is an inflection point: agent interop is no longer a slideware promise. It is booting on your machines.
In late June 2025, the Linux Foundation announced it would host A2A, an open protocol that originated at Google and lets agents discover one another, exchange messages, and coordinate across vendors. Microsoft and Amazon Web Services voiced support alongside other contributors, which matters for anyone shipping enterprise software. A2A is the network connector for agent-to-agent collaboration.
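To make the discovery step concrete, here is a minimal sketch in Python. It assumes a hypothetical peer agent hosted at agents.example.com that publishes an Agent Card at the well-known path described in the A2A spec; the host, card contents, and field values shown are invented for illustration.

```python
# Sketch: discover a remote A2A agent by fetching its Agent Card.
# The host and card contents are hypothetical; only the well-known
# card location follows the A2A spec.
import requests

AGENT_HOST = "https://agents.example.com"  # hypothetical peer agent

def fetch_agent_card(host: str) -> dict:
    """Retrieve the Agent Card that advertises the agent's skills and endpoint."""
    resp = requests.get(f"{host}/.well-known/agent.json", timeout=10)
    resp.raise_for_status()
    return resp.json()

card = fetch_agent_card(AGENT_HOST)
print(card.get("name"), "-", card.get("description"))
for skill in card.get("skills", []):  # capabilities the agent offers to peers
    print("*", skill.get("id"), skill.get("name"))
```

With the card in hand, a client agent knows the peer’s endpoint and advertised skills and can start exchanging tasks over the protocol’s messaging layer.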
On May 19, 2025, Microsoft used its Build stage to detail first-party MCP support spanning Windows, Azure AI Foundry, GitHub, Copilot Studio, Dynamics 365, and Semantic Kernel. That is the operating system and the cloud treating agent tool access as a native capability, not an add-on. In parallel, GitHub and Microsoft joined the MCP Steering Committee and socialized designs for an authorization spec and a registry of trusted MCP servers. MCP is the device connector that lets an agent plug into tools and data like file systems, calendars, storage drives, search, and line-of-business applications.
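To show what that device connector looks like from a developer’s seat, here is a minimal sketch of an MCP server built with the official Python SDK’s FastMCP helper. The calendar tool and its return value are invented for illustration; only the decorator-based tool registration and the stdio transport come from the SDK.

```python
# Sketch of a tiny MCP server exposing one tool over stdio.
# Requires the official MCP Python SDK: pip install mcp
# The calendar lookup is a stand-in; a real server would query an
# actual data source.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-demo")

@mcp.tool()
def next_meeting(person: str) -> str:
    """Return the next meeting scheduled with the given person."""
    # Placeholder logic; swap in a real calendar API call here.
    return f"No meetings found with {person} in the next 7 days."

if __name__ == "__main__":
    # stdio is the simplest transport; an MCP-aware host launches the
    # server as a subprocess and calls its tools on the model's behalf.
    mcp.run()
```

Once registered with an MCP-capable host, the model can call next_meeting like any other tool, without bespoke plumbing per application.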
Meanwhile, OpenAI moved beyond lab demos. In January it unveiled a computer-using agent that can click, type, and scroll through interfaces. By spring and summer, OpenAI rolled out the computer use tool in its Responses API and added support for remote MCP servers so developers can point models at external capabilities without custom glue. If you think of A2A as the agent network and MCP as the device bus, OpenAI’s computer use is the universal driver that lets agents operate any graphical environment. See OpenAI’s announcement describing the computer use tool and how to use it in the Responses API.
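For a sense of what “without custom glue” means, here is a hedged sketch of attaching a remote MCP server to a Responses API call. The server URL and label are hypothetical, and the exact field names follow OpenAI’s published examples for the mcp tool type and may evolve.

```python
# Sketch: point a Responses API call at a remote MCP server.
# The server URL and label are hypothetical; field names follow
# OpenAI's published examples for the "mcp" tool type.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "deals_db",                   # hypothetical label
        "server_url": "https://mcp.example.com/sse",  # hypothetical server
        "require_approval": "never",                  # auto-approve tool calls
    }],
    input="Summarize the three largest open deals in the CRM.",
)
print(response.output_text)
```

The model discovers the server’s tools, decides when to call them, and folds the results into its answer; the developer never writes per-tool integration code.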
Researchers also stress-tested reality. In May and June, several teams outlined attacks that abused malicious or misconfigured MCP servers, uploaded proof-of-concept servers to aggregator sites, and showed how overly permissive tools could leak data or trigger unintended actions. Within weeks, the MCP spec added OAuth-based authorization guidance, resource indicators that bind tokens to specific servers, and sharper requirements around declaring trust models. Growing pains, yes, but also a sign the ecosystem is real enough to attract adversaries and mature fast.
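As a rough sketch of what that resource-indicator hardening means in practice, the request below asks an authorization server for a token explicitly bound to one MCP server, using the resource parameter from RFC 8707. The endpoints, client identifiers, and scope are all hypothetical; the point is that the token names its intended audience, so a malicious or compromised server cannot replay it elsewhere.

```python
# Sketch: request an access token bound to a single MCP server via an
# RFC 8707 resource indicator. All endpoints and credentials here are
# hypothetical placeholders.
import os
import requests

TOKEN_ENDPOINT = "https://auth.example.com/oauth/token"  # hypothetical
MCP_SERVER = "https://mcp.example.com"                   # the only valid audience

resp = requests.post(
    TOKEN_ENDPOINT,
    data={
        "grant_type": "client_credentials",
        "client_id": "agent-client",                       # hypothetical client
        "client_secret": os.environ["AGENT_CLIENT_SECRET"],
        "resource": MCP_SERVER,       # binds the token to this one server
        "scope": "mcp:tools.read",    # hypothetical scope
    },
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["access_token"]
# The agent presents this token only to MCP_SERVER; any other server that
# receives it should fail audience validation and reject the request.
```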