Tone Dark
Tint
04 Protocols & interop · how agents talk to tools and to each other

Common standards finally exist for talking to agents.

Until late 2024, every team building agents wrote their own version of "how the agent calls a tool" and "how two agents talk to each other." If you wanted to swap models or share an agent across teams, you had to rewrite a lot of plumbing. Imagine if every USB device required its own custom cable: that's where agent integration was.

Starting in late 2024, four open standards emerged that aim to fix this. Three of them are already running in production systems. This chapter explains what each one does, where to use it, and where things are still rough. If you want to dig deeper, two helpful surveys are Yang et al., arXiv 2025 (covers all four protocols) and Singh et al., arXiv 2025 (focuses on MCPMCP 2025); for the security side, see Hou et al., arXiv 2025.

The four protocols at a glance

ProtocolWhat it standardizesMaintainerStatus
MCP (Model Context Protocol) How agents talk to tools. A standard way for any LLM to call any external API, database, or file system. Started by Anthropic; handed to the Linux Foundation's Agentic AI Foundation in Dec 2025 In production. Supported by Anthropic, OpenAI, Google, and Microsoft.
A2AYang 2025 (Agent-to-Agent Protocol) How agents talk to each other. Each agent publishes an "Agent Card" describing what it can do and how to call it. Started by Google; broad industry backing Ready for production. Used in many enterprise agent platforms.
ACP (Agent Communication Protocol) General-purpose RESTful HTTP for agent messaging. MIME-typed multipart messages, sync and async. Various; runtime-independent design Emerging. Fewer integrations than MCP/A2A.
ANP (Agent Network Protocol) Decentralized agent discovery and identity (DIDs), semantic self-description. Open community Experimental.

MCP, the most popular of the four

Of the four protocols, MCP is the one that took off. It answers a simple question: how does an agent (running on any LLM) connect to a tool (database, API, file system, search service) without writing custom code for each pairing?

The basic idea: your agent runs an "MCP client", and each tool you want to use runs as an "MCP server". The client and the server talk over a simple message format. You can swap the LLM, swap the tool, or add new tools, without rewriting the connection code.

Why this matters for system design: before MCP, every project's "tool layer" was custom code. With MCP, adding a new tool is mostly a configuration change. The Tool Gateway layer (from the architecture chapter) is increasingly just "an MCP host wired to a few MCP servers".

What MCP looks like in code

# MCP server: expose a "search_kb" tool that a host can discover and call
from mcp.server import Server
from mcp.server.stdio import stdio_server

server = Server("knowledge-base")

@server.list_tools()
async def list_tools():
    return [{
        "name": "search_kb",
        "description": "Search the internal knowledge base.",
        "inputSchema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }]

@server.call_tool()
async def call_tool(name, args):
    if name == "search_kb":
        results = kb.search(args["query"])
        return [{"type": "text", "text": str(results)}]
    raise ValueError(f"unknown tool {name}")

if __name__ == "__main__":
    import asyncio
    asyncio.run(stdio_server(server))

A2A: when agents need to talk to other agents

MCP is for when an agent wants to call a tool (a database, a file, an API). A2A is for when an agent wants to call another agent, possibly built by another team or another company. The difference matters: tools do fixed work; agents reason for themselves.

A2A standardizes the wire: how the bytes flow. It does not say what those bytes should mean for inter-agent collaboration, who is allowed to see what, or when shared context should expire. That is the next layer up, and Chapter 06 (Context exchange) covers it: typed envelopes, capability handshakes before delegation, and compartments at the boundary. Use A2A as the transport; use the chapter 06 patterns to decide what travels through it.

Security: the part where things get scary

Standardizing connections is great for productivity, and also great for attackers. A widely-shared April 2025 essay by Elena Cross joked that "the S in MCP stands for security", meaning there isn't one. The serious treatment is in Hou et al., arXiv 2025 and the broader Prompt Injection Review, MDPI 2026. Two real CVEs from 2025 made the threat concrete:

The attack patterns to know:

Most important rule: just because a connection uses MCP doesn't mean the data flowing through it is safe. MCP is a delivery mechanism, like HTTP. The content riding on it deserves the same scrutiny as any other untrusted input. LakeraLakera 2025, Forcepoint, and CrowdStrikeCrowdStrike 2025 all gave the same advice in 2025: filter inputs at the protocol boundary before they reach the model.

What's still missing

These standards are real progress, but a few things still aren't solved:

Practical guidance

Standards feel slow until they don't. MCP and A2A passed the tipping point in 2025. Building outside them is becoming the harder choice.