3. Client calls tools/list to discover available tools and their schemas
4. Tool schemas are injected into the model's context (the model has no idea MCP exists)
5. When the model generates a tool call, the host routes it to the correct MCP server
6. The MCP server executes the tool and returns the result via JSON-RPC
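The discovery and execution steps above can be sketched as JSON-RPC messages. The `get_weather` tool here is hypothetical; only the method names (`tools/list`, `tools/call`) and the JSON-RPC envelope come from the protocol:

```python
# Step 3: the client asks the server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Hypothetical server response: one tool, described with JSON Schema.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_weather",  # illustrative tool name
            "description": "Fetch current weather for a city",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }]
    },
}

# Steps 5-6: when the model emits a tool call, the host forwards it
# to the right server as a tools/call request.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}
```

The server's reply to `tools/call` travels back the same way, and the host pastes the result into the model's context as plain text.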
Key Relationship
They are complementary layers: function calling is the mechanism inside the LLM API, while MCP is the protocol layer that feeds tools into that mechanism and executes them on the other side.
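That layering can be made concrete with a small sketch: the host takes a tool definition discovered via MCP and re-expresses it in the function-calling format an LLM API expects. The tool name is hypothetical, and the function-calling field names vary by provider (an OpenAI-style shape is shown):

```python
def mcp_tool_to_function(tool: dict) -> dict:
    """Translate an MCP tool entry into a function-calling entry."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # Both layers describe inputs with JSON Schema,
            # so the schema passes through unchanged.
            "parameters": tool["inputSchema"],
        },
    }

# Illustrative MCP tool definition, as returned by tools/list.
mcp_tool = {
    "name": "get_weather",
    "description": "Fetch current weather for a city",
    "inputSchema": {"type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"]},
}

function_entry = mcp_tool_to_function(mcp_tool)
```

The model only ever sees the right-hand shape; the MCP side stays invisible to it.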
The Costs of MCP
MCP solves real problems, but it introduces real costs. Understanding the tradeoffs is essential.
Context Window Bloating
67,000+ tokens from just 4 servers. That means 25–30% of a 200K context window can be consumed before the conversation even starts. Every tool definition is injected as text, and the token overhead alone can mean a 2–3x cost increase.
Progressive Disclosure
On-demand: load only when needed. The modern alternative is to load tools on demand instead of all at once. Claude Skills exemplifies this pattern; capabilities are loaded contextually, not dumped into the system prompt.
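A minimal sketch of the progressive-disclosure idea (all names and entries hypothetical): the host keeps a cheap one-line index in context and fetches a full schema only when the model actually reaches for that capability.

```python
# Cheap: a few tokens per tool, always in context.
TOOL_INDEX = {
    "get_weather": "Look up a weather forecast",
    "create_event": "Add an event to a calendar",
}

# Expensive: full schemas, held outside the context window.
FULL_SCHEMAS = {
    "get_weather": {
        "name": "get_weather",
        "description": "Fetch current weather for a city",
        "inputSchema": {"type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"]},
    },
}

def context_preamble() -> str:
    """What goes into the system prompt: names and one-liners only."""
    return "\n".join(f"{name}: {desc}" for name, desc in TOOL_INDEX.items())

def load_tool(name: str) -> dict:
    """Called only when the model selects a capability."""
    return FULL_SCHEMAS[name]
```

The tradeoff is an extra round trip when a tool is first used, in exchange for a context window that is not pre-filled with schemas the conversation may never touch.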
Security Surface
43% of audited servers had injection vulnerabilities, and 30% were vulnerable to SSRF. 88% of servers require credentials, yet 53% use insecure static secrets. The protocol itself has no built-in sandboxing or permission model.
The Evolving Landscape
MCP isn't the only protocol vying for the agent interop space. The ecosystem is consolidating fast.
A2A (Google, April 2025): Agent-to-Agent communication
50+ launch partners. Now under Linux Foundation governance, though development has slowed since the initial announcement.
ACP (IBM / Linux Foundation, March 2025): Agent Communication Protocol
Simple REST-based design, no SDK required. Merged with A2A under the Linux Foundation in September 2025.
AG-UI (CopilotKit, 2025): Agent-User Interaction
Event-based protocol for agent-human interaction, filling the UI gap that MCP deliberately ignores. Active in niche use cases.
Progressive Disclosure (Anthropic / Claude Skills, 2025): On-demand capability loading
An architectural pattern, not a protocol. Directly addresses context bloating by loading tools contextually instead of all at once. Adoption is growing.
Key Takeaways
MCP is a JSON-RPC protocol. Not a product, not a service. A standardized conversation format between client and server.
The model never talks to MCP servers. The host application's MCP client manages all communication. The model just sees tools and results as text in its context.
MCP solves the N×M integration problem. Instead of every app building custom integrations for every tool, both sides implement the protocol once. N+M replaces N×M.
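The arithmetic behind that takeaway, with illustrative numbers (not from the text):

```python
# Hypothetical ecosystem: 10 host apps, 50 tools.
apps, tools = 10, 50

custom_integrations = apps * tools   # every app wires up every tool
shared_protocol = apps + tools       # each side implements MCP once

print(custom_integrations, shared_protocol)  # 500 vs 60
```

The gap widens as either side grows: adding one new tool costs one protocol implementation instead of one integration per app.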
Three primitives, three control models. Tools (model-controlled), Resources (app-controlled), Prompts (user-controlled). Each exists for a reason.
MCP servers are simple programs. They can be written in any language, run locally or remotely, and are often under 100 lines of code. “Server” is misleading — think “adapter” or “plugin.”
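To make the "adapter, not server" point concrete, here is a toy dispatch loop in the spirit of an MCP server, using only the standard library. A real server would use an MCP SDK and speak over a transport; the `add` tool and response shapes here are illustrative.

```python
# Registry of tools this "server" exposes (one illustrative tool).
TOOLS = {
    "add": {
        "description": "Add two integers",
        "inputSchema": {"type": "object",
                        "properties": {"a": {"type": "integer"},
                                       "b": {"type": "integer"}},
                        "required": ["a", "b"]},
        "fn": lambda args: args["a"] + args["b"],
    }
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to the matching handler."""
    method, rid = request["method"], request.get("id")
    if method == "tools/list":
        # Advertise every tool minus its local implementation.
        result = {"tools": [
            {"name": name, **{k: v for k, v in tool.items() if k != "fn"}}
            for name, tool in TOOLS.items()
        ]}
    elif method == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        value = tool["fn"](request["params"]["arguments"])
        result = {"content": [{"type": "text", "text": str(value)}]}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}
```

Wrap `handle` in a loop that reads requests from a transport and the adapter is complete, which is why real-world servers so often stay small.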
The lifecycle starts with a handshake. Initialization → capability negotiation → discovery → execution. The handshake is what makes the protocol dynamic and extensible.
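A sketch of the opening messages; the field values are illustrative, and the exact capability shapes are defined by the MCP spec:

```python
# The client opens the session and declares what it supports.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative version string
        "capabilities": {},               # what this client supports
        "clientInfo": {"name": "example-host", "version": "0.1.0"},
    },
}

# The server replies with its own capabilities and info. Once the client
# has processed that, it confirms with a notification (no id, no reply).
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}
```

Only after this exchange does the client move on to discovery (`tools/list` and friends), which is what lets each side adapt to what the other actually supports.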
Transport is a detail, not the protocol. stdio for local, HTTP+SSE for remote. The JSON-RPC messages are the same regardless of how they travel.
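For the stdio transport, framing is simply one JSON-RPC message per line. A minimal sketch, writing to an in-memory stream for illustration (a real client would write to the server process's stdin):

```python
import io
import json

def send(message: dict, stream) -> None:
    """stdio framing: serialize the message, terminate with a newline."""
    stream.write(json.dumps(message) + "\n")
    stream.flush()

buf = io.StringIO()
send({"jsonrpc": "2.0", "id": 1, "method": "ping"}, buf)
line = buf.getvalue()
```

Swap the stream for an HTTP+SSE channel and `message` does not change at all, which is the sense in which transport is a detail.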
MCP is the USB-C of AI. One standard interface, infinite capabilities. Before USB: proprietary cables everywhere. Before MCP: custom integrations everywhere. After MCP: plug and play.