What you actually buy when you adopt MCP
Thousands of public servers. Pinterest in production. The protocol grew up faster than its operational story did, and you're being asked at standup whether it's ready.
You've been hearing about MCP for nine months. Someone on your team has installed three servers from a list. Someone else has pinned them to a specific commit because that list keeps changing. And the question your team lead is now asking, the one you don't have a clean answer to, is: what does adopting this thing actually buy us?
The Model Context Protocol is no longer an interesting protocol. It is, on the evidence of the last six months, infrastructure. Anthropic shipped it as an open standard in late 2024. By April 2026 there are over seven thousand publicly exposed MCP servers, with downloads in the hundreds of millions, and Pinterest is running a domain-specific MCP fleet against Presto, Spark, and Airflow inside their production environment.
That doesn't make MCP "ready". It makes it specific. Adopting MCP is a buying decision with a clear shape: a small set of capabilities you're paying for, a small set of costs you're signing up for, and a much bigger set of things you'd be wrong to expect from it.
What MCP actually is, in one paragraph
MCP is the USB-C of agent tooling. It standardises how a model client (Claude Code, Cursor, VS Code's agent surface, Windsurf, Gemini CLI) connects to an external tool surface (a filesystem, a database, a search index, your company's wiki). Before MCP, every IDE and every model vendor invented their own plugin shape. After MCP, you write the server once and any compliant client can call it.
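Under the hood, "compliant" means speaking JSON-RPC 2.0 with a small set of standard methods. A minimal sketch of the wire shape for a tool call, per the MCP spec's `tools/call` exchange; the tool name and file path are illustrative:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialise a tools/call request the way an MCP client sends it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def parse_tool_result(raw: str) -> list:
    """Pull the content blocks out of a tools/call response."""
    msg = json.loads(raw)
    return msg["result"]["content"]

# A client asking a filesystem server for a file (illustrative names):
request = make_tool_call(1, "read_file", {"path": "tests/test_auth.py"})
response = ('{"jsonrpc": "2.0", "id": 1, "result": '
            '{"content": [{"type": "text", "text": "def test_login(): ..."}]}}')
print(parse_tool_result(response)[0]["text"])
```

Because every client and server agrees on this envelope, "write the server once" holds: the same `tools/call` handler serves Claude Code, Cursor, and anything else that speaks the protocol.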
That's the whole pitch. The reason it caught is that the timing was right: agentic coding tools landed in early 2025, every one of them needed tool access, and nobody wanted to be the integrator of last resort.
A developer's day, with and without MCP
The cleanest way to understand what you buy is to compare the same task across the two worlds.
Without MCP. You're debugging a flaky test. You ask Claude Code what's going on. It needs the test code (you paste it), the implementation (you paste it), the recent changes (you paste a diff), the failing log (you paste it), and the schema for the table the test touches (you paste it). Five copy-paste round trips before the model sees the system. The agent's reasoning quality is bottlenecked on your typing speed.
With MCP. You ask Claude Code the same question. Behind the scenes, a filesystem MCP server reads the test and the implementation. A git MCP server pulls the recent diffs. A database MCP server returns the schema. The agent sees the system in one coherent fetch, reasons against it, and proposes a fix. You went from five paste cycles to zero.
That is the operational delta. It is not glamorous. It is not "AI gets smarter". It is a context-acquisition tax going to zero, and that tax is what dominated the developer experience of agent tools through most of 2025.
The servers most teams actually wire up
The published server lists run to the thousands. The set most teams actually run is small, and the same five names come up over and over.
1. Filesystem
The first server everybody installs. Reads, writes, lists files in a sandboxed root. Replaces the "paste this file in" loop entirely. Operational cost: near zero. Risk surface: the sandbox boundary. Configure the root narrowly, not at $HOME.
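That sandbox boundary is worth seeing concretely. A minimal sketch of the check a filesystem server has to make before touching disk, assuming a hypothetical root path; the key is resolving the candidate path first, so `../` sequences and absolute paths can't escape:

```python
from pathlib import Path

# Hypothetical sandbox root; a real server takes this from its config.
SANDBOX_ROOT = Path("/home/dev/projects/flaky-service").resolve()

def resolve_in_sandbox(requested: str) -> Path:
    """Resolve a requested path and refuse anything outside the root."""
    # Note: joining an absolute path replaces the root entirely in pathlib,
    # and resolve() collapses any ../ segments -- both cases are caught below.
    candidate = (SANDBOX_ROOT / requested).resolve()
    if not candidate.is_relative_to(SANDBOX_ROOT):
        raise PermissionError(f"{requested} escapes the sandbox root")
    return candidate
```

Configure the root at the project, not at `$HOME`, and escapes like `../../etc/passwd` die at this check rather than in your audit log.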
2. Git and GitHub
Two distinct servers, often deployed together. Local git for diffs, log, and blame against the working tree. The GitHub server for issues, PRs, and Actions. The pair turns "what changed last week" from a paste-the-log dance into a one-line agent question.
3. Database (Postgres, SQLite, occasionally MySQL)
Schema introspection plus read-only query access. Read-only matters: this is the server most likely to embarrass you if a hallucinated DELETE goes through. The sane default is read-only against a non-production replica, not your prod instance.
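Read-only is best enforced at the connection level, not by trusting the model to behave. A sketch using SQLite's URI mode as a stand-in for a read-only replica; with Postgres the equivalent is a role granted only SELECT:

```python
import sqlite3

def open_readonly(db_path: str) -> sqlite3.Connection:
    """Open a SQLite database so writes fail at the driver level."""
    # mode=ro makes any INSERT/UPDATE/DELETE raise OperationalError,
    # so a hallucinated DELETE dies before it reaches data.
    return sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
```

The agent still gets full schema introspection and SELECT access; the failure mode for anything destructive is an error message, not an incident.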
4. Web search or fetch
Brave, Tavily, or a homegrown fetch server. Lets the agent reach a doc page, a status board, or a stack trace from an external monitoring tool without you tabbing out. Quality varies wildly by provider.
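If you go the homegrown route, the cheapest safety measure is an explicit host allowlist checked before any request leaves the machine. A sketch, with hypothetical hosts:

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real server would load this from config.
ALLOWED_HOSTS = {"docs.python.org", "status.internal.example.com"}

def check_fetch_target(url: str) -> str:
    """Reject URLs the fetch server should never touch."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"host not on allowlist: {parsed.hostname!r}")
    return url
```

This also blocks the quieter failure modes: `file://` URLs and requests to internal hosts the agent has no business reaching.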
5. An internal knowledge server
This is the one that pays for itself. A wrapper around your wiki, your Notion, your runbooks, or your design-decision log. Suddenly the agent can answer "why did we structure auth this way?" with a citation from your own ADR archive instead of a confident-sounding guess. Pinterest's fleet runs essentially this pattern at scale, with domain-specific servers fronting Presto, Spark, and Airflow rather than one monolithic service.
The five-server shape
Most production MCP setups we've seen converge on this shape: filesystem, git, database (read-only), web fetch, internal knowledge. Anything beyond that is either a specialist tool for one workflow or a candidate for the cut list. Resist the urge to install ten servers because the registry is open. Each server is a privilege you've granted an agent.
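For reference, the five-server shape as a Cursor-style `mcp.json` might look like this. The package names, paths, and connection strings below are illustrative, not a recommendation; check each server's own docs for its actual launch command:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/dev/projects/flaky-service"]
    },
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "/home/dev/projects/flaky-service"]
    },
    "database": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://readonly@replica.internal/app"]
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },
    "internal-kb": {
      "command": "/usr/local/bin/internal-kb-server",
      "args": ["--index", "/srv/kb"]
    }
  }
}
```

Note what the config encodes: a narrow filesystem root, a read-only replica connection string, and one bespoke internal server. That's the whole privilege grant, in one reviewable file.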
What you're actually paying for
The benefits are concrete. So are the costs. Both deserve to be named explicitly.
You're buying:
- Lower context-acquisition tax. The five-paste loop disappears. Your agent reasons against the system, not against your willingness to copy strings.
- Cross-tool portability. An MCP server you write today works against Claude Code, Cursor, and Windsurf without rewriting the integration. That's worth a lot if your team uses more than one tool, which most teams now do.
- A vocabulary for tool authoring. "Write a tool" used to mean ten things. Now it means one specific thing with a known schema. Onboarding a new contributor to your tool surface takes hours, not days.
You're paying with:
- A new install-path question. Every server is code running locally with the user's privileges. The package supply chain matters, the launcher matters, and so does the configuration. We'll come back to this next week, in detail.
- Configuration drift across machines. MCP server configs sit in editor-specific config files (~/.cursor/mcp.json, Claude Code's config.toml, VS Code's settings.json). Three engineers on the same project end up with three different server sets unless someone owns the canonical list.
- Capability creep. The first server is exciting. The tenth is invisible. By the time you've installed ten, you can no longer answer the question "what can my agent do on this machine?" without consulting a list.
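The capability question at least has a mechanical answer. A sketch that reads a Cursor-style mcp.json and lists what's configured; the path and key names follow that one editor's shape, and other clients keep equivalent lists in their own files:

```python
import json
from pathlib import Path

def list_servers(config_path: str) -> list[str]:
    """Return the names of servers configured in a Cursor-style mcp.json."""
    config = json.loads(Path(config_path).read_text())
    # "mcpServers" is the top-level key in Cursor's config shape.
    return sorted(config.get("mcpServers", {}))
```

Run something like this against every editor config on the machine and you have the answer to "what can my agent do here?" as a diffable list, which is also the artifact your canonical-config owner should be reviewing.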
The roadmap is knocking down enterprise blockers
The 2026 MCP roadmap reads like a list of complaints from people trying to use MCP in regulated environments. Stateful sessions, so a long-running agent doesn't lose context between calls. .well-known metadata, so a client can discover what a server offers without out-of-band documentation. SSO-integrated auth, so the server enforces who you are rather than trusting whoever runs the binary. Pinterest's deployment already pushes in this direction with a two-layer authorisation model combining end-user JWTs with service-mesh identities and a central registry every server must register against.
If you're a one-person team running an MCP filesystem against your own laptop, none of this applies to you. If you're putting MCP into a stack that has a CISO, all of it does. Where you are on that spectrum determines whether you adopt now or wait two quarters for stateful sessions and the well-known metadata work to land.
How to decide, this week
The question is not "is MCP ready". The question is whether your team's specific shape matches what MCP currently delivers. Three concrete tests:
- Are you copy-pasting context into agents more than three times a session? If yes, MCP buys you back time immediately. The five-server shape pays for itself in a week.
- Do you have someone who owns the canonical server list for your team? If no, get one before you install. The maintenance overhead is real and it lands on whoever installed first.
- Is your sensitive surface area (production data, signing keys, customer PII) reachable from a developer laptop? If yes, the auth and audit story matters more than the productivity story. Wait for the roadmap auth work, or scope your servers tightly to a non-production replica.
None of these is a "no". The first two are operational; the third is a sequencing question. The honest answer to your team lead is that MCP is ready for the tasks it was designed for and is still maturing on the tasks enterprise wants from it. Knowing which side of that line you're on is the whole job.
Adopting MCP is not a yes or no. It's a buying decision with a five-server shape, a clear set of operational costs, and a roadmap that's still completing. Install the boring servers. Skip the ten-server list. Pick someone to own the canonical config. And come back next week, because the part of this story we haven't covered yet is what you give up at the install path. You can't audit what you didn't install on purpose.