MCP joins the Linux Foundation: What this means for developers building the next era of AI tools and agents

MCP is moving to the Linux Foundation. Here’s how that will affect developers.

Over the past year, AI development has exploded. More than 1.1 million public GitHub repositories now import an LLM SDK (+178% YoY), and developers created nearly 700,000 new AI repositories, according to this year’s Octoverse report. Agentic tools like vllm, ollama, continue, aider, ragflow, and cline are quickly becoming part of the modern developer stack.

As this ecosystem expands, we’ve seen a growing need to connect models to external tools and systems—securely, consistently, and across platforms. That’s the gap the Model Context Protocol (MCP) has rapidly filled. 

Born as an open source idea inside Anthropic, MCP grew quickly because it was open from the very beginning and designed for the community to extend, adopt, and shape together. That openness is a core reason it became one of the fastest-growing standards in the industry. That also allowed companies like GitHub and Microsoft to join in and help build out the standard.  

Now, Anthropic is donating MCP to the Agentic AI Foundation, which will be managed by the Linux Foundation, and the protocol is entering a new phase of shared stewardship. This will provide developers with a foundation for long-term tooling, production agents, and enterprise systems. This is exciting for those of us who have been involved in the MCP community, and given our long-term support of the Linux Foundation, we are hugely supportive of this move.

The past year has seen incredible growth and change for MCP. I thought it would be worth reviewing how MCP got here, and what its transition to the Linux Foundation means for the next wave of AI development.

Before MCP: Fragmented APIs and brittle integrations

LLMs started as isolated systems: you sent them prompts and got responses back. Patterns like retrieval-augmented generation (RAG) helped bring in data to give the LLM more context, but that was limited. OpenAI’s introduction of function calling was a major shift because, for the first time, a model could request calls to arbitrary external functions. This is what we initially built on top of as part of GitHub Copilot.

By early 2023, developers were connecting LLMs to external systems through a patchwork of incompatible APIs: bespoke extensions, IDE plugins, and platform-specific agent frameworks, among other things. Every provider had its own integration story, and none of them worked in exactly the same way. 

Nick Cooper, an OpenAI engineer and MCP steering committee member, summarized it plainly: “All the platforms had their own attempts like function calling, plugin APIs, extensions, but they just didn’t get much traction.”

This wasn’t a tooling problem. It was an architecture problem.

Connecting a model to the real-time web, a database, a ticketing system, a search index, or a CI pipeline required bespoke code that often broke with the next model update. Developers had to write deep integration glue one platform at a time.

As David Soria Parra, a senior engineer at Anthropic and one of the original architects of MCP, put it, the industry was running headfirst into an n×m integration problem with too many clients, too many systems, and no shared protocol to connect them.

In practical terms, the n×m integration problem describes a world where every model client (n) must integrate separately with every tool, service, or system developers rely on (m). Five AI clients talking to ten internal systems means fifty bespoke integrations, each with different semantics, authentication flows, and failure modes. MCP collapses this by defining a single, vendor-neutral protocol that both clients and tools can speak. With a product like GitHub Copilot, which connects to models from all of the frontier labs, we also need to connect to hundreds of systems across developers’ platforms. This was not just an integration challenge, but an innovation challenge.
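The arithmetic behind this is simple, and a quick sketch makes the difference concrete: point-to-point integration grows multiplicatively with clients and systems, while a shared protocol grows additively, because each side implements the protocol once.

```python
# Integrations needed without a shared protocol: every client x every system.
def point_to_point(n_clients: int, m_systems: int) -> int:
    return n_clients * m_systems

# With a shared protocol like MCP: each client and each system
# implements the protocol once.
def shared_protocol(n_clients: int, m_systems: int) -> int:
    return n_clients + m_systems

print(point_to_point(5, 10))   # 50 bespoke integrations
print(shared_protocol(5, 10))  # 15 protocol implementations
```

At five clients and ten systems the gap is already better than 3x, and it widens as either side of the ecosystem grows.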

And the absence of a standard wasn’t just inefficient; it slowed real-world adoption. In regulated industries like finance, healthcare, and security, developers needed secure, auditable, cross-platform ways to let models communicate with systems. What they got instead were proprietary plugin ecosystems with unclear trust boundaries.

MCP: A protocol built for how developers work

Across the industry, including at Anthropic, GitHub, Microsoft, and others, engineers kept running into the same wall: reliably connecting models to context and tools. Inside Anthropic, teams noticed that their internal prototypes kept converging on similar patterns for requesting data, invoking tools, and handling long-running tasks.

Soria Parra described MCP’s origin simply: it was a way to standardize patterns Anthropic engineers were reinventing. MCP distilled those patterns into a protocol designed around communication, or how models and systems talk to each other, request context, and execute tools.

Anthropic’s Jerome Swanwick recalled an early internal hackathon where “every entry was built on MCP … went viral internally.”

That early developer traction became the seed. Once Anthropic released MCP publicly alongside high-quality reference servers, we saw the value immediately, and it was clear the broader community did too. MCP offered a shared way for models to communicate with external systems, regardless of client, runtime, or vendor.

Why MCP clicked: Built for real developer workflows

When MCP launched, adoption was immediate and unlike anything I have seen for a new standard before.

Developers building AI-powered tools and agents had already experienced the pain MCP solved. As Microsoft’s Den Delimarsky, a principal engineer and core MCP steering committee member focused on security and OAuth, said: “It just clicked. I got the problem they were trying to solve; I got why this needs to exist.”

Within weeks, contributors from Anthropic, Microsoft, GitHub, OpenAI, and independent developers began expanding and hardening the protocol. Over the next nine months, the community added:

  • OAuth flows for secure, remote servers
  • Sampling semantics (These help ensure consistent model behavior when tools are invoked or context is requested, giving developers more predictable execution across different MCP clients.)
  • Refined tool schemas
  • Consistent server discovery patterns
  • Expanded reference implementations
  • Improved long-running task support

Long-running task APIs are a critical feature. They allow builds, indexing operations, deployments, and other multi-minute jobs to be tracked predictably, without polling hacks or custom callback channels. This was essential for the long-running AI agent workflows that we now see today.
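As a rough illustration of why this matters (a simplified sketch, not the MCP wire format; the names here are hypothetical), a client can attach a progress token to a request, and the server pushes progress notifications tied to that token instead of forcing the client to poll:

```python
# Simplified sketch of token-based progress reporting for a long-running
# job. Names and shapes here are illustrative, not the MCP wire format.
def run_long_task(progress_token: str, notify) -> dict:
    steps = ["fetch", "build", "index", "deploy"]
    for i, step in enumerate(steps, start=1):
        # The server pushes progress tied to the client's token,
        # so the client never has to poll for status.
        notify({"token": progress_token, "progress": i,
                "total": len(steps), "step": step})
    return {"status": "done"}

events = []
result = run_long_task("task-42", events.append)
print(result)       # {'status': 'done'}
print(len(events))  # 4 progress notifications received
```

The client supplies the `notify` callback (here just appending to a list), which is what lets a multi-minute build or deployment surface its state without polling hacks or custom callback channels.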

Delimarsky’s OAuth work also became an inflection point. Prior to it, most MCP servers ran locally, which limited usage in enterprise environments and caused installation friction. OAuth enabled remote MCP servers, unlocking secure, compliant integrations at scale. This shift is what made MCP viable for multi-machine orchestration, shared enterprise services, and non-local infrastructure.

Just as importantly, OAuth gives MCP a familiar and proven security model with no proprietary token formats or ad-hoc trust flows. That makes it significantly easier to adopt inside existing enterprise authentication stacks.

Similarly, the MCP Registry—developed in the open by the MCP community with contributions and tooling support from Anthropic, GitHub, and others—gave developers a discoverability layer and gave enterprises governance control. Toby Padilla, who leads MCP Server and Registry efforts at GitHub, described this as a way to ensure “developers can find high-quality servers, and enterprises can control what their users adopt.”

But no single company drove MCP’s trajectory. What stands out across all my conversations with the community is the sense of shared stewardship.

Cooper articulated it clearly: “I don’t meet with Anthropic, I meet with David. And I don’t meet with Google, I meet with Che.” The work was never about corporate boundaries. It was about the protocol.

This collaborative culture, reminiscent of the early days of the web, is the absolute best of open source. It’s also why, in my opinion, MCP spread so quickly.

Developer momentum: MCP enters the Octoverse

The 2025 Octoverse report, our annual deep dive into open source and public activity on GitHub, highlights an unprecedented surge in AI development:

  • 1.13M public repositories now import an LLM SDK (+178% YoY)
  • 693k new AI repositories were created this year
  • 6M+ monthly commits to AI repositories
  • Tools like vllm, ollama, continue, aider, cline, and ragflow dominated fastest-growing repos
  • Standards are emerging in real time, with MCP alone hitting 37k stars in under eight months

These signals tell a clear story: developers aren’t just experimenting with LLMs, they’re operationalizing them.

With hundreds of thousands of developers building AI agents, local runners, pipelines, and inference stacks, the ecosystem needs consistent ways to connect models to tools, services, and context.

MCP isn’t riding the wave. The protocol aligns with where developers already are and where the ecosystem is heading.

The Linux Foundation move: The protocol becomes infrastructure

As MCP adoption accelerated, the need for neutral governance became unavoidable. Openness is what drove its initial adoption, but that also demands shared stewardship—especially once multiple LLM providers, tool builders, and enterprise teams began depending on the protocol.

By transitioning governance to the Linux Foundation, Anthropic and the MCP steering committee are signaling that MCP has reached the maturity threshold of a true industry standard.

Open, vendor-neutral governance offers everyone:

1. Long-term stability

A protocol is only as strong as its longevity. The Linux Foundation’s backing reduces risk for teams adopting MCP for deep integrations.

2. Equal participation

Whether you’re a cloud provider, startup, or individual maintainer, Linux Foundation governance processes support equal contribution rights and transparent evolution.

3. Compatibility guarantees

As more clients, servers, and agent frameworks rely on MCP, compatibility becomes as important as the protocol itself.

4. The safety of an open standard

In an era where AI is increasingly part of regulated workloads, neutral governance makes MCP a safer bet for enterprises.

MCP is now on the same path as technologies like Kubernetes, SPDX, GraphQL, and the CNCF stack—critical infrastructure maintained in the open.

Taken together, this move aligns with the Agentic AI Foundation’s intention to bring together multiple model providers, platform teams, enterprise tool builders, and independent developers under a shared, neutral process. 

What MCP unlocks for developers today

Developers often ask: “What do I actually get from adopting MCP?”

Here’s the concrete value as I see it:

1. One server, many clients

Expose a tool once. Use it across multiple AI clients, agents, shells, and IDEs.

No more bespoke function-calling adapters per model provider.
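To make that concrete, here is a minimal, stdlib-only sketch of the idea behind MCP’s `tools/list` and `tools/call` methods. The real protocol is JSON-RPC 2.0 with more fields and an official SDK you would use in practice; the tool here (`get_weather`) is a made-up stub:

```python
# Sketch of an MCP-style tool server: any client that speaks the protocol
# can discover and call the same tool. Not the full JSON-RPC 2.0 envelope.
TOOLS = {
    "get_weather": {
        "description": "Return weather for a city (stub data).",
        "inputSchema": {"type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"]},
        "handler": lambda args: {"city": args["city"], "forecast": "sunny"},
    }
}

def handle(request: dict) -> dict:
    if request["method"] == "tools/list":
        return {"tools": [{"name": n, "description": t["description"],
                           "inputSchema": t["inputSchema"]}
                          for n, t in TOOLS.items()]}
    if request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        return {"content": tool["handler"](request["params"]["arguments"])}
    return {"error": "unknown method"}

# Every MCP-speaking client issues the same two calls, regardless of vendor:
print(handle({"method": "tools/list"})["tools"][0]["name"])  # get_weather
print(handle({"method": "tools/call",
              "params": {"name": "get_weather",
                         "arguments": {"city": "Oslo"}}}))
```

Because discovery and invocation are part of the protocol rather than per-provider adapters, the same server works unchanged from any compliant client, agent, shell, or IDE.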

2. Predictable, testable tool invocation

MCP’s schemas make tool interaction debuggable and reliable, which is closer to API contracts than prompt engineering.
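As a rough illustration of what that buys you (a hand-rolled check, not a full JSON Schema validator, and an example schema I made up), a tool’s input schema lets a client reject malformed calls before the tool ever runs:

```python
# Hand-rolled check against a JSON-Schema-style tool input schema.
# A real client would use a complete JSON Schema validator; this only
# shows why schemas make tool calls testable like API contracts.
SCHEMA = {
    "type": "object",
    "properties": {"query": {"type": "string"}, "limit": {"type": "integer"}},
    "required": ["query"],
}

def validate(args: dict, schema: dict) -> list:
    errors = []
    for field in schema["required"]:
        if field not in args:
            errors.append(f"missing required field: {field}")
    types = {"string": str, "integer": int}
    for field, spec in schema["properties"].items():
        if field in args and not isinstance(args[field], types[spec["type"]]):
            errors.append(f"wrong type for {field}: expected {spec['type']}")
    return errors

print(validate({"query": "open PRs", "limit": 10}, SCHEMA))  # []
print(validate({"limit": "ten"}, SCHEMA))
# ['missing required field: query', 'wrong type for limit: expected integer']
```

Failures become deterministic, testable errors at the boundary instead of silent misbehavior inside a prompt.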

3. A protocol for agent-native workloads

As Octoverse shows, agent workflows are moving into mainstream engineering:

  • 1M+ agent-authored pull requests via GitHub Copilot coding agent alone in the five months since it was released
  • Rapid growth of key AI projects like vllm and ragflow
  • Local inference tools exploding in popularity

Agents need structured ways to call tools and fetch context. MCP provides exactly that.

4. Secure, remote execution

OAuth and remote-server support mean MCP works for:

  • Enterprises
  • Regulated workloads
  • Multi-machine orchestration
  • Shared internal tools

5. A growing ecosystem of servers

With a growing set of community and vendor-maintained MCP servers (and more added weekly), developers can connect to:

  • Issue trackers
  • Code search and repositories
  • Observability systems
  • Internal APIs
  • Cloud services
  • Personal productivity tools

Soria Parra emphasized that MCP isn’t just for LLMs calling tools. It can also invert the workflow by letting developers use a model to understand their own complex systems.

6. It matches how developers already build software

MCP aligns with developer habits:

  • Schema-driven interfaces (JSON Schema–based)
  • Reproducible workflows
  • Containerized infrastructure
  • CI/CD environments
  • Distributed systems
  • Local-first testing

Most developers don’t want magical behavior; they want predictable systems. MCP meets that expectation by intentionally mirroring patterns developers already know from API design, distributed systems, and standards evolution, favoring contract-based interactions over opaque model behaviors.

What happens next

The Linux Foundation announcement is the beginning of MCP’s next phase, and the move signals:

  • Broader contribution
  • More formal governance
  • Deeper integration into agent frameworks
  • Cross-platform interoperability
  • An expanding ecosystem of servers and clients

Given the global developer growth highlighted in Octoverse—36M new developers on GitHub alone this year—the industry needs shared standards for AI tooling more urgently than ever.

MCP is poised to be part of that future. It’s a stable, open protocol that lets developers build agents, tools, and workflows without vendor lock-in or proprietary extensions.

The next era of software will be shaped not just by models, but by how models interact with systems. MCP is becoming the connective tissue for that interaction.

And with its new home in the Linux Foundation, that future now belongs to the community.

Explore the MCP specification and the GitHub MCP Registry to join the community working on the next phase of the protocol.
