The MCP play indie tool builders are missing.

By Jameson Daines · 2026-04-27 · 7 min read

It's late April 2026, and I've been watching the indie AI tool space for the last three months trying to figure out why more builders aren't adding MCP support. Anthropic's Model Context Protocol shipped in November 2024. The reference implementations are mature. There are hundreds of community-built MCP servers. Claude Desktop, Cursor, Continue.dev, and a handful of others speak MCP fluently.

Most indie AI tools? Don't. Which is genuinely surprising, because the engineering cost is small and the distribution upside is real.

This is a short post about why MCP is the closest thing to free distribution that exists for an indie AI tool right now, and why the window to be early is closing.

The 30-second version of MCP

MCP is a JSON-RPC protocol that lets AI tools talk to external data sources in a consistent way. Think USB-C for AI: one connector spec, so any compliant AI client can use any compliant server.

Concretely:

  - A server exposes capabilities: resources (data the model can read), tools (actions it can invoke), and prompts (reusable templates).
  - A client (the AI app) connects to any number of servers and surfaces those capabilities to the model.
  - The wire format is JSON-RPC 2.0 over stdio or HTTP, so both sides are straightforward to implement in any language.

The full explainer is at /mcp-explained. For this post, that's all you need to know.
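Under the hood the wire format really is that plain. As a minimal sketch, here is the JSON-RPC `initialize` request a client sends to open an MCP session; the field names follow the MCP spec, while the client name and version are illustrative placeholders:

```python
import json

def initialize_request(client_name: str, client_version: str, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 `initialize` request that opens an MCP session."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            # Spec revision string; "2024-11-05" was the first published revision.
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": client_version},
        },
    }
    return json.dumps(msg)

print(initialize_request("my-indie-tool", "0.1.0"))
```

That one message, plus a handful of others like `tools/list` and `tools/call`, is most of what "speaking MCP" means on the wire.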

The distribution advantage

Here's the thing nobody seems to be talking about. When Claude Desktop added MCP support, the ecosystem of community-built servers exploded. Now there are MCP servers for:

  - local filesystem access
  - GitHub and GitLab
  - Google Drive
  - Postgres and SQLite
  - Slack
  - browser automation (Puppeteer)
  - and hundreds more

Each of these servers is somebody else's work. As an indie tool builder, if your tool speaks MCP, your users get all of these capabilities for free, the day you ship MCP support. You didn't write the GitHub integration. You wrote the protocol; Anthropic wrote the SDK; the community wrote the servers.

This is the part most indie builders haven't internalized. Adding MCP support to your tool isn't "shipping one feature." It's shipping access to the entire MCP ecosystem at the cost of one feature.

Why isn't every tool doing it?

I think there are three reasons.

One: indie tool builders are head-down on their own roadmap. Most are shipping features users explicitly asked for. MCP support shows up nowhere in the user-feedback queue, because users don't know to ask for "Model Context Protocol" by name. They ask for "can it integrate with my GitHub" or "can it read my Notion." MCP is the answer; the question gets phrased as something else.

Two: it sounds technical. "Add support for the Model Context Protocol" reads like a 2-week engineering project. It's actually a 1-2 day project to add basic MCP client support using the official Anthropic SDK. The intimidation is bigger than the reality.

Three: most indie AI tools target a single workflow. A focused tool ("AI for customer interviews") doesn't obviously need to talk to GitHub or Postgres. So MCP feels off-strategy. But the right framing is: MCP gives your single-workflow tool the ability to ingest data from anywhere your user already keeps it. That's not a distraction from the focused workflow. It's the thing that makes the workflow work without forcing imports.
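To make that concrete, a hedged sketch: instead of building a Notion or GitHub importer, a focused tool can issue a single `tools/call` request against whatever MCP server the user already runs. The `read_file` tool name and `path` argument below mirror the reference filesystem server, but treat them as assumptions; a real client should discover tool names via `tools/list` first:

```python
import json

def tool_call_request(request_id: int, tool: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request invoking one tool on a connected MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# A customer-interview tool pulling a transcript from the user's own
# filesystem server, instead of forcing a manual import.
# ("read_file"/"path" are assumed from the reference server; verify via tools/list.)
req = tool_call_request(7, "read_file", {"path": "interviews/2026-04-21.md"})
print(json.dumps(req))
```

One request shape covers every server the user connects, which is exactly why the focused tool never has to grow its own integrations.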

The opportunity for an indie tool right now

If you're shipping an indie AI tool in 2026 and you don't yet support MCP:

  1. Add MCP client support. The official SDK (TypeScript or Python) makes this a 1-2 day job.
  2. Document which MCP servers you've tested with (filesystem is the obvious starting point).
  3. Submit your tool to the MCP clients list (file an issue at modelcontextprotocol/servers if there isn't a clients-section yet, or submit to punkpeye/awesome-mcp-clients).
  4. Write a blog post saying "[your tool] now speaks MCP," and share it in the MCP community discussion threads on HN and X.
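Step 1 is smaller than it sounds. Over the stdio transport, each JSON-RPC message is a single line of JSON written to the server process's stdin. A minimal sketch of that plumbing, assuming stdio transport; the `npx` invocation in the docstring is a typical but hypothetical way to launch the reference filesystem server, and in production you would lean on the official SDK rather than hand-rolling this:

```python
import json
import subprocess

def encode_message(msg: dict) -> bytes:
    """The stdio transport frames each JSON-RPC message as one JSON line."""
    return (json.dumps(msg) + "\n").encode()

def decode_message(line: bytes) -> dict:
    """Parse one framed line back into a JSON-RPC message."""
    return json.loads(line.decode())

def list_tools_request(request_id: int) -> dict:
    """Ask a connected server to enumerate the tools it exposes."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

def spawn_server(command: list) -> subprocess.Popen:
    """Launch an MCP server and wire up its stdio pipes, e.g.
    spawn_server(["npx", "-y", "@modelcontextprotocol/server-filesystem", "."])
    (hypothetical invocation; check the server's README)."""
    return subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
```

The protocol surface your client needs on day one is roughly this: spawn, initialize, list tools, call tools.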

The result, based on what I've seen with Projelli's MCP integration, is meaningful inbound from users who specifically search "what tools speak MCP." Right now that's a small but rapidly growing audience. They're high-intent (they know what they want) and high-conversion (they're already comfortable with developer tools).

The Projelli example

I added MCP client support to Projelli before launch. The reasoning at the time: it's a 2-day project, every Projelli user gets the MCP server ecosystem for free, and indie AI workspace tools that speak MCP are still rare enough that being one is a small but real differentiator.

Two months in, it's been the right call. A meaningful fraction of inbound traffic comes from MCP-related queries. The "MCP support" line in our marketing copy generates more interest than I expected. And every time the MCP community ships a new server (Salesforce, AWS Bedrock, Datadog), Projelli users get access without me writing a line of integration code.

The window is closing, but slowly

I don't want to overdo the urgency. MCP isn't going to "win" or "lose" in the next six months. It's going to gradually become the default way AI tools talk to external data, the way HTTP became the default protocol for networked applications. There's no cliff.

But there's a real first-mover advantage right now. The list of MCP-compatible clients is short enough that being on it is noticed. The "what AI tools speak MCP" search is small but sticky; people who care will find you. In a year, MCP support will be table stakes and being one of fifty MCP clients won't be distinguishing.

If you're building an indie AI tool and haven't added MCP support, this is the cheapest distribution play available to you right now. Two days of engineering, real inbound, and you stop having to write integrations against every SaaS API your users want to connect to.

Resources for getting started

  - Protocol docs and spec: modelcontextprotocol.io
  - Reference servers: the modelcontextprotocol/servers repo on GitHub
  - Official SDKs: modelcontextprotocol/typescript-sdk and modelcontextprotocol/python-sdk
  - Client lists: punkpeye/awesome-mcp-clients
  - My longer explainer: /mcp-explained

If you build an indie AI tool and you ship MCP support, drop me a note at [email protected]. Happy to amplify and to compare implementation notes.

See how Projelli uses MCP, free download