The short version. MCP (Model Context Protocol) is an open standard that lets AI tools talk to your files, databases, and other apps in a consistent way. Anthropic introduced it in November 2024. Think of it as a USB-C connector for AI: a single spec, so any compatible client can work with any compatible server. Today there are MCP servers for filesystems, GitHub, Slack, Notion, Google Drive, Postgres, and hundreds more. Projelli speaks MCP; so do Claude Desktop, Cursor, Continue.dev, and a growing list of clients.
If you've heard MCP referenced and wondered whether it matters or if it's just another acronym, this is the page. I'll cover what problem MCP actually solves, how it works mechanically, what you can do with it today, and how Projelli uses it.
For most of 2023-2024, every AI tool that wanted to connect to external data had to build that connection itself. ChatGPT had its own plugin format. Cursor had its own context system. Claude Desktop had its own. If you wrote a connector for a private database, you wrote it three times to support three tools. If you wanted a new tool to access the same database, you wrote a fourth.
The pattern looked like the web before shared protocols: lots of duplication, no common standard.
MCP standardizes the connector. You write one MCP server for your database. Every MCP-compliant client can use it. New clients become useful instantly without each having to ship its own data integrations.
Concretely: with four AI tools and six data sources, the bespoke approach means up to 24 separate integrations. With MCP it means four clients and six servers, ten pieces of software, each written once.
That's it. The acronym is intimidating; the underlying idea is the kind of plumbing the web has been doing for decades.
MCP defines three roles:

- Host: the application the user runs (Claude Desktop, Projelli, an IDE extension). It embeds the AI and manages connections.
- Client: the protocol connector inside the host; the host runs one client per server it connects to.
- Server: a small program that exposes a specific data source or capability over the protocol.
Each MCP server can offer:

- Tools: actions the AI can invoke (read a file, run a query, post a message).
- Resources: data the host can load into the AI's context (file contents, database rows, documents).
- Prompts: reusable prompt templates the server provides to the host.
When you talk to your AI workspace and it accesses your files or your calendar or your codebase, here's what's really happening:

1. The host sees that the AI's response includes a tool call ("read this file").
2. The host routes that call to the relevant MCP client.
3. The client sends a JSON-RPC request to the server.
4. The server runs the action and returns the result.
5. The result feeds back into the AI's context.
6. The AI generates the next part of the response.

All in a few hundred milliseconds.
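On the wire, the client's request to the server is an ordinary JSON-RPC 2.0 message. Here is a sketch of one round trip; the message shapes follow the MCP spec, but the tool name and path are illustrative, not any specific server's API:

```typescript
// Request: the client asks the server to invoke a tool.
// ("read_file" and the path are illustrative.)
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "read_file",
    arguments: { path: "/Users/you/projects/acme/interviews/notes.md" },
  },
};

// Response: the server returns the tool's result under the same id.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "...file contents..." }],
  },
};
```

The matching `id` fields are what let the client pair each result with the call that produced it.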
Say you're working on a customer interview synthesis. You ask your AI:
"Look through the customer interviews in ~/projects/acme/interviews/ and pull out everyone who mentioned pricing concerns."
Without MCP, the AI has no way to read that folder. The conversation goes sideways: "I can't access your filesystem; please paste the contents."
With MCP and a filesystem server connected, the AI responds with a tool call instead:

`fs.list("/Users/you/projects/acme/interviews/")`

then reads each file it finds and synthesizes the answer. You see this happen in real time as a flowing conversation. The AI is "doing research" against your actual data, with you in the loop on any destructive action.
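The "in the loop" part is host policy, not protocol. Here is a hypothetical sketch of how a host might gate tool calls; the tool names and the `needsApproval` helper are invented for illustration, and Projelli's actual flow may differ:

```typescript
type ToolCall = { name: string; arguments: Record<string, unknown> };
type ToolResult = { content: { type: string; text: string }[] };

// Hypothetical policy: reads auto-approve, writes/deletes prompt the user.
const DESTRUCTIVE = new Set(["fs.write", "fs.delete", "fs.move"]);

function needsApproval(call: ToolCall): boolean {
  return DESTRUCTIVE.has(call.name);
}

// askUser stands in for the host's approval dialog.
function route(call: ToolCall, askUser: (c: ToolCall) => boolean): ToolResult {
  if (needsApproval(call) && !askUser(call)) {
    return { content: [{ type: "text", text: "Denied by user." }] };
  }
  // A real host would forward the call to the MCP client here.
  return { content: [{ type: "text", text: `Ran ${call.name}` }] };
}
```

Read-only calls like `fs.list` pass straight through; a `fs.delete` only runs if `askUser` returns true.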
The community-built MCP server registry at github.com/modelcontextprotocol/servers lists hundreds. Highlights:
| Server | What it exposes |
|---|---|
| Filesystem | Read, write, list files in a workspace folder |
| GitHub | Read repos, browse issues, search code, file PRs |
| Slack | Read channels, search messages, post replies |
| Google Drive | List documents, read content, edit Docs / Sheets |
| Postgres / SQLite | Query a database, inspect schema |
| Notion | Read pages, search a workspace, create new pages |
| Linear / Jira | Read issues, create tickets, update status |
| Brave Search / Tavily | Web search |
| Custom | Whatever you build for your own data |
Most are 50-300 lines of TypeScript or Python and run locally as a subprocess of the host app. Setup is typically: install the server (npm install or pip install), add a snippet to the host's MCP config file, restart the host.
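For example, Claude Desktop's `claude_desktop_config.json` registers the official filesystem server like this; the workspace path is illustrative, and other hosts use a similar shape in their own config files:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/acme"
      ]
    }
  }
}
```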
Projelli is both an MCP client and an MCP-compatible workspace.
As a client: Projelli can connect to any MCP server you configure. That means you can wire your Projelli workspace to your GitHub account (via the GitHub server), your Slack (via the Slack server), your Postgres (via the database server), or any custom internal data source you build a server for. The AI conversations in Projelli can then read from and write to those external sources, with Projelli's standard "ask before destructive operations" approval flow.
As a workspace: every Projelli workspace is already a folder of Markdown files. Tools that can read files (any MCP filesystem server, any other text-aware tool) can access your Projelli workspace without any special integration. Your data in, your data out.
For the technically-minded: Projelli uses the official Anthropic MCP SDK. Server config is exposed in Settings → AI → MCP Servers. Each server runs as a separate subprocess that Projelli manages.
If you're an indie founder using AI for general workspace work (writing, planning, drafting), the answer depends on which data sources you want the AI to access.
For founders specifically, the most common useful MCP integrations are GitHub (for code-adjacent work), Notion (for migrating away from cloud workspaces), Postgres or SQLite (for any database-backed product where you want the AI to do analysis), and a custom server for whatever your particular product touches.
If you have something specific to expose, the official SDKs make a basic server about 50 lines. Here's the shape, in TypeScript:
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-server", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Runs whenever the client invokes one of this server's tools
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  // your tool logic here
  return { content: [{ type: "text", text: "result" }] };
});

// Speak MCP over stdin/stdout as a subprocess of the host
const transport = new StdioServerTransport();
await server.connect(transport);
```
The full guide is at the MCP documentation site. Anthropic and the community maintain official SDKs and reference implementations in both Python and TypeScript.
MCP stands for Model Context Protocol. It's an open standard, originally introduced by Anthropic in late 2024, that defines how AI tools communicate with external data sources, files, and other applications. Think of it as USB-C for AI: a single connector spec that lets any AI client work with any MCP-compliant server.
Anthropic introduced MCP as an open specification in November 2024. The protocol is open and not controlled by Anthropic; reference implementations and SDK libraries are published at github.com/modelcontextprotocol. Adoption has spread to many AI tools beyond Claude.
An MCP client is the AI tool the user interacts with (Claude Desktop, Projelli, Cursor, Continue.dev). An MCP server is a small program that exposes a specific data source or capability (a file system, a database, a SaaS API) over the MCP protocol. Clients connect to servers; the AI in the client uses the server's capabilities.
MCP does not require servers to run locally; the protocol is transport-agnostic. Many MCP servers run as local subprocesses on your machine, communicating with the client over stdio. Others run as HTTP services over the network. The protocol works either way.
Claude Desktop is the canonical MCP client. Other clients with MCP support include Cursor, Continue.dev, Cody, and Projelli. The MCP server registry at github.com/modelcontextprotocol/servers lists hundreds of community-built servers covering filesystems, databases, GitHub, Slack, Google Drive, Notion, and more.
If you have a private data source or internal tool you want your AI workspace to access, yes. The official SDKs for Python and TypeScript make a basic MCP server about 50 lines of code. If you just want general capabilities (filesystem, GitHub, Slack), start with the existing community servers.
MCP itself is just a protocol; the security depends on how the host handles tool approvals. Good clients (Claude Desktop, Projelli) ask the user before allowing destructive operations and clearly show which servers are connected. Read operations are typically auto-approved. The risk surface is similar to any other AI tool that can take actions, with the standard mitigation: don't connect servers you don't trust, and review tool call prompts before approving.
Projelli speaks MCP. Connect filesystem, GitHub, Slack, Notion, Postgres, or any custom MCP server you build. Local data, your API keys, sold once.
Get Projelli