Projelli 1.5 is out today. It's the biggest release since launch, and it closes the gap between "an AI chat that produces files" (what 1.0 was) and "an AI workspace that's actually aware of what you've already written" (what 1.5 is). Four new capabilities ship together. I'll walk through each one, say what didn't make it, and explain how to upgrade.
The short version, and the new headline: Projelli is the AI workspace that remembers your stuff. Local files. Your API keys. Every chat becomes a durable note. Available in every AI tool you use.
The thing that most annoyed me about Projelli 1.0 was that every new conversation started from zero. The AI had no idea what I'd written last week. I built 1.5's memory layer to fix that.
Under the hood, Projelli now runs a local vector index over every Markdown file in your workspace. The embedder is e5-small-v2, running through fastembed-rs. The vector store is LanceDB. It all lives in a hidden .projelli/vectors/ folder inside your workspace. A file watcher keeps the index in sync as you write.
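To make the indexing pipeline concrete, here is a minimal stdlib-only sketch of two pieces the post describes: paragraph-level chunking of Markdown, and the content-hash comparison a watcher could use to decide which files need re-embedding. The function names (`chunk_markdown`, `plan_reindex`) are hypothetical, not Projelli's actual internals, and the real product embeds with e5-small-v2 and stores vectors in LanceDB rather than returning plain lists.

```python
import hashlib
import re

def chunk_markdown(text: str) -> list[str]:
    """Split a Markdown document into paragraph-level chunks.

    Blank-line-separated blocks are a reasonable proxy for the
    'most relevant paragraphs' retrieval granularity described above.
    """
    return [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]

def plan_reindex(old_hashes: dict[str, str],
                 files: dict[str, str]) -> dict[str, list[str]]:
    """Compare stored content hashes against current file contents.

    Returns which files changed (need re-embedding) and which were
    removed (need their vectors deleted from the index).
    """
    changed, removed = [], []
    for path, text in files.items():
        digest = hashlib.sha256(text.encode()).hexdigest()
        if old_hashes.get(path) != digest:
            changed.append(path)
    for path in old_hashes:
        if path not in files:
            removed.append(path)
    return {"changed": changed, "removed": removed}
```

Hashing file contents (rather than trusting filesystem timestamps) keeps the index consistent even when editors rewrite files without meaningful changes.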
You use it in two ways. First, type @workspace in any chat and Projelli retrieves the most relevant paragraphs from your workspace and injects them into the prompt as <workspace_context>. The chat response cites which file and paragraph each claim came from, and clicking a citation opens the file at the right spot. Second, Projelli extracts facts from your conversations (every 10 messages) and writes them to memory.json. Those facts get injected ahead of workspace context on every future chat. You can Accept, Edit, or Reject each proposed fact.
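The retrieval-and-injection step can be sketched with toy two-dimensional vectors. This is an illustration of the general pattern, not Projelli's implementation: the `<chunk>` element and its attributes are hypothetical, and the real product ranks LanceDB results from e5-small-v2 embeddings rather than hand-built vectors.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def build_prompt(query_vec, chunks, user_message, k=2):
    """Rank workspace chunks by similarity and wrap the top-k in
    a <workspace_context> block ahead of the user's message.

    chunks: list of (file, paragraph_index, text, vector) tuples;
    keeping file and paragraph index alongside each chunk is what
    lets the response cite, and the UI open, the exact source spot.
    """
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[3]), reverse=True)
    ctx = "\n".join(
        f'<chunk file="{f}" para="{i}">{text}</chunk>'
        for f, i, text, _ in ranked[:k]
    )
    return f"<workspace_context>\n{ctx}\n</workspace_context>\n\n{user_message}"
```
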
The whole thing can be toggled in Settings and disabled per chat. See the full argument for why chat shouldn't disappear.
Projelli 1.5 ships a Model Context Protocol server. MCP is Anthropic's open spec for exposing tools and data to AI clients, and the ecosystem around it has grown fast in the last six months.
The Projelli MCP server is bundled as a .mcpb Desktop Extension. One-click install into Claude Desktop, Cursor, or any other MCP-compatible client, and that client can now read your Projelli workspace. Five tools are exposed: list_workspace_files, read_workspace_file, search_workspace (reusing the same LanceDB index as the memory layer, so retrieval quality matches @workspace), write_workspace_file (requires in-app approval by default), and get_memory_facts.
The server is a hand-rolled JSON-RPC 2.0 binary rather than a dependency on rmcp: five tools is a tractable surface, and I wanted the binary small. The whole sidecar is ~151 MiB stripped, dominated by the LanceDB and fastembed dependencies we already ship for the memory layer.
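The dispatch core of a hand-rolled tool server is small enough to sketch. This is a simplified, hypothetical illustration in Python (the real binary is compiled and follows the full MCP message shapes, which carry more fields than shown here); the tool names come from the list above, but the stub handlers and return payloads are invented for the example.

```python
import json

# Stub handlers standing in for real workspace-backed tools.
TOOLS = {
    "list_workspace_files": lambda params: ["notes/pricing.md", "plans/q3.md"],
    "get_memory_facts": lambda params: [{"fact": "Pro costs $49 one-time"}],
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request to a tool handler."""
    try:
        method = request["method"]
        if method == "tools/list":
            result = {"tools": sorted(TOOLS)}
        elif method == "tools/call":
            name = request["params"]["name"]
            result = TOOLS[name](request["params"].get("arguments", {}))
        else:
            raise KeyError(method)
    except KeyError as exc:
        # -32601 is JSON-RPC's standard "method not found" code.
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": f"unknown: {exc}"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}
```

In the real server this loop reads newline-delimited JSON from stdin and writes responses to stdout, which is all an MCP client needs to drive it.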
Select a paragraph in the Markdown editor, hit Ctrl+Shift+E, type "tighten this to three sentences," and watch the AI's revision stream into the document as a live diff. Deleted lines are red and struck through; added lines are green. Once the stream completes, the diff splits into hunks and you accept or reject each one individually.
This is the same pattern you've seen in Claude Artifacts and ChatGPT Canvas, but local. The file stays on your disk. Every accepted hunk gets a version-history entry tagged author: 'ai' with the prompt, model, and offset range. You can see what the AI changed weeks later and roll it back hunk by hunk.
Ships for the Markdown and plain-text editors in 1.5. TipTap, DOCX, and RTF editors are a follow-up for 1.6.
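The accept-or-reject-per-hunk mechanic above can be sketched with the standard library's `difflib`. This is an assumption-laden illustration of the general technique, not Projelli's editor code: a hunk here is simply a non-equal `SequenceMatcher` opcode, and rebuilding the document applies new text only for the hunks the user accepted.

```python
import difflib

def hunks(old_lines: list[str], new_lines: list[str]) -> list[tuple]:
    """Return the non-equal opcodes: each one is an acceptable/rejectable hunk."""
    sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    return [op for op in sm.get_opcodes() if op[0] != "equal"]

def apply_hunks(old_lines: list[str], new_lines: list[str],
                accepted: set) -> list[str]:
    """Rebuild the document, taking the AI's text only for accepted hunks."""
    sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    out = []
    for op in sm.get_opcodes():
        tag, i1, i2, j1, j2 = op
        if tag != "equal" and op in accepted:
            out.extend(new_lines[j1:j2])   # accepted: take the revision
        else:
            out.extend(old_lines[i1:i2])   # equal or rejected: keep original
    return out
```

Each opcode carries the offset ranges in both versions, which is exactly the information a version-history entry needs to record what the AI changed and where.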
Two capabilities here, both offline-capable.
First, voice input. Hold Ctrl+Shift+Space to dictate into whatever field is focused (chat, editor, any textarea). Hold Ctrl+Shift+N instead and the transcript saves to Inbox/note-<timestamp>.md. Transcription runs through a bundled whisper.cpp-family sidecar (Parakeet.cpp when a release is available) that takes WAV bytes on stdin and returns text on stdout with a 30-second timeout. Audio never leaves your machine.
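The sidecar contract described above (WAV bytes on stdin, text on stdout, 30-second timeout) is simple enough to sketch with `subprocess`. The wrapper below is hypothetical, not Projelli's code, and since the actual sidecar binary isn't something you can run here, the usage note demonstrates the contract with `cat` as a stand-in that just echoes its input.

```python
import subprocess

def transcribe(wav_bytes: bytes, sidecar_cmd: list[str],
               timeout_s: int = 30) -> str:
    """Send audio bytes to a transcription sidecar on stdin,
    read the transcript from stdout, and enforce a hard timeout."""
    result = subprocess.run(
        sidecar_cmd,
        input=wav_bytes,
        capture_output=True,
        timeout=timeout_s,  # raises subprocess.TimeoutExpired if exceeded
    )
    result.check_returncode()
    return result.stdout.decode().strip()

# Stand-in demo: `cat` echoes stdin, mimicking the stdin->stdout contract.
# transcribe(b"hello", ["cat"]) returns "hello"
```
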
Second, Ollama is now a fourth provider alongside Claude, OpenAI, and Gemini. Run llama3.2:3b or qwen2.5 or mistral or any other model you've pulled with ollama pull, and Projelli talks to it on 127.0.0.1:11434 with zero network round-trip. Cost is always $0. The Settings panel auto-detects whether Ollama is running and lists your installed models.
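Talking to a local Ollama instance is a plain HTTP POST, so the integration needs no SDK. A minimal stdlib sketch of building the non-streaming request against Ollama's `/api/generate` endpoint (this is Ollama's documented API, but the helper function itself is illustrative, not Projelli's code):

```python
import json
import urllib.request

def ollama_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    body = json.dumps({
        "model": model,      # any model you've pulled, e.g. "llama3.2:3b"
        "prompt": prompt,
        "stream": False,     # one JSON response instead of a token stream
    }).encode()
    return urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# With Ollama running locally:
# resp = urllib.request.urlopen(ollama_request("llama3.2:3b", "Summarize my notes"))
# print(json.loads(resp.read())["response"])
```

Because the endpoint is loopback-only, the request never leaves the machine, which is what makes the $0, offline claim hold.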
Together, Memory + Ollama + Voice mean you can dictate "what did I say about pricing last week?" on a plane, with no internet, and get a useful answer citing your own files.
A few honest omissions.
The deferred "Big Bets" (B1-B6 in the internal planning doc) include a true agent layer, multiplayer, mobile, a plugin API, and a hosted sync option. None of these are on the 2026 roadmap. Projelli stays small, local, and founder-scoped.
Five smaller additions that happened alongside the big four.
A ? keyboard-shortcuts overlay listing every hotkey in the app.

If you're on Projelli 1.0.8 or later, the auto-updater will deliver 1.5 within a day or two. You'll get an in-app banner when it's downloaded. Click, relaunch, you're on 1.5.
If you're on 1.0.0 through 1.0.7, the auto-updater won't migrate you cleanly. Grab the installer from the releases page and install over the top. Your workspace, settings, and API keys carry over.
If you've never installed Projelli, the download page has the installer for Windows, Mac (Apple Silicon + Intel), and Linux. Free tier is unchanged. Pro stays at $49 one-time. The first-100-buyers Founder's Launch Lifetime is still $29 (26 sold as of this post).
Projelli is built by me, Jameson Daines, on 5-10 hours a week around a full-time day job as a Senior Product Designer at Wheel Health. Not a funded startup. Not a team. One person, nights and weekends, shipping a local-first AI workspace because the cloud tools kept losing my business plans.
If this post is useful, share it. Tell one freelance founder you know who's fighting their pitch deck on ChatGPT. That's how Projelli grows.
Download Projelli 1.5. Read the competitive comparisons or see real worked examples in the template gallery.