My Editor Reads My Code Before I Ask It To
2026-02-20 · 6 min read
How I turned a 50-year-old text editor into a shared workspace with AI agents
Every day I open Emacs. On the left, the file I am working on. On the right, Claude—not in a browser tab, not in a sidebar chat widget, but running inside my editor with full access to my buffers, my project structure, my syntax tree. When I say “this function is slow,” Claude reads the code I am looking at and proposes an optimized version. The diff appears in my editor's review tool. I accept the good parts, reject the rest, and keep going.
This is not what most people picture when they hear “Emacs.”
Why Emacs, of all things?
I have a PhD in statistical physics. I have worked in simulation neuroscience, genomics, geosciences, biophysics. Across all of these, the constant has been computation—and for the past fifteen years, Emacs has been where I do it. Not because I enjoy arcane keybindings (though I do), but because Emacs is programmable to its core. Every action is a function. Every piece of state is inspectable. Every part of the interface can be extended.
When LLMs arrived, this programmability turned into a superpower. An AI running inside Emacs inherits the same transparency: it can read open files, navigate symbols, execute code, and propose changes—all through well-defined interfaces. Other editors have AI sidebars. Emacs has AI cohabitation.
What my setup actually looks like
My daily configuration connects to 13 LLM providers through a single Emacs package called gptel:
- Anthropic (Claude Opus, Sonnet, Haiku) for deep reasoning and code
- Google (Gemini 3.1 Pro, Flash) for fast iteration
- DeepSeek, Groq, xAI for cost-effective alternatives
- Ollama for local models when I want privacy or am offline
- OpenRouter as an aggregator giving access to 40+ models
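A multi-provider setup like this can be sketched in a few lines of init-file Emacs Lisp. This is a minimal illustration, not my exact configuration: the backend names, model symbols, and auth-source hosts below are placeholders, and the OpenRouter stanza follows the OpenAI-compatible pattern from gptel's README.

```elisp
;; Sketch of a multi-provider gptel setup (init.el fragment).
;; Model symbols and host strings are illustrative placeholders.
(use-package gptel
  :config
  ;; Anthropic: deep reasoning and code.
  (gptel-make-anthropic "Claude"
    :stream t
    :key (lambda () (gptel-api-key-from-auth-source "api.anthropic.com")))
  ;; Google: fast iteration.
  (gptel-make-gemini "Gemini"
    :stream t
    :key (lambda () (gptel-api-key-from-auth-source "generativelanguage.googleapis.com")))
  ;; Ollama: local models, no network required.
  (gptel-make-ollama "Ollama"
    :host "localhost:11434"
    :stream t
    :models '(llama3.2:latest))
  ;; OpenRouter: OpenAI-compatible aggregator endpoint.
  (gptel-make-openai "OpenRouter"
    :host "openrouter.ai"
    :endpoint "/api/v1/chat/completions"
    :stream t
    :key (lambda () (gptel-api-key-from-auth-source "openrouter.ai"))
    :models '(anthropic/claude-sonnet-4)))
```

Once the backends are registered, `M-x gptel-menu` selects the backend and model for the next request, which is what makes switching providers a single keybinding.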
Switching between them is one keybinding. The same prompt, the same interface, different model. I pick the right tool for the task.
All API keys live in a GPG-encrypted file. None are hardcoded in my configuration. One passphrase at session start, and every provider is authenticated.
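The encrypted-key arrangement rests on Emacs's built-in auth-source machinery. A sketch, with obviously fake placeholder keys and example host names:

```elisp
;; ~/.authinfo.gpg holds one line per provider, e.g. (placeholders):
;;   machine api.anthropic.com login apikey password sk-ant-PLACEHOLDER
;;   machine openrouter.ai     login apikey password sk-or-PLACEHOLDER
;; Emacs asks for the GPG passphrase once, then caches the decryption.
(setq auth-sources '("~/.authinfo.gpg"))

;; gptel can resolve a key per host at request time:
;; (gptel-api-key-from-auth-source "api.anthropic.com")
```

Because the lookup happens by host at request time, adding a new provider means adding one encrypted line, not touching the configuration.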
The shared workspace
The real transformation is not having many models available—it is how they participate in my work.
claude-code-ide embeds Claude Code in my editor with Model Context Protocol (MCP) tools. Claude can read the buffer I am editing, navigate my project's file tree, find function references via the language server, and propose changes that appear in a diff viewer. It is pair programming where the other programmer can literally see my screen.
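Wiring this up is short. The function names below are assumptions based on the claude-code-ide package as of writing; consult its README for the current entry points:

```elisp
;; Hypothetical minimal claude-code-ide setup — check the package
;; README, as these symbol names may differ in your version.
(use-package claude-code-ide
  :bind ("C-c '" . claude-code-ide-menu)
  :config
  ;; Expose Emacs-side MCP tools (buffer contents, project tree,
  ;; LSP references) to the Claude Code process.
  (claude-code-ide-emacs-tools-setup))
```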
agent-shell runs AI agents—Claude, Gemini CLI, OpenAI Codex—in native Emacs buffers via the Agent Client Protocol. I can run three different agents simultaneously, ask them the same question, and compare approaches side by side.
gptel-agent lets me define persistent AI personas as simple markdown files: a statistical physicist collaborator, a code reviewer, a writing editor. Each has a tailored system prompt that shapes its behavior. I write the prompt, save it as a file, version-control it with git, and load it whenever I need that particular collaborator.
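A persona file can be as small as this. The file name and front-matter keys here are hypothetical — the exact format is defined by the persona-loading package, not by me:

```markdown
<!-- reviewer.md — hypothetical persona file; front-matter keys
     depend on the package's expected format. -->
---
name: code-reviewer
description: Strict but constructive code reviewer
---
You are a careful code reviewer. For every diff you see:
1. Flag correctness bugs before style issues.
2. Ask for a test whenever behavior changes.
3. Keep each comment under three sentences.
```

Because it is plain markdown, the persona diffs cleanly in git, and refining a collaborator is an ordinary commit.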
And all of this happens inside Org-mode—Emacs's plain-text format where prose, code, and LLM conversations coexist in the same document. I write a code block, execute it, see the result inline. I type a prompt, send it to Claude, and the response streams into my document. My notes, my code, and my AI conversations are not in three separate tools—they are in one file.
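Concretely, a fragment of such an Org file looks like this — a source block, executed in place with `C-c C-c`, its result captured inline:

```org
* Scratch analysis
#+begin_src python :results output
print(sum(range(10)))
#+end_src

#+RESULTS:
: 45
```

A gptel conversation lives in the same file as ordinary headings and text, so the prompt, the code it produced, and the output all share one undo history and one git log.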
Why this matters beyond my workflow
I am not suggesting everyone should use Emacs. The learning curve is real, though LLMs make it dramatically less steep (the AI inside Emacs can teach you Emacs).
What I am suggesting is that the architecture matters. Most AI coding tools treat the LLM as an external service: you send context out, you get a response back. The shared workspace model is fundamentally different. The AI is in the environment, reading the same state you see, proposing changes through the same tools you use. The collaboration is deeper because the medium is shared.
Emacs enables this because of its fifty-year commitment to programmability and transparency. Every buffer is accessible. Every action is composable. When I started embedding AI agents in this environment, I discovered that the best substrate for human-machine collaboration was not the newest platform—it was the one designed from the start to be fully extensible.
Where to learn more
I have written a detailed tutorial on setting up this workflow: from installing Doom Emacs through configuring gptel to enabling Claude Code's MCP integration. The full guide is part of the MayaDevGenI tutorial series—a framework for principled human-machine collaboration that I develop as part of my research.
The companion chapter covers the complete architecture with code examples.
The tools are all open source. The configuration is plain text. The collaboration is yours to shape.