Every Code: The Codex Fork That Added Everything OpenAI Wouldn't
Community-driven fork of OpenAI's Codex CLI with multi-agent orchestration, browser integration, theming, and Claude/Gemini support. What Codex should have been.
When the Community Gets Impatient
OpenAI shipped Codex as a terminal-based coding agent. Clean interface. Good model integration. Worked. But the feature roadmap was... let's call it "measured." The community wanted multi-agent coordination, browser automation, model flexibility, and a dozen other things. OpenAI was going at OpenAI speed, which is to say, quietly and on their own timeline.
So the community forked it.
Every Code by the just-every organisation is what happens when developers who actually use a tool every day get write access to the codebase. 3,700 stars, 228 forks, 9,440 commits. This isn't a weekend hack that someone abandoned after the initial buzz. It's an actively maintained project with real engineering behind it.
What It Adds Over Stock Codex
The headline features that stock Codex doesn't have:
Multi-agent orchestration. Three built-in commands: /plan, /solve, /code. Each spawns a different type of agent interaction. The planning agent researches and structures the work. The solving agent works through problems step by step. The coding agent writes the implementation. You can chain them or use them independently.
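To make the chaining idea concrete, here's a minimal sketch of the plan-then-solve-then-code flow as a pipeline. This is purely conceptual, not Every Code's internals: the real commands are interactive, and these function bodies are placeholders I've invented to show each stage consuming the previous stage's output.

```python
# Conceptual sketch only: modelling how /plan, /solve and /code chain.
# Stage logic here is a made-up placeholder, not the actual agents.

def plan(task: str) -> list[str]:
    # The planning stage breaks a task into ordered steps.
    return [step.strip() for step in task.split(" then ")]

def solve(steps: list[str]) -> list[str]:
    # The solving stage works each step into a concrete approach.
    return [f"approach for '{s}'" for s in steps]

def code(approaches: list[str]) -> str:
    # The coding stage turns approaches into implementation stubs.
    return "\n".join(f"# implement: {a}" for a in approaches)

# Chained: each stage feeds the next, as the /plan -> /solve -> /code flow does.
output = code(solve(plan("parse the log then summarise errors")))
print(output)
```

The useful property the sketch captures is that each stage has a well-defined output shape, which is what lets you run the stages independently as well as chained.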
Browser integration via Chrome DevTools Protocol. The agent can open a headless browser, navigate pages, interact with elements, and pull data back into its context. If your coding task involves anything web-facing (building a scraper, testing a UI, checking API responses), the agent can verify its own work in a real browser.
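For a sense of what "via Chrome DevTools Protocol" means at the wire level, here's a sketch of the JSON command frames a CDP client sends over the browser's DevTools WebSocket. `Page.navigate` and `Runtime.evaluate` are real CDP methods; the transport is stubbed out, and none of this is Every Code's actual code.

```python
# Minimal sketch of CDP message framing (transport omitted). A real
# client would send these frames over the DevTools WebSocket and match
# responses back to commands by id.
import json
from itertools import count

_ids = count(1)

def cdp_command(method: str, **params) -> str:
    # Every CDP command carries a unique id so the matching
    # response can be correlated with the request.
    return json.dumps({"id": next(_ids), "method": method, "params": params})

# Navigate to a page, then evaluate an expression in it.
navigate = cdp_command("Page.navigate", url="https://example.com")
evaluate = cdp_command("Runtime.evaluate", expression="document.title")

print(navigate)
```

Because the protocol is just correlated JSON frames, an agent can interleave navigation, DOM queries, and JavaScript evaluation in one session and pull the results straight back into its context.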
Auto Drive for fully autonomous task handling. Point it at a task, walk away, come back to see the result. Not for everything, but for well-defined tasks with clear success criteria, it works.
Auto Review runs background code quality checks as you work. It's not waiting for you to ask "is this code any good?" It's checking continuously and flagging issues.
Multi-model support. Claude, Gemini, and GPT running simultaneously if you want. Different models for different tasks, coordinated through the same interface. Want Claude for reasoning and Gemini for fast generation? Done.
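The "different models for different tasks" idea boils down to a routing table behind one interface. The sketch below is illustrative only, with placeholder model names and stubbed handlers, not how Every Code actually wires its providers.

```python
# Illustrative sketch of per-task model routing behind one interface.
# Model names and handlers are placeholders, not Every Code's config.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    model: str
    handler: Callable[[str], str]  # stub standing in for a provider call

ROUTES = {
    "reasoning": Route("claude", lambda p: f"[claude] {p}"),
    "generation": Route("gemini", lambda p: f"[gemini] {p}"),
    "review": Route("gpt", lambda p: f"[gpt] {p}"),
}

def dispatch(task_type: str, prompt: str) -> str:
    # Unknown task types fall back to the generation model.
    route = ROUTES.get(task_type, ROUTES["generation"])
    return route.handler(prompt)

print(dispatch("reasoning", "plan the refactor"))
```

The point of the pattern: the caller's interface never changes, so swapping which model backs "reasoning" or "review" is a one-line edit to the table rather than a workflow change.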
Theming. Yes, really. Customisable colour schemes and accessible presets. Sounds trivial until you've spent twelve hours staring at a terminal and the default colours are burning your retinas. Small things matter in daily tools.
The Model-Agnostic Angle
This is the bit that interests me most. Every Code started as an OpenAI Codex fork, so you'd expect it to be GPT-centric. But the community pushed it model-agnostic almost immediately.
You can run it with Claude as your primary model. You can run it with Gemini. You can mix and match. The MCP integration means you can plug in whatever tools and data sources you want. The safety modes (read-only, approval workflows, workspace sandboxing) work regardless of which model is underneath.
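That last point, model-independent safety, is worth making concrete. Here's a sketch of an action gate sitting between any model and the workspace: the mode names mirror the ones described above, but the gate logic and action names are my assumptions, not Every Code's implementation.

```python
# Sketch of a model-agnostic safety gate (read-only / approval / auto).
# Mode names follow the article; the logic is an illustrative assumption.
from enum import Enum
from typing import Callable

class SafetyMode(Enum):
    READ_ONLY = "read-only"
    APPROVAL = "approval"
    AUTO = "auto"

# Hypothetical action names for anything that mutates the workspace.
WRITE_ACTIONS = {"write_file", "run_command"}

def allowed(mode: SafetyMode, action: str,
            approve: Callable[[str], bool] = lambda a: False) -> bool:
    if action not in WRITE_ACTIONS:
        return True                   # reads are always permitted
    if mode is SafetyMode.READ_ONLY:
        return False                  # writes blocked outright
    if mode is SafetyMode.APPROVAL:
        return approve(action)        # defer to the human
    return True                       # auto mode: writes allowed

print(allowed(SafetyMode.READ_ONLY, "write_file"))
```

Because the gate inspects actions rather than model output, the same policy applies whether Claude, Gemini, or GPT proposed the action, which is exactly why the safety modes can be model-agnostic.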
This matters because it gives you an alternative path to multi-model development. Instead of running three different tools (Claude Code, Codex, Gemini CLI) and context-switching between their interfaces, you run one tool with three models. Same keybindings. Same workflow. Different brains.
| 📚 Geek Corner |
|---|
| The fork sustainability question. Every Code maintains upstream compatibility with OpenAI Codex, which means it tracks upstream changes and merges them in. This is both a strength (you get OpenAI's improvements for free) and a fragility (if OpenAI changes the architecture significantly, the fork has to absorb that change or diverge). The 9,440 commit count suggests active development well beyond just patching upstream, which means the codebase has likely diverged enough that a major upstream refactor could be painful. This is the standard open source fork lifecycle: start by adding features, gradually diverge from upstream, eventually become a de facto separate project. Every Code seems to be well into that middle phase. Whether it reaches full independence or gets folded back into upstream depends on whether OpenAI ships the features the community wants. If they do, the fork's value proposition shrinks. If they don't, the fork becomes the canonical tool. |
When to Reach for Every Code
If you're a Codex user who wants multi-agent features without switching to Claude Code entirely, Every Code is the obvious choice. You keep your existing workflow and gain capabilities.
If you're a Claude Code user wondering whether to try it, the main draw is the browser integration and multi-model support. Claude Code's browser story is improving (especially with the Playwright MCP), but Every Code's built-in Chrome DevTools integration works out of the box with less setup.
If you just want the best single-model coding agent experience and you're happy with Claude, stick with Claude Code. The interface is more polished and the model integration is deeper when you're all-in on Anthropic's stack.
Getting Started
```shell
npm install -g @just-every/code
```
Run it, pick your model, and give it a spin. The /plan command is a good starting point to see how the multi-agent workflow feels compared to single-agent Claude Code sessions.