OpenProse: A Programming Language for AI Sessions
Markdown-based programming language for long-running AI sessions. Contract-based semantics, parallel execution, and composable workflows. The thesis: an AI session is a computer, and this is its language.
The Mental Model Shift
Here's the pitch, and it's a properly bold one: "A long-running AI session is a Turing-complete computer. OpenProse is a programming language for it."
Read that again and sit with it for a second. Every time you open a Claude Code session, or a Codex session, or Amp, you're starting a computer. It has memory (the context window). It has I/O (tools, file system, browser). It can branch, loop, and make decisions. It can execute arbitrary operations. The only thing it's missing is a proper language to program it with.
OpenProse fills that gap.
What It Is (and Isn't)
OpenProse is not a Claude Code plugin. It's not an MCP server. It's not a configuration framework. It's a standalone programming language specification with its own runtime, and its programs happen to execute inside AI sessions.
Programs are written as Markdown files with YAML frontmatter. The syntax uses contract-based semantics: you declare requires (what the program needs as input) and ensures (what it guarantees as output). The runtime figures out execution order by matching contracts. If step A ensures something that step B requires, A runs before B. No explicit sequencing needed.
```markdown
---
requires: [codebase_analysis]
ensures: [test_plan]
---
Review the codebase analysis and create a comprehensive test plan
covering all edge cases for the authentication module.
```
That's a step in an OpenProse program. The runtime looks at which other steps produce codebase_analysis, runs those first, then feeds the result into this step. Steps whose contracts don't depend on each other run in parallel.
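The resolution itself works like a build graph. Here's a minimal sketch in Python of how requires/ensures matching can produce an execution order. The step names and contract keys are invented for illustration; this is a toy resolver, not the actual OpenProse runtime:

```python
# Toy contract resolver (hypothetical step names, not the real runtime):
# each step declares `requires` and `ensures`, and a topological sort
# over matching contracts yields an execution order.
from graphlib import TopologicalSorter

steps = {
    "analyze": {"requires": [],                    "ensures": ["codebase_analysis"]},
    "plan":    {"requires": ["codebase_analysis"], "ensures": ["test_plan"]},
    "write":   {"requires": ["test_plan"],         "ensures": ["tests"]},
}

# Map each contract key to the step that ensures it.
producers = {key: name for name, s in steps.items() for key in s["ensures"]}

# A step depends on whichever steps produce the contracts it requires.
graph = {name: {producers[req] for req in s["requires"]}
         for name, s in steps.items()}

order = list(TopologicalSorter(graph).static_order())
print(order)  # ['analyze', 'plan', 'write']
```

No step says "run me second"; the order falls out of the contracts, which is exactly the Make/Bazel-style inversion the language is built on.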
The Forme Container
The runtime's coordination layer is called a "Forme Container," and architecturally it's the interesting bit. It automatically wires together multi-service programs by matching contracts. Think of it as dependency injection, but for AI agent workflows: each step declares what it needs and what it produces, and the container figures out the wiring.
This means you can write individual workflow steps as self-contained units and compose them into larger programs without manually orchestrating the data flow. Add a new step, declare its contracts, and the container slots it into the right place in the execution graph.
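To make the wiring concrete, here's a hedged sketch (invented step names, a toy resolver rather than the real container) showing how steps whose contracts are already satisfied fall into the same batch, and each batch could run in parallel:

```python
# Hypothetical sketch: group steps into parallelizable batches.
# "analyze" and "lint" share no contracts, so they land in batch one.
from graphlib import TopologicalSorter

steps = {
    "analyze": {"requires": [],                              "ensures": ["codebase_analysis"]},
    "lint":    {"requires": [],                              "ensures": ["lint_report"]},
    "plan":    {"requires": ["codebase_analysis"],           "ensures": ["test_plan"]},
    "report":  {"requires": ["lint_report", "test_plan"],    "ensures": ["summary"]},
}

producers = {k: name for name, s in steps.items() for k in s["ensures"]}
graph = {name: {producers[r] for r in s["requires"]} for name, s in steps.items()}

ts = TopologicalSorter(graph)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # every step whose contracts are satisfied
    batches.append(ready)
    ts.done(*ready)

print(batches)  # [['analyze', 'lint'], ['plan'], ['report']]
```

Adding a new step is just adding an entry to the dict: declare its contracts and it slots into the right batch without touching any other step.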
Key Features
Contracts. The requires/ensures system. This is the core abstraction and it's elegant. Instead of imperative "do this then do that" sequencing, you declare intent and let the runtime handle order. If you've used build systems like Make or Bazel where targets declare dependencies, same energy.
Parallel execution. Steps with independent contracts run simultaneously. No explicit parallelism annotations needed. The contract graph determines what can run in parallel.
Error handling, loops, conditionals. All the control flow you'd expect from a real programming language, expressed in Markdown.
Persistent agents. Long-running agent instances that maintain state across multiple program invocations.
Pipelines. Chain programs together, where one program's output feeds into the next.
Variable assignment. Store intermediate results and reference them later.
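The pipeline and variable-assignment features combine naturally: one program's ensured outputs become the next program's required inputs, carried in a shared context. A minimal Python sketch of that idea, with invented function names and data (the real runtime executes steps inside an AI session, not as Python callables):

```python
# Hedged sketch of pipelining: each step's ensured values are stored
# into a shared context (variable assignment), and the context of one
# program feeds the next program in the chain.
def run_program(program, context):
    """Run each step in order, checking contracts against the context."""
    for step in program:
        missing = [r for r in step["requires"] if r not in context]
        if missing:
            raise ValueError(f"unmet contracts: {missing}")
        context.update(step["run"](context))  # store ensured outputs
    return context

analyze = [{"requires": [], "ensures": ["analysis"],
            "run": lambda ctx: {"analysis": "3 modules, 2 untested"}}]
plan    = [{"requires": ["analysis"], "ensures": ["test_plan"],
            "run": lambda ctx: {"test_plan": f"cover: {ctx['analysis']}"}}]

# Pipeline: chain the programs, threading the context through.
ctx = {}
for program in (analyze, plan):
    ctx = run_program(program, ctx)

print(ctx["test_plan"])  # cover: 3 modules, 2 untested
```

The contract check at each step is what makes the chaining safe: a pipeline whose stages don't line up fails loudly instead of silently feeding a step an input it never received.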
| 📚 Geek Corner |
|---|
| Contract-based vs imperative orchestration. Most AI agent orchestration tools are imperative: "run step 1, then step 2, then step 3." OpenProse is declarative: "step 1 produces X, step 2 needs X and produces Y, step 3 needs Y." The difference matters when programs get complex. In an imperative system, adding a new step means figuring out where it goes in the sequence and manually wiring inputs/outputs. In a contract-based system, you declare what the step needs and produces, and the runtime resolves the rest. This is the same insight that made SQL powerful for databases and React powerful for UIs: describe what you want, not how to get there. The tradeoff is debugging. When the runtime decides execution order, and something goes wrong, understanding why it ran in that order requires understanding the contract resolution algorithm. Imperative systems are easier to debug because the execution order is right there in the code. |
Compatibility
OpenProse isn't tied to a single AI platform. The docs list Claude Code, OpenCode, and Amp with Opus as compatible execution environments. If your AI coding agent can read Markdown files and execute instructions, it can probably run OpenProse programs.
The legacy format was .prose files (v0 syntax). Current versions use standard .md files with YAML frontmatter. There's a prose migrate command if you have v0 programs to update.
Where This Sits in the Ecosystem
OpenProse is solving a different problem than tools like SuperClaude or Ruflo. Those tools configure or orchestrate AI agents. OpenProse programs AI sessions.
It's closer in spirit to GitHub's spec-driven development approach: treat a document as the source of truth and have the AI execute it. But where spec-driven development uses a specification that gets "compiled" into code, OpenProse uses a program that gets run inside an AI session. The specification is the program.
Still in beta (v0.8.1 as of late January 2026, roughly 1,000 stars on GitHub), so it's early days. But the thesis is strong enough that it's worth watching. If AI sessions really are computers, they'll eventually need proper programming languages. OpenProse is placing that bet.
Getting Started
Check the docs at prose.md and the GitHub repo. Write a simple program with a couple of contract-linked steps, run it in your AI session, and see if the mental model clicks for you. If it does, the rabbit hole goes deep.