one of you.
a hundred
of them.

PaellaDoc ADO is the orchestrator that lets one person run multiple projects with 10–100 agents working in parallel — no merge wars, no shared context bleed. Every chat, decision and artifact stays on your machine. Bring any model — Claude, Codex, Gemini, or a local Llama from a cabin in the woods.

FLEET / LIVE · paelladoc ado · in motion
each story = 1 agent · 1 worktree · 1 isolated context.
merge wars: 0. context bleed: 0.

Three things, in this order.

/01 · Bandwidth

orchestrate at scale

Run 10–100 agents in parallel across many projects. Each in its own git worktree, its own isolated context, its own brain. They never step on each other — and you stay one person.

/02 · Sovereignty

the knowledge is yours

Every chat, every decision, every artifact your agents produce stays in plain SQLite on your machine. Your domain expertise, encoded. Not training data for someone else.

/03 · Portability

any model, any time

Today Claude. Tomorrow Codex. Next week a 70B Llama on your laptop in a cabin with no signal. The orchestrator is the constant — the model is a hot-swappable part.
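
In practice the swap is one flag. A sketch using the --backend flag from the demo further down this page; the backend names here are illustrative, not a committed list:

$ paelladoc run --backend=claude               # frontier brain today
$ paelladoc run --backend=mlx:llama-3.3-70b    # local brain tomorrow, same context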

one orchestrator.
n projects. n×k brains.

# your fleet
YOU (1 human)
├── orchestrator · paelladoc-ado · local
│   ├── project: acme/portfolio-v3
│   │   ├── worktree-01 → claude · story-014 · ✓
│   │   ├── worktree-02 → codex  · story-015 · ↻
│   │   └── …16 more
│   ├── project: paelladoc/ado-core
│   │   ├── worktree-01 → codex  · refactor · ✓
│   │   └── …23 more
│   └── project: cabin/local-llm-bench
│       └── worktree-01 → llama-local · ✓ (offline)
└── context_store · sqlite · ~/.paelladoc/
    ├── chats.db       2.4 GB
    ├── decisions.db   180 MB
    └── artifacts/      11 GB
ISOLATED

Every agent runs in its own git worktree. Its own branch, its own filesystem view, its own conversation. Cleanup is automatic. Merge wars stop being a thing.
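
What that isolation looks like in plain git (real commands; the story names are illustrative):

# one agent per worktree: own branch, own filesystem view
$ git worktree add ../wt-story-014 -b story-014
$ git worktree add ../wt-story-015 -b story-015
$ git -C ../wt-story-014 status        # each tree is a normal, independent checkout
$ git worktree remove ../wt-story-014  # cleanup once the story merges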

PARALLEL

You are not the bottleneck. Queue 50 user stories before bed; wake up to 50 worktrees finished, replayed, and ready to review at your speed.

LOCAL

The orchestrator is on your machine. The context is on your machine. The agents call out to whichever brain you choose — or to a local model you self-host.
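
For the self-hosted case, any OpenAI-compatible endpoint will do. A sketch against a local Ollama server (model tag and prompt are illustrative):

$ curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "llama3.1:70b", "messages": [{"role": "user", "content": "summarize story-018"}]}'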

the context
is the asset.
keep it.

The real value of working with AI for months isn't the code it generates. It's the knowledge graph you build along the way: your decisions, your edge cases, the way your domain actually works.

Most agentic tools quietly turn that into someone else's training data. Or they lock it inside a single frontier model — switch and you lose everything.

PaellaDoc ADO inverts it. Your context lives in plain SQLite, on your disk, in formats you can read. Point any model at it — Claude today, Codex tomorrow, a local Llama from a cabin next week. The brain changes; your knowledge doesn't.
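
Because it's stock SQLite, you can audit and back it up with tooling you already have (table layout depends on your install; .tables will show it):

$ sqlite3 ~/.paelladoc/chats.db ".tables"              # see what's actually stored
$ sqlite3 ~/.paelladoc/chats.db "PRAGMA journal_mode;" # prints: wal
$ cp -r ~/.paelladoc /backup/paelladoc-context         # backup is just a copy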

• 100% local • 0 telemetry • SQLite + WAL • bring your own model

ship production
where you're allowed to.

Air-gapped clients. On-prem mandates. Regulated industries that forbid frontier APIs. Or just a cabin off-grid. PaellaDoc ADO keeps the product methodology intact — PRD, epics, stories, acceptance criteria, golden-gate validation — and routes each story to whatever brain your context allows. Same hierarchy. Same evidence trail. Same shipping bar.

  • Local: llama.cpp, Ollama, vLLM, MLX. Hosted: any OpenAI-compatible endpoint
  • Per-project policy: lock to local-only, allow frontier, or mix per AC (sketched below)
  • Cost-aware routing: cheap models for trivial criteria, frontier only when earned
  • Full audit trail in plain SQLite — specs, decisions, traces, artifacts
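
A per-project policy could be as small as this. A hypothetical sketch: the file name and keys are illustrative, not a shipped format:

$ cat .paelladoc/policy.yml    # hypothetical file, illustrative keys
routing:
  default: local          # llama.cpp / ollama / mlx
  trivial_ac: local-small # cost-aware: cheap model for low-stakes criteria
  frontier: opt-in        # claude / codex only where the story earns it
telemetry: off            # nothing leaves the machine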
benchmark · frontier on a desk
DeepSeek V4 Flash 158B-A13B · MoE · 1M context
31–56 tok/s on MacBook Pro M5 Max · 128 GB

Day-0 frontier MoE, running locally. The orchestrator drives it like any other backend — same PRD, same golden gate, same audit trail.

$ paelladoc run --backend=mlx:deepseek-v4-flash
→ context loaded from ~/.paelladoc/store.db
→ resuming story-018 · refactor auth module
→ MacBook Pro M5 Max · 128 GB · 47 tok/s
→ agent ready · offline frontier ✓

built for the solo operator.

01

The portfolio founder

Six side-products. One human. The orchestrator runs each as its own fleet, and the context never bleeds between them.

02

The platform principal

100 stories across a legacy migration. Queue them, assign brains, watch the tree turn green. You see what every agent decided, not just what it shipped.

03

The off-grid hacker

Owns the model weights. Owns the context. Codes from a cabin. PaellaDoc just routes the work.

04

The compliance-bound consultancy

Client code never leaves the laptop. Frontier APIs are opt-in per project. Audit trail in plain SQLite.

05

The model-curious team

Same task, three brains, side-by-side diff. Keep the best output. Stop guessing which model is good at what.

@jlcases
solo founder · CPTO @ Rankia
“I'm one person. I run several products. I don't want to be a manager of agents — I want to be the architect, and let a fleet run.

I also don't want my entire mental model — every conversation, every decision — to live inside a vendor's cloud where it disappears the day I switch tools.

PaellaDoc was built for the new paradigm: context- and spec-driven over the code itself. The architect writes the spec; the system ships the code. That was the productivity thesis. 277★ in April 2025, when nobody was talking about this yet.

Now I'm back. PaellaDoc ADO — chasing 100× for one human.”
[007] · download

run a
fleet.

free · 100% local · no account · macOS apple silicon