
Plan & Delegate

The Build command is how you get work onto the board — whether you’re building something new or fixing something broken. Open it, and the agent asks: “Build a new feature, or fix a bug?” From there, the conversation diverges into the right workflow.

The agent is opinionated. It proposes concrete approaches with rationale, pushes back when something doesn’t add up, and asks focused questions only where the decision genuinely affects the outcome. This is a conversation between peers, not a menu of options.

Before you start, you'll need:

  • A repository with Cate set up (quickstart)
  • A feature idea, a bug to report, or an existing issue to refine

Open the Build command from the dashboard. You can start in one of three ways:

  • From scratch — leave the inputs blank and the agent asks what you want to work on. Choose “Build a new feature” or “Fix a bug.”
  • With a description — type a brief description. The agent routes to the feature or fix path automatically based on intent.
  • From an existing issue — provide an issue number. The agent reads it, determines whether it’s a feature or bug, and enters the right path.

A new tab opens with the Build session.


When you choose “Build a new feature,” you’re in a planning conversation. You bring the product context — what you want to build and why. The agent brings deep analysis of your codebase — what exists, what patterns to follow, where the risk is. Together you produce well-specified issues that agents pick up and implement.

Before saying anything, the agent silently explores your codebase. It reads the areas the work will touch, identifies existing patterns and utilities, checks .cate/research/ for prior context from earlier sessions, and looks for design docs or architectural notes. Only then does it present its analysis.

The agent presents its synthesis — not just echoing back what you said, but its own read of what the work involves, which codebase areas are affected, what existing infrastructure can be reused, and where the risk is. It uses diagrams to communicate structure: dependency graphs, ER diagrams for data model changes, sequence diagrams for request flows. It asks: “Is this understanding right? What am I missing?”

For each major area, the agent proposes a concrete approach and invites discussion:

“For the authentication layer, I’d use the existing session middleware in src/auth/ rather than a standalone service — it already handles cookie parsing and the overhead isn’t justified for something that doesn’t need independent scaling. The main risk is the session store becoming a bottleneck under load, which we’d mitigate with Redis.”

Push back, ask questions, contribute your product context. The agent defers to your domain knowledge but won’t just agree with everything — if something doesn’t make sense technically, it says so.

For UI work, the agent produces ASCII mockups using box-drawing characters during the discussion. You iterate until the layout is agreed. These mockups flow into the issue bodies as spatial contracts the implementing agent must match.
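As an illustration, one of these box-drawing mockups might look like the following (the layout and labels here are invented, not output from the tool):

```
┌────────────────────────────────────┐
│ Notification settings              │
├────────────────────────────────────┤
│ [x] Email on mention               │
│ [ ] Weekly digest                  │
│                                    │
│                 [Cancel]  [Save]   │
└────────────────────────────────────┘
```

Because the mockup is plain text, it survives verbatim in the issue body and diffs cleanly during review.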

Before proposing a decomposition, the agent evaluates whether the work warrants an epic or is better as a single issue.

| Favors an epic | Favors a single issue |
| --- | --- |
| Multiple distinct functional areas | Contained to one module or concern |
| 3 or more independent deliverables | One clear deliverable |
| Multiple PRs needed for reviewability | Fits in one reviewable PR |
| Parallel work by multiple agents is beneficial | Sequential or small enough for one agent |
| Mix of infrastructure and feature work | Straightforward change with clear scope |

The agent presents its assessment with rationale and asks you to confirm. When in doubt, lean toward a single issue — an epic can always be created later if the work turns out larger.

When the work fits in a single issue:

  1. The agent synthesizes the discussion into a focused issue with acceptance criteria, implementation context, and (for UI work) the agreed mockups.
  2. A fresh-context spec review validates the issue is self-contained — could an agent with zero prior context implement this without guessing?
  3. The issue is created on your board. The agent asks where to place it — To Do so agents pick it up immediately, or another status.
  4. The agent asks whether Cate agents should handle the work or a human will.

If the discussion produced insights worth preserving, the agent writes research files to .cate/research/ and commits them so future agents have context.

When the work needs breaking down:

Decomposition — The agent proposes a breakdown: numbered sub-issues with titles, descriptions, and dependency annotations. It identifies the critical path and what can run in parallel. Each sub-issue is a vertical slice: independently shippable, completable in a single reviewable PR. The decomposition follows tested heuristics: data model before consumers, infrastructure before features, same-file conflicts create dependencies. The agent renders a dependency graph diagram and invites refinement: “Should any of these be split or combined?”
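As a sketch, the dependency graph for a hypothetical four-issue epic might render like this (issue numbers and titles are invented; the actual diagram format may differ):

```mermaid
graph TD
    A["#43 Add sessions table (data model)"] --> B["#44 Session middleware"]
    A --> C["#45 Session cleanup job"]
    B --> D["#46 Login UI"]
```

Here #43 is the critical path's root (data model before consumers), while #45 and #46 can proceed in parallel once their blockers merge.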

Spec review — Before creating issues, a fresh-context review checks the assembled spec in isolation — simulating “could an agent with zero prior context execute this?” This catches assumptions from the planning conversation that didn’t make it into the written spec. The review iterates until the spec is self-contained or you resolve remaining gaps.

What gets created:

  • Epic issue with acceptance criteria, the dependency graph diagram, and research file references
  • Epic branch epic/ISSUE-42-feature-name, branched from main. Sub-issue PRs merge here instead of directly to main.
  • Draft tracking PR from the epic branch to main. You review the full epic here as a single unit when all sub-issues are complete.
  • Sub-issues each with acceptance criteria, Blocked by #N dependency annotations, exact file paths, implementation approach, and verification commands. Written so an agent with zero context can implement them without re-researching the codebase.
  • Research files committed to the epic branch with architectural decisions and codebase analysis.
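For illustration, a sub-issue body following this structure might look like the sketch below (the paths, numbers, and commands are invented, not a template the tool guarantees):

```markdown
## Session middleware

Blocked by #43

### Acceptance criteria
- Requests without a valid session cookie receive a 401
- Valid sessions are attached to the request before route handlers run

### Implementation approach
Extend the existing middleware in `src/auth/` rather than adding a new
dependency; reuse the cookie parsing already present there.

### Verification
- `npm test -- src/auth`
- Manual: log in, delete the session cookie, confirm a 401 on refresh
```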

The agent asks where sub-issues should start — To Do for immediate agent pickup, or another status if you want to review the board first.


When you choose “Fix a bug,” the agent switches to a fast triage conversation. The philosophy is capture and move on: the agent gets the bug on the board quickly rather than making you wait for a deep investigation. The worker agent that picks it up does the deep dive.

This path creates issues, not code fixes. If you ask the agent to patch the bug directly, it pushes back — “Build is for capturing bugs and getting them on the board. Want me to create the issue so a work agent can fix it properly with tests and a PR?”

The agent asks: “What’s broken? Describe what you’re seeing versus what you expected.”

It follows up with focused questions — when did this start happening, can you reproduce it reliably, are there error messages or logs? It gathers enough to know where to look without over-interviewing you.

The agent explores the affected area: the module or component the bug touches, recent changes in those files, related tests that should have caught the issue, and any prior context in .cate/research/.

If the root cause isn’t surfacing quickly, the agent doesn’t keep you waiting. It says: “This needs deeper investigation — let me capture what we know and get it on the board.” The worker agent that picks up the issue has the time and context to dig deeper.

The agent presents what it found:

  • Symptoms — what’s happening from the user’s perspective
  • Affected code — which files and modules are involved
  • Root cause — stated directly if obvious. If not: “Needs investigation — here’s what I found so far” with clues and suspects.
  • Bug analysis — the chain of events: the normal flow, where it diverges, and relevant state at each step. Optionally a sequence diagram showing the failure path.

For visual or layout bugs, the agent produces paired ASCII mockups — “Current” (the broken state) and “Expected” (the correct state) — so the implementing agent has spatial context for the fix.
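An invented example of such a pair, for a hypothetical list that fails to wrap:

```
Current (broken)                   Expected
┌──────────────────────┐           ┌──────────────────────┐
│ Search results       │           │ Search results       │
│ item one item two ite│           │ item one             │
│m three               │           │ item two             │
└──────────────────────┘           │ item three           │
                                   └──────────────────────┘
```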

You confirm: “Does this match what you’re seeing? Anything I’m missing?”

The agent proposes how to verify the fix:

  • Automated test — what the failing test should assert. Cate mandates TDD for bug fixes: the worker agent writes a failing test first, then implements the fix. The test ensures the bug can’t silently recur.
  • Manual verification — step-by-step instructions a human can follow to confirm the fix works.

You review the test plan and add any edge cases.

The agent evaluates whether this is a single fix or multiple related problems.

Single fix: one root cause in one module, contained blast radius, one PR can fix it.

Multiple issues: the investigation uncovered several related problems across different modules, or the bug is a symptom of a deeper architectural issue.

The agent creates an issue with:

  • Bug description — symptoms versus expected behavior
  • Reproduction steps — from your report
  • Test plan — failing test assertion and manual verification steps
  • Bug analysis — normal flow, failure path, relevant state. Optionally a diagram attached to the issue.
  • Fix strategy — proposed approach if root cause is known
  • Context — codebase analysis from the triage session

The agent asks where to place the issue and whether Cate agents or a human will handle the fix.

When the investigation reveals a cluster of related problems, the agent creates an epic — the same structure as feature epics: epic issue with dependency graph, sub-issues with the full bug template, epic branch, and draft tracking PR.


Once issues are on your board, work dispatches automatically — code agent tabs claim unblocked issues from To Do and start implementing. Each agent reads the issue’s acceptance criteria and context from your Build session.

Bug fix agents follow the TDD mandate: write a failing test first that reproduces the bug, then implement the fix.

For epics, sub-issue PRs merge to the epic branch. When all sub-issues are complete, the tracking PR is marked ready and the epic moves to Human Review. You review the full epic as one unit and merge to main.
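The branching model above can be sketched as a topology (issue and PR numbers are invented for illustration):

```
main ───────────────────────────────────●  tracking PR merges here
  └── epic/ISSUE-42-feature-name
        ├── PR #43  sub-issue 1 ─┐
        ├── PR #44  sub-issue 2 ─┼── merge into the epic branch
        └── PR #45  sub-issue 3 ─┘
```

Keeping sub-issue PRs off main means partial epics never ship; main only sees the epic as one reviewed unit.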

See Review features for the review workflow and Status lifecycle for the full flow from To Do through Done.