Each capability belongs to one of the three pillars that ship together: the Slack project-manager agent, the coding agent, and automatic PR previews.
An LLM-driven router classifies each request and dispatches it to the right specialist: issue creator, delegator, researcher, commenter, closer, or event narrator. Each specialist has a tight prompt and a narrow set of tools, which keeps behaviour predictable.
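The routing step could be sketched roughly like this. It is an illustrative sketch, not the actual implementation: the function names and the `classify` callable (standing in for the LLM call) are assumptions.

```python
# Hypothetical sketch of the router: an LLM classifies the request into a
# specialist label, and a plain dict dispatches to that specialist. The
# lambdas are placeholders for real specialist agents.
from typing import Callable

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "issue_creator":  lambda text: f"created issue for: {text}",
    "delegator":      lambda text: f"delegated: {text}",
    "researcher":     lambda text: f"researching: {text}",
    "commenter":      lambda text: f"commented: {text}",
    "closer":         lambda text: f"closing: {text}",
    "event_narrator": lambda text: f"narrating: {text}",
}

def route(request: str, classify: Callable[[str], str]) -> str:
    """classify() stands in for the LLM call that returns a specialist label."""
    label = classify(request)
    if label not in SPECIALISTS:
        label = "commenter"  # safe fallback for an unrecognised label
    return SPECIALISTS[label](request)
```

The narrow per-specialist toolset lives behind each entry in the dict, which is what keeps behaviour predictable.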
When a request is vague, the PM pushes back with specific questions instead of guessing. Only once the scope is clear does it create an issue or delegate work — which means developers downstream get well-formed tasks, not one-line tickets.
Every conversation carries a lifecycle phase — intake, open, in-progress, review, revision, done, abandoned. The same phrase (“looks good”) means different things during intake vs. after a preview is ready; the phase lets the router disambiguate.
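A minimal sketch of phase-based disambiguation, using the phases listed above; the interpretation strings are illustrative, not the PM's actual responses.

```python
from enum import Enum

class Phase(Enum):
    INTAKE = "intake"
    OPEN = "open"
    IN_PROGRESS = "in-progress"
    REVIEW = "review"
    REVISION = "revision"
    DONE = "done"
    ABANDONED = "abandoned"

def interpret_looks_good(phase: Phase) -> str:
    """Illustrative only: the same phrase means different things by phase."""
    if phase is Phase.INTAKE:
        return "approve scope and create the issue"
    if phase is Phase.REVIEW:
        return "approve the preview; merge the PR"
    return "acknowledge; no state change"
```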
Status events (CI passing, preview ready, PR closed) use a latest-wins lane; human comments use an append-all lane. The result: the thread stays readable under a burst of events, and nothing anyone wrote gets silently dropped.
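The two-lane behaviour can be sketched with a keyed map for status and a plain list for comments. A sketch under assumed names, not the shipped data model:

```python
from collections import OrderedDict

class Thread:
    """Status events overwrite by kind (latest-wins); comments append."""

    def __init__(self) -> None:
        self.status: "OrderedDict[str, str]" = OrderedDict()
        self.comments: list[str] = []

    def on_status(self, kind: str, text: str) -> None:
        self.status[kind] = text        # a burst of "ci" events collapses
        self.status.move_to_end(kind)   # to the newest one

    def on_comment(self, author: str, text: str) -> None:
        self.comments.append(f"{author}: {text}")  # never dropped

    def render(self) -> list[str]:
        return list(self.status.values()) + self.comments
```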
When the PM has enough scope, it hands the work to the coding agent by posting a /opencode instruction. The agent reads the issue, works inside a branch, and opens the PR — no developer needs to pick the issue up first.
Before opening a new issue, the PM searches existing open issues to avoid creating a duplicate. If it finds a close match, it links the new conversation to the existing work rather than starting a parallel track.
A small GitHub Actions workflow in the target repo listens for the PM’s /opencode comment, creates a branch, runs the agent, and opens the PR when it’s done. From the Slack side, this just looks like “the PR appeared”.
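A workflow along these lines might look like the following. This is a hedged sketch, not the shipped file: the workflow name, branch naming, agent invocation, and flags are assumptions.

```yaml
# Hypothetical sketch of the listener workflow; exact steps and
# flags will differ per repo.
name: opencode
on:
  issue_comment:
    types: [created]
jobs:
  run-agent:
    if: startsWith(github.event.comment.body, '/opencode')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create working branch
        run: git checkout -b "agent/issue-${{ github.event.issue.number }}"
      - name: Run coding agent   # placeholder invocation, not a real flag set
        run: opencode run "Implement issue #${{ github.event.issue.number }}"
      - name: Open PR
        run: gh pr create --fill
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```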
When the reviewer asks for changes in Slack, the PM translates the feedback into an actionable instruction for the coding agent and posts it back on the PR. The agent iterates; the preview refreshes; the cycle repeats until the reviewer is happy.
Every PR deploys to pr-{number}-preview.{domain}. The URL is the same for the life of the PR, so the same Slack message points reviewers at the latest version even after five rounds of changes.
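The URL scheme itself is trivial but worth pinning down; a one-liner matching the pattern above:

```python
def preview_url(pr_number: int, domain: str) -> str:
    """Stable for the life of the PR; only the content behind it changes."""
    return f"pr-{pr_number}-preview.{domain}"
```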
Previews move through pending → building → deploying → ready on new commits, and are deleted when the PR closes. Containers, volumes, and routing labels are cleaned up automatically — no long-lived preview graveyards.
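The lifecycle above amounts to a small state machine. A sketch with assumed event names ("commit", "built", "healthy", "pr_closed"):

```python
# Illustrative transition table for the preview lifecycle.
TRANSITIONS = {
    ("pending", "commit"):    "building",
    ("building", "built"):    "deploying",
    ("deploying", "healthy"): "ready",
    ("ready", "commit"):      "building",  # new commits re-enter the pipeline
}

def step(state: str, event: str) -> str:
    if event == "pr_closed":
        return "deleted"  # teardown is reachable from any state
    return TRANSITIONS.get((state, event), state)  # unknown events are no-ops
```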
Each target repo declares how its previews should run via .productbuilding/preview/config.yml: compose services, health checks, database migrations, post-deploy commands. The orchestrator reads the contract; the repo stays in charge of its own deploy shape.
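A config along these lines might look like the following. The four categories come from the contract described above; every field name and value here is a hypothetical example, not the real schema.

```yaml
# Hypothetical .productbuilding/preview/config.yml
services:          # compose services to bring up
  - web
  - worker
health_check:
  path: /healthz
  timeout_seconds: 30
migrations:
  command: npm run db:migrate
post_deploy:
  - npm run seed:preview
```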
On startup the orchestrator reconciles its SQLite state against Docker and GitHub. If a preview was half-torn-down or a PR was closed while the system was offline, it catches up without manual intervention.
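The reconciliation can be sketched as a three-way set comparison. Assumed inputs: sets of PR numbers from SQLite, from running Docker stacks, and from GitHub's open PRs; the action names are illustrative.

```python
def reconcile(db_previews: set[int], docker_stacks: set[int],
              open_prs: set[int]) -> list[tuple[str, int]]:
    """Sketch of startup catch-up, not the real implementation."""
    actions: list[tuple[str, int]] = []
    for pr in db_previews - open_prs:
        actions.append(("teardown", pr))         # PR closed while offline
    for pr in (db_previews & open_prs) - docker_stacks:
        actions.append(("redeploy", pr))         # state says running; Docker disagrees
    for pr in docker_stacks - db_previews:
        actions.append(("adopt_or_remove", pr))  # containers with no recorded preview
    return sorted(actions)
```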
The PM runs on Anthropic’s Claude by default, with an optional OpenAI-compatible fallback provider (Fireworks, Together, etc.) for resilience. Request timeouts, retry counts, and token budgets are all env-configurable.
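The fallback logic might be shaped like this; `call_claude` and `call_fallback` stand in for real provider clients, and the env var name is an assumption for illustration:

```python
import os
from typing import Callable

def complete(prompt: str,
             call_claude: Callable[[str], str],
             call_fallback: Callable[[str], str]) -> str:
    """Try the primary provider with env-configured retries, then fall
    back to the OpenAI-compatible provider. Illustrative sketch."""
    retries = int(os.getenv("LLM_RETRIES", "2"))  # hypothetical env var
    for attempt in range(retries + 1):
        try:
            return call_claude(prompt)
        except Exception:
            if attempt == retries:
                break  # primary exhausted; switch providers
    return call_fallback(prompt)
```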
A single EC2 instance runs the orchestrator and Traefik, with wildcard TLS handled automatically. The OpenTofu/Terraform module in the repo turns that into a reusable deploy for your own AWS account.
Want the complete catalogue — including smaller features and implementation details?
See all features →