Workflow to build context for coding agents

Posted by wek 7 hours ago


Here’s the workflow my team and I have found works best with coding agents:

- Plan: Write a plan in markdown. Edit this. Iterate. The plan isn’t a throwaway note. It tracks status as work progresses (draft -> in-development -> in-review -> completed), versions with git alongside the code, and serves as the single source of truth. When the agent later implements, it reads this document. When we review the work, we compare against it.
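As a concrete sketch, a plan file in this style might look like the following (the feature, file path, and criteria are hypothetical, borrowed from the notification-system example later in the post):

```markdown
<!-- plans/notification-system.md — tracked in git next to the code -->
# Plan: Notification system

Status: draft  <!-- draft -> in-development -> in-review -> completed -->

## Summary
Real-time notifications pushed to the browser over WebSocket.

## Acceptance criteria
- [ ] Unread count updates without a page refresh
- [ ] Notifications persist across sessions

## Implementation notes
(filled in after the work is reviewed and committed)
```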

- Diagram: Have the agent enrich the plan with architecture diagrams and data models. Edit this. Iterate. These artifacts live alongside the plan and the code. When the agent later implements, it doesn’t need us to re-explain the architecture or the schema. It reads them directly.
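The post doesn't name a diagram format; Mermaid is one plausible choice, since it is plain text that versions in git next to the plan. A sketch of the notification example's architecture might look like:

```mermaid
flowchart LR
    Browser -->|subscribe via WebSocket| Server[Notification server]
    Server -->|read and write| DB[(notifications table)]
    Server -->|push events| Browser
```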

- Mockup: For anything with a UI, we create mockups before touching code. Most of the time we generate interactive HTML/JavaScript. This replaces the Figma-to-engineering handoff entirely. When the agent implements the UI, it already knows what it should look like. No exporting, no describing screenshots in words, no “make it look like the design.”
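A throwaway mockup in this spirit can be a single self-contained HTML file. Everything here is hypothetical (static data, no backend) — just enough for the agent to see what the UI should look like:

```html
<!DOCTYPE html>
<html>
<body>
  <!-- Hypothetical mockup: a notification bell with an unread badge -->
  <button id="bell">🔔 <span id="badge">3</span></button>
  <ul id="list" hidden>
    <li>Build finished</li>
    <li>New comment on your PR</li>
    <li>Deploy succeeded</li>
  </ul>
  <script>
    // Clicking the bell reveals the list and clears the badge
    document.getElementById("bell").addEventListener("click", () => {
      document.getElementById("list").hidden = false;
      document.getElementById("badge").textContent = "0";
    });
  </script>
</body>
</html>
```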

- Tests: We have the agent write tests based on the plan, diagrams, and mockup. We review them, add edge cases, and now we have an executable definition of “done.”
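As a toy illustration of tests as an executable definition of “done”: here is what a reviewed, plan-derived test might look like in Python. The names (`Notification`, `unread_count`) are invented for the example, not from the post:

```python
from dataclasses import dataclass

@dataclass
class Notification:
    user_id: int
    message: str
    read: bool = False

def unread_count(notifications):
    """Count unread notifications — the behavior an acceptance criterion pins down."""
    return sum(1 for n in notifications if not n.read)

# Tests derived from the plan's acceptance criteria, reviewed by a human,
# with an edge case (empty list) added during review:
def test_unread_count_ignores_read():
    items = [Notification(1, "build finished"), Notification(1, "old news", read=True)]
    assert unread_count(items) == 1

def test_unread_count_empty():
    assert unread_count([]) == 0

test_unread_count_ignores_read()
test_unread_count_empty()
```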

- Implement: Now we tell the agent: “Implement the notification system. Run tests after each major change. Keep going until all tests pass.” The agent works iteratively. Implements the database migration from the data model. Runs tests — schema tests pass. Builds the WebSocket server. Implements the frontend. Runs Playwright and catches a CSS issue from the screenshot, fixes it, reruns. Eventually: all green.

- Review the work. When the agent finishes, we review. We click through the changes, see exactly what was added, modified, or removed, and compare it against the plan and mockup.

- Commit: Commit the change. The plan, diagrams, mockups, and tests live in the repo, so they version alongside the code.

- Update the Plan: After committing, we close the loop. We ask the agent to update the plan: status moves from in-development to completed, acceptance criteria get checked off, and any implementation notes get added. If anything doesn't match, either the plan gets updated or we rework the code.

- Update Docs and Website: The agent updates our documentation and our website, keeping everything in sync and up to date.

What I like about this, and why it works, is that each step produces context that the next step consumes. By the time the agent starts writing code, it has the spec, the architecture diagram, the database schema, the mockup, and the test suite. Once it's done coding, we update everything, giving us clean context to build on.

Comments

Comment by jseabra 3 hours ago

The plan-as-living-document idea is the part that actually changes outcomes. Most people treat the spec as a throwaway artifact that gets stale the moment implementation starts. Versioning it alongside the code and closing the loop after each commit turns it into something the agent can actually trust on the next session. That compounds fast across a long project.

The step I'd add between Diagram and Mockup is a constraints doc. Not architecture, not UI, just the things that are off the table: third-party services you won't use, patterns that caused problems before, decisions that were made for non-obvious reasons. Agents are optimistic by default. They'll reach for the clean solution without knowing why you're not using it. A short "don't do X because Y" document prevents a whole category of review comments.

One thing I've noticed with the test-first step: it works best when you write the acceptance criteria in plain language first, before asking the agent to generate the test code. If you hand it to the agent to interpret, it tends to write tests that pass rather than tests that catch failures. The distinction sounds subtle but it shows up in code review constantly.
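The constraints doc the commenter describes could be as short as a few bullets. Every entry below is hypothetical, just to show the "don't do X because Y" shape:

```markdown
<!-- plans/constraints.md — things that are off the table, with reasons -->
# Constraints

- Don't add new third-party services without asking first.
- Don't introduce a message queue; polling is deliberate at our current scale.
- Don't auto-generate database migrations; we write them by hand so they
  stay reviewable (decided after a past incident).
```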