Show HN: Saguaro: CLI that makes Claude Code fix its own mistakes

Posted by Mitchem 2 hours ago


I've been using Claude Code Max and Codex daily and kept hitting the same problem: AI quickly ships working code that has real issues: logic errors, security gaps, subtle regressions. You catch them in review, fix them, but the agent session has already closed. Doesn't it make sense to have the AI fix its own mistakes while it still knows why it made them?

Saguaro is a background daemon that reviews AI-generated code and feeds findings back to the same agent that wrote it. The agent evaluates the critique, and because it knows why it made those decisions in the first place, it self-corrects what's actually wrong.

The flow: you tell Claude Code to build something. Claude writes code. Saguaro's stop hook triggers a background review (the user sees nothing). On the next turn, findings come back to Claude. Claude says "I see some issues with my approach, fixing now" and corrects itself. No human typed anything. No blocking.

It uses your existing Claude Code / Codex / Gemini subscription. No API key needed. No external account. Everything runs locally. The daemon self-spawns on demand and auto-shuts down after 30 minutes of inactivity.
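The idle-shutdown behavior could be sketched roughly like this (an illustrative TypeScript sketch of the idea, not Saguaro's actual implementation; all names are assumptions):

```typescript
// Hypothetical sketch: track last activity and shut down after 30 idle minutes.
const IDLE_LIMIT_MS = 30 * 60 * 1000;

class IdleTimer {
  private lastActivity = Date.now();

  // Called whenever the daemon handles a job or request.
  touch(): void {
    this.lastActivity = Date.now();
  }

  // Checked periodically; when true, the daemon exits.
  isExpired(now: number = Date.now()): boolean {
    return now - this.lastActivity >= IDLE_LIMIT_MS;
  }
}
```

In the real daemon the check would run on an interval and trigger a clean process exit; the sketch only shows the bookkeeping.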

There's also a rules engine for teams that want more deterministic enforcement. You write rules as markdown files with YAML frontmatter, scoped to specific file globs. But the daemon works out of the box with zero rules. It reviews like a senior staff engineer: bugs, security, regressions, dead code. The rules engine adds more precision for teams/individuals that need it.
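A rule file might look something like this (format inferred from the description above; the field names and layout are assumptions, not Saguaro's documented schema):

```markdown
---
scope: "src/api/**/*.ts"
severity: error
---
# No raw SQL in route handlers

Route handlers must go through the query-builder layer. Flag any
string-concatenated SQL passed directly to the database client.
```

The YAML frontmatter scopes the rule to matching file globs; the markdown body describes what the reviewer should enforce.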

Setup is "sag init" + restart CC + go back to coding. That's it.

Apache-2.0. TypeScript.

Comments

Comment by Mitchem 2 hours ago

Hey HN, author here.

The thing that makes this work is where in the loop the review happens. CodeRabbit, Greptile, etc. review at the PR level after the agent is done. The findings go to a human who has to interpret them. The agent that wrote the code never sees the critique. We find that most people just spin up a new agent and ask "Are these review findings correct?" anyway.

Saguaro reviews during the agent's session and sends findings back to the same agent. Because the agent still has its full context window, it knows why it made each decision and can evaluate the findings intelligently. "I made this choice for X reason, but this review shows a gap in my thinking, let me fix that." Or "This finding isn't relevant because of Y." The agent has the context to make that judgment call. That's why false positives are lower.

The daemon is completely invisible to the user. It self-spawns from the Claude Code stop hook, runs a SQLite-backed job queue on localhost, and auto-shuts down after 30 minutes idle. The review happens in the background while the user keeps working. We feed context from the original programming session into the review process. The findings surface on the next stop hook, your agent just starts fixing things.

For teams that want more precision, there's a rules engine: markdown files with YAML frontmatter that enforce specific patterns (architectural boundaries, security invariants, etc). But the daemon works with zero rules out of the box. The rules engine works great for teams with well-defined rules.

Some technical decisions:

- SQLite (via better-sqlite3) as the job queue: the right amount of infrastructure for a local dev tool.
- The daemon reviewer gets the original agent's summary ("the developer described their work as...") for context.
- The reviewing agent gets read-only tools (Read, Glob, Grep) with up to 15 tool calls per review, so it can inspect the full codebase for context but can't edit.
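The read-only-plus-budget constraint boils down to a small gate in front of tool dispatch. A rough sketch (illustrative only; names and structure are assumptions):

```typescript
// Hypothetical sketch: allow only read-only tools, capped at 15 calls per review.
const ALLOWED_TOOLS = new Set(["Read", "Glob", "Grep"]);
const MAX_CALLS = 15;

function makeToolGate() {
  let calls = 0;
  // Returns true if the tool call may proceed.
  return (tool: string): boolean => {
    if (!ALLOWED_TOOLS.has(tool)) return false; // write tools are never allowed
    if (calls >= MAX_CALLS) return false;       // budget exhausted
    calls++;
    return true;
  };
}
```

Rejected calls simply return a refusal to the model, so a runaway reviewer can't edit files or burn unbounded tokens.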

Limitations:

- The daemon review is async. Findings arrive on the next stop hook, not the current one. Fast iterations may miss a cycle.
- Review quality depends on the model. We default to your configured model, but you can override it for the daemon specifically.
- Cost is your normal AI provider usage. `sag stats` tracks it.

Happy to answer technical questions about the architecture.

Comment by A7OM 2 hours ago

Nice work! Claude Code users will love knowing they can also add our MCP server to get live inference pricing directly in their workflow. Useful for cost-aware agent development. a7om.com/mcp

Comment by prallo 2 hours ago

Co-creator of Saguaro here. Just wanted to add that you can configure Saguaro to run just a rules review, just a daemon background review, or both after every pass your coding agent makes. We've seen the rules review complete in anywhere from 1 to 5 seconds on average.
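For a sense of what that toggle might look like, a hypothetical config fragment (the filename and keys here are invented for illustration, not Saguaro's real schema; check the docs for the actual format):

```json
{
  "reviews": {
    "rules": true,
    "daemon": false
  }
}
```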