Claude Code's new hidden feature: Swarms

Posted by AffableSpatula 2 hours ago


Comments

Comment by codethief 1 hour ago

Comment by neom 1 hour ago

Claude Code in the desktop app seems to do this? It's crazy to watch. It sets off these huge swarms of worker readers under master task headings that go off and explore the code base and compile huge reports and todo lists, and then another system behind the scenes seems to compile everything into large master schemas/plans. I create helper files and then have a devops chat, a front end chat, an architecture chat and a security chat, and once each has done its work it automatically writes to a log and the others pick up the log (it seems to have a system reminder process built in that can push updates from one chat into the others). It's really wild to watch it work, and it's very intuitive and fun to use. I've not tried CLI Claude Code, only Claude Code in the desktop app, but the desktop app with SFTP to a droplet and SSH for the terminal is a very, very interesting experience. It can seem to just go for hours building, fixing, checking its own work, loading its work in the browser, doing more work etc., all on its own - it's how I built this: https://news.ycombinator.com/item?id=46724896 in 3 days.

Comment by jswny 59 minutes ago

That’s just spawning multiple parallel explore agents instructed to look at different things, and then compiling the results.

That’s pretty basic functionality in Claude Code.

Comment by neom 55 minutes ago

Sounds like I should probably switch to the Claude Code CLI. Thanks for the info. :)

Comment by deaux 1 hour ago

Sounds very similar to oh-my-opencode.

Comment by joshribakoff 31 minutes ago

This is just subagents, built into Claude. You don’t need 300,000-line tmux abstractions written in Go. You just tell Claude to do work in parallel with background subagents. It helps to have a file for handing off the prompt, tracking progress, and reporting back. I also recommend constraining agents to their own worktrees. I am writing down the pattern here: https://workforest.space. While nearly everyone is building orchestrators, I also noticed Claude is already the best orchestrator for Claude.
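The handoff-file pattern described above can be sketched roughly like this. The schema, file names, and fields here are made up for illustration; the real point is just that a plain file is enough for an orchestrator and a background subagent to exchange state.

```python
import json
import os
import tempfile

def write_handoff(path, prompt, worktree):
    """Orchestrator writes a handoff file for a subagent (hypothetical schema)."""
    handoff = {
        "prompt": prompt,        # the task for the subagent
        "worktree": worktree,    # constrain the agent to its own git worktree
        "status": "pending",     # pending -> done
        "report": None,          # filled in by the subagent when finished
    }
    with open(path, "w") as f:
        json.dump(handoff, f, indent=2)

def report_back(path, report):
    """Subagent marks the task done and records its report for the orchestrator."""
    with open(path) as f:
        handoff = json.load(f)
    handoff["status"] = "done"
    handoff["report"] = report
    with open(path, "w") as f:
        json.dump(handoff, f, indent=2)

# Usage: one task file per subagent, in a scratch directory.
path = os.path.join(tempfile.mkdtemp(), "task-001.json")
write_handoff(path, "Refactor the auth module", worktree="../wt-auth")
report_back(path, "Refactor complete; tests pass")
with open(path) as f:
    print(json.load(f)["status"])  # done
```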

Comment by AffableSpatula 24 minutes ago

Claude already had subagents. This is a new mode for the main agent to be in (bespoke context oriented to delegation), combined with a team-oriented task system and a mailbox system for subagents to communicate with each other. All integrated into the harness in a way that plugins can't achieve.

Comment by Androider 30 minutes ago

Looks like agent orchestrators provided by the foundation model providers will become a big theme in 2026. Wrapping it in terms that are already used in software development today, like team leads and team members, rather than inventing a completely new taxonomy of Polecats and Badgers, will help make it more successful and understandable.

Comment by mohsen1 48 minutes ago

Everyone is wrapping Claude Code in tmux and claiming they're a magician. I'm not so good at marketing, but I've done this here: https://github.com/mohsen1/claude-code-orchestrator

Mine also rotates between Claude and Z.ai accounts as they run out of credits.

Comment by AffableSpatula 43 minutes ago

I think you've misunderstood what this is.

Comment by mohsen1 40 minutes ago

Sorry, you're right. I went through the code and understand now. I'm going to try the patch. Claude Code doing teamwork natively would be amazing!

Honestly, if people in AI coding wrote less hype-driven content and just wrote what they meant, I would really appreciate it.

Comment by bicx 38 minutes ago

Well good sir, I _am_ a tmux magician.

Comment by basedrum 1 hour ago

How is this different from GSD: https://github.com/glittercowboy/get-shit-done

I've been using that and it's excellent

Comment by djfdat 14 minutes ago

Really boils down to the benefits of first-party software from a company with billions of dollars of funding vs. similar third-party software from an individual with no funding.

GSD might be better right now, but will it continue to be better in the future, and are you willing to build your workflows around that bet?

Comment by AffableSpatula 1 hour ago

A similar question was asked elsewhere in the thread; the difference is that this is tightly integrated into the harness.

Comment by MetaMonk 1 hour ago

A guy who worked at Docker on Docker Swarm now works at Anthropic, so that makes sense.

Comment by mohsen1 39 minutes ago

Swarm is actually OpenAI's terminology https://github.com/openai/swarm

Comment by ecto 9 minutes ago

Swarm is actually bee terminology

Comment by brookst 52 minutes ago

Probably a beekeeper in spare time

Comment by MetaMonk 47 minutes ago

He's really into APIary things

Comment by wild_pointer 1 hour ago

Listen team lead and the whole team, make this button red.

Comment by brookst 48 minutes ago

Principal engineers! We need architecture! Marketing team, we need ads with celebrities! Product team, we need a roadmap to build on this for the next year! ML experts, get this into the training and RL sets! Finance folks, get me annual forecasts and ROI against WACC! Ops, we’ll need 24/7 coverage and a guarantee of five nines. Procurement, lock down contracts. Alright everyone… make this button red!

Comment by AffableSpatula 1 hour ago

ha! The default system prompt appears to give the main agent guidance to use swarm mode only when warranted (the same way it decides whether to enter plan mode). You can further prompt it in your own CLAUDE.md to be even more resistant to using the mode if the task at hand isn't significant enough to warrant it.
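For concreteness, the CLAUDE.md guidance could look something like the fragment below. The wording and thresholds are made up; the only idea from the thread is that your own memory file can raise the bar for entering swarm/delegation mode.

```markdown
## Swarm / delegation mode (hypothetical guidance)

Only use swarm mode when the task meets ALL of these:

- it touches several subsystems or ten or more files
- it splits into independent subtasks with no shared state
- a single-agent session would clearly blow out the context

For anything smaller, stay in a normal single-agent session.
```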

Comment by svara 23 minutes ago

I'm a fan of AI coding tools but the trend of adding ever more autonomy to agents confuses me.

The rate at which a person running these tools can properly review and comprehend the output is basically maxed out with just a single thread and a human in the loop.

Which implies that this is not intended to be used in a setting where people will be reading the code.

Does that... Actually work for anyone? My experience so far with AI tools would have me believe that it's a terrible idea.

Comment by dlojudice 1 hour ago

It feels like Auto-GPT, BabyAGI, and the like were simply ahead of their time

Comment by woeirua 1 hour ago

Had to wait for the models to catch up...

Comment by engates 1 hour ago

Isn't this pretty much what Ruv has been building for like two years?

https://github.com/ruvnet/claude-flow

Comment by AffableSpatula 1 hour ago

The difference is that this is tightly integrated into the harness. There's a "delegation mode" (akin to plan mode) that appears to clear out the context for the team lead. The harness appears to add system-reminder breadcrumbs at the top of the context to keep the team lead from drifting, which is much harder to achieve without modifying the harness.
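The breadcrumb idea can be sketched in a few lines. This is a guess at the general technique (re-injecting a fresh reminder each turn), not Claude Code's actual implementation; the tag name and message shape are assumptions.

```python
def with_breadcrumb(messages, team_state):
    """Re-inject an up-to-date system reminder at the top of the context
    each turn so the lead agent doesn't drift (sketch of the idea only)."""
    reminder = {
        "role": "system",
        "content": (
            "<system-reminder>You are the team lead in delegation mode. "
            f"Open tasks: {team_state['open']}. Done: {team_state['done']}. "
            "Delegate work; do not implement it yourself.</system-reminder>"
        ),
    }
    # Drop any stale reminder before prepending the fresh one, so the
    # breadcrumb doesn't accumulate turn after turn.
    rest = [m for m in messages
            if not m["content"].startswith("<system-reminder>")]
    return [reminder] + rest

ctx = [{"role": "user", "content": "Build the billing feature"}]
ctx = with_breadcrumb(ctx, {"open": ["api", "ui"], "done": ["schema"]})
print(ctx[0]["role"])  # system
```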

Comment by estearum 1 hour ago

It's insane to me that people choose to build anything in the perimeter of Claude Code (et al). The combination of their fairly primitive current state and the pace at which they're advancing means there are a lot of very obvious ideas/low-hanging fruit that will soon be executed 100x better by the people who own the core technology.

Comment by AffableSpatula 1 hour ago

yeah I tend to agree. They must be reaching the point where they can automate the analysis of Claude Code prompts to extract techniques and build them directly into the harness. Going up against that is brave!

Comment by lysace 1 hour ago

I'm already burning through enough tokens and producing more code than can be maintained - with just one Claude worker. I feel like I need to move in the other direction: more personal, hands-on "management".

Comment by AffableSpatula 1 hour ago

I've seen more efficient use of tokens by using delegation. Unless you continually compact, or summarise and clear, a single main agent, you end up doing work on top of a large context, burning tokens. If the work is delegated to subagents they have a fresh context, which avoids this while improving their reasoning; both improve token efficiency.

Comment by storystarling 1 hour ago

I've found the opposite to be true when building this out with LangGraph. While the subagent contexts are cleaner, the orchestration overhead usually ends up costing more. You burn a surprising amount of tokens just summarizing state and passing it between the supervisor and workers. The coordination tax is real.

Comment by AffableSpatula 59 minutes ago

Task sizing is important. You can address this by including guidance in the CLAUDE.md around that, i.e. give it heuristics to use to figure out how to size tasks. Mine includes some heuristics and a T-shirt sizing methodology. Works great!

Comment by stuaxo 36 minutes ago

If there's any kind of management, some of it could use small local models - e.g. to see when it looks like it's stuck.

Comment by tom2948329494 1 hour ago

And… how?

Comment by AffableSpatula 1 hour ago

The feature is shipped in the latest builds of Claude Code, but it's turned off by a feature flag check that phones home to the backend to see if the user's account is meant to have it on. You can just patch out the function in the minified cli.js that does this backend check, and you gain access to the feature.
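The general shape of such a patch can be sketched as a regex rewrite over the minified source. The function name and the minified shape below are entirely made up; the actual gate function in cli.js differs per build, so inspect the linked repo's commits (or the file itself) to find the real check.

```python
import re

def patch_gate(source, gate_name):
    """Replace a hypothetical feature-gate function body so it always
    returns true. Non-greedy matching assumes the body has no nested
    braces; real minified code usually needs more care than this."""
    pattern = rf"function {gate_name}\([^)]*\)\{{.*?\}}"
    return re.sub(pattern, f"function {gate_name}(){{return true}}",
                  source, count=1, flags=re.S)

# A made-up stand-in for a minified cli.js fragment:
minified = "var x=1;function checkSwarmFlag(a){return fetch('/flags')}var y=2;"
print(patch_gate(minified, "checkSwarmFlag"))
# var x=1;function checkSwarmFlag(){return true}var y=2;
```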

Comment by bonsai_spool 1 hour ago

Do you know what patch to apply? The GitHub link from the OP seems to have a lot of other things included.

Comment by mohsen1 42 minutes ago

Comment by AffableSpatula 1 hour ago

It's my repo - it's a fork of cc-mirror, which is an established project for parallel Claude installs. I wanted to take the least disruptive approach for the sake of using working code and not spelunking through bugs. That said, if you look through the latest commits you'll see how the patch works; it's pretty straightforward - you could do it by hand if you wanted.

Comment by nehalem 1 hour ago

Answering the question of how to sell more tokens per customer while maintaining ~~mediocre~~ breakthrough results.

Comment by AffableSpatula 1 hour ago

Delegation patterns like swarm lead to less token usage because:

1. Subagents doing the work have a fresh context (i.e. focused, not working on top of a larger monolithic context).

2. A more compact context leads to better reasoning, more effective problem solving, and fewer tokens burned.

Comment by nulone 39 minutes ago

Merge cost kills this. Does the harness enforce file/ownership boundaries per worker, and run tests before folding changes back into the lead context?

Comment by AffableSpatula 35 minutes ago

I don't know what you're referring to, but I can say with confidence that I see more efficient token usage from a delegated approach, for the reasons I stated, provided that the tasks are correctly sized. YMMV of course :)
