Posted by u/unohee 19 days ago
Show HN: OpenSwarm – Multi‑Agent Claude CLI Orchestrator for Linear/GitHub (github.com/Intrect-io/Ope...)
I built OpenSwarm because I wanted an autonomous “AI dev team” that can actually plug into my real workflow instead of running toy tasks. OpenSwarm orchestrates multiple Claude Code CLI instances as agents to work on real Linear issues. It:

• pulls issues from Linear and runs a Worker/Reviewer/Test/Documenter pipeline
• uses LanceDB + multilingual-e5 embeddings for long‑term memory and context reuse (rough sketch below)
• builds a simple code knowledge graph for impact analysis
• exposes everything through a Discord bot (status, dispatch, scheduling, logs)
• can auto‑iterate on existing PRs and monitor long‑running jobs

Right now it’s powering my own solo dev workflow (trading infra, LLM tools, other projects). It’s still early, so there are rough edges and a lot of TODOs around safety, scaling, and better task decomposition.

I’d love feedback on:

• what feels missing for this to be useful to other teams
• failure modes you’d be worried about in autonomous code agents
• ideas for better memory/knowledge graph use in real‑world repos

Repo: https://github.com/Intrect-io/OpenSwarm

Happy to answer questions and hear brutal feedback.
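A minimal sketch of what the memory-layer bullet above is pointing at, assuming the Python lancedb and sentence-transformers packages; the table name, field names, and helper functions are illustrative, not OpenSwarm's actual schema:

    # Illustrative memory layer: embed task context with multilingual-e5 and
    # store/retrieve it from LanceDB. Schema and names are assumptions, not
    # copied from the OpenSwarm repo.
    import lancedb
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("intfloat/multilingual-e5-base")
    db = lancedb.connect("./agent_memory")

    def remember(table_name: str, text: str, issue_id: str) -> None:
        """Store one piece of task context together with its embedding."""
        vec = model.encode(f"passage: {text}").tolist()  # e5 expects a "passage: " prefix
        rows = [{"vector": vec, "text": text, "issue_id": issue_id}]
        if table_name in db.table_names():
            db.open_table(table_name).add(rows)
        else:
            db.create_table(table_name, data=rows)

    def recall(table_name: str, query: str, k: int = 5) -> list[dict]:
        """Fetch the k most similar stored snippets for a new task."""
        vec = model.encode(f"query: {query}").tolist()  # e5 uses a "query: " prefix for queries
        return db.open_table(table_name).search(vec).limit(k).to_list()

The point is just that every agent in the chain reads from and writes to the same store, so later stages stay grounded in the same context as earlier ones.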
csto12 · 19 days ago
Is there a new agent orchestrator posted every day? Is this the new JS framework?
guessmyname · 19 days ago
Yes. Everyone and their grandma wants to build the ultimate panacea of AI so of course you’ll see a myriad of AI-powered products and services on a daily basis until the tech industry as a whole is done with the topic.
himata4113 · 19 days ago
Everyone has different needs. I've made one for oh-my-pi that has file-backed tasks which accept natural language to create jobs (parallelizing them whenever relevant).

Haven't felt the need to show the world tho.

avoutic · 19 days ago
This! I have one with Linear, Nanobot, Claude Code, all automated in a way that works for me.

Welcome to the age of selfware! Where everybody makes what they need! :)

unohee · 19 days ago
Kind of. My point is that agent orchestrators become actually useful when the framework is specific about what's safe to delegate to machines — things that reduce friction in CI/CD operations, not agents that shoot iMessages, click around in browsers, or delete files without approval.
verdverm · 19 days ago
life with tools like openclaw means life with ns;nt abundance

hopefully it dies down as people realize there's more to it than the code

reconnecting · 19 days ago
The timeline is always the same.

Day one: Develop a new agent orchestrator with 70K LOC from Claude.

Day three: Post it on Show HN.

Day four: Get 50–150 stars on GitHub.

Day seven: Never open this repo again.

verdverm · 19 days ago
That's slow; plenty of Claw HN posts pull off the first half in a couple of hours. Best I've seen is 25m.
mihneadevries · 19 days ago
the reviewer/worker pipeline is honestly the part I'm most curious about. like, how do you handle disagreements between agents? does the reviewer just block and the worker retry, or is there a loop with a hard cutoff?

the failure mode I'd worry about most is cascading context drift, where each agent in the chain slightly misunderstands the task and by the time you get to the test agent it's validating the wrong thing entirely. fwiw I think the LanceDB memory is the right call for this kind of setup, keeping shared context grounded is probably what prevents most of those drift issues.

unohee · 19 days ago
The worker-reviewer pipeline typically runs 1–2 self-revision iterations. In my experience, agents handle most tasks fine, but they tend to miss quality gates — docstrings, minor business logic edge cases, that kind of thing. The reviewer catches what slips through on the code quality side.

This is all based on observed behavior from daily Claude Code CLI usage, where I've added hooks specifically to catch systematic failure patterns. OpenSwarm is essentially a productized version of those scaffoldings from my actual workflow — packaged into a more reusable architecture.

On context drift — good call, and yeah, that's exactly why the shared memory layer matters. LanceDB keeps the grounding consistent across the chain so each agent isn't just working off its own drifting interpretation.

As for disagreements: right now the reviewer blocks and the worker retries with feedback, with a hard cutoff to prevent infinite loops. It's simple but it works — the revision depth rarely needs to go beyond 2 rounds. And when it does fail, that's actually the useful signal — especially when you're triaging larger projects, the points where agents break down are exactly where a human engineer needs to step in.

At this point, what OpenSwarm really needs is broader testing from other users to validate these patterns outside my own workflow.
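To make that concrete, here is a rough sketch of the reviewer-blocks / worker-retries loop with a hard cutoff; run_worker and run_reviewer are placeholders standing in for the Claude Code CLI invocations, not OpenSwarm's actual API:

    # Reviewer blocks, worker retries with feedback, hard cutoff prevents
    # infinite loops. The callables are stand-ins for the real CLI calls.
    from typing import Callable

    MAX_REVISIONS = 2  # revision depth rarely needs to go beyond 2 rounds

    def worker_reviewer_loop(
        task: str,
        run_worker: Callable[[str, str], str],                # (task, feedback) -> patch
        run_reviewer: Callable[[str, str], tuple[bool, str]], # (task, patch) -> (approved, feedback)
    ) -> tuple[str, bool]:
        """Return the final patch and whether the reviewer approved it."""
        patch, feedback = "", ""
        for _ in range(MAX_REVISIONS + 1):
            patch = run_worker(task, feedback)
            approved, feedback = run_reviewer(task, patch)
            if approved:
                return patch, True
        # Hitting the cutoff is the useful signal: escalate to a human.
        return patch, False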

das-bikash-dev · 19 days ago
the context isolation approach is smart — cascading drift between agents is a real problem. i run 10 microservices with claude code and solved a similar issue by maintaining curated reference docs that agents read on-demand per task area instead of loading everything. the model escalation on failure (haiku → sonnet) is a nice touch too. do you find the lancedb memory layer actually helps with repeated similar tasks, or is it more useful for the code knowledge graph side?
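For reference, the escalation-on-failure pattern mentioned here can be as simple as a model ladder tried cheapest-first; the ladder entries and the run_agent callable below are placeholders, not anyone's real configuration:

    # Escalate-on-failure: try the cheap model first, move up the ladder only
    # when the attempt fails. Names here are examples, not a real config.
    from typing import Callable, Optional

    MODEL_LADDER = ["haiku", "sonnet"]  # cheap -> capable

    def run_with_escalation(
        task: str,
        run_agent: Callable[[str, str], Optional[str]],  # (model, task) -> result, or None on failure
    ) -> Optional[str]:
        """Attempt the task with each model in order, escalating on failure."""
        for model in MODEL_LADDER:
            result = run_agent(model, task)
            if result is not None:
                return result
        return None  # every rung failed; hand the task back to a human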
vladgur · 19 days ago
have you considered having different models (e.g. codex) do the reviews? i wonder if it presents an opportunity to catch more issues than the same model
unohee · 19 days ago
For the collaboration between two different models — I’d love to explore that. Expanding model compatibility to broader providers (Codex, Aider, and other API models) is already on my roadmap. I’m planning to add a reviewer feature that supports multiple models, configurable simply by adding an API key to the .env file. Thanks for the suggestion!
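A hedged sketch of what that .env-driven reviewer config could look like; the variable names (REVIEWER_PROVIDER and the key variables) are hypothetical, not OpenSwarm's actual settings:

    # Pick the reviewer backend from .env. Variable names are made up for
    # illustration; only the pattern (provider + API key) matters.
    import os
    from dotenv import load_dotenv

    load_dotenv()  # read key/value pairs from a local .env file

    def pick_reviewer() -> tuple[str, str]:
        """Return (provider, api_key) for the reviewer, defaulting to Claude."""
        provider = os.getenv("REVIEWER_PROVIDER", "claude")
        key_var = {"claude": "ANTHROPIC_API_KEY", "codex": "OPENAI_API_KEY"}.get(provider)
        api_key = os.getenv(key_var, "") if key_var else ""
        if not api_key:
            raise RuntimeError(f"No API key configured for reviewer provider '{provider}'")
        return provider, api_key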
kaicianflone · 19 days ago
I’ve been running OpenClaw Docker agents in Slack in a similar setup, using Gemini 2.5 Flash Lite through OpenRouter for most tasks, then Opus 4.6 and Codex 5.3 for heavier lifts. They share context via embeddings right now, but I’m going to try parameterizing them like you suggested because they can drift pretty hard once a hallucinated idea takes off. I’m trying to get to a point where I don’t have to babysit them. I’ve also been thinking about giving them some “democracy” under the hood with a consensus policy engine. I’ve started tinkering with an open-source version of that called consensus-tools that I can swap between agentic frameworks. Checking whether it can work with openswarm too.
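For what it's worth, the “democracy” idea can be prototyped as a plain majority vote between agents; this toy sketch is only an illustration of the policy, not how consensus-tools actually works:

    # Toy majority-vote gate: approve a proposal only if most agents vote for it.
    from collections import Counter
    from typing import Callable, Sequence

    def consensus_decision(
        proposal: str,
        voters: Sequence[Callable[[str], str]],  # each agent returns "approve" or "reject"
    ) -> bool:
        """Approve only when a strict majority votes to approve."""
        votes = Counter(voter(proposal) for voter in voters)
        return votes["approve"] > len(voters) / 2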
