I built OpenSwarm because I wanted an autonomous “AI dev team” that can actually plug into my real workflow instead of running toy tasks.
OpenSwarm orchestrates multiple Claude Code CLI instances as agents to work on real Linear issues. It:
• pulls issues from Linear and runs a Worker/Reviewer/Test/Documenter pipeline
• uses LanceDB + multilingual-e5 embeddings for long‑term memory and context reuse
• builds a simple code knowledge graph for impact analysis
• exposes everything through a Discord bot (status, dispatch, scheduling, logs)
• can auto‑iterate on existing PRs and monitor long‑running jobs
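The memory bullet above is the piece most worth making concrete. As a rough sketch of the idea (not OpenSwarm's actual code — the real thing uses LanceDB with multilingual-e5 embeddings, and all the names below are made up), retrieval boils down to cosine similarity between a new task's embedding and embeddings of past task notes:

```python
# Toy sketch of embedding-based memory recall. In OpenSwarm this would be
# LanceDB + multilingual-e5; here hand-rolled 3-d vectors and stdlib cosine
# similarity stand in for both, just to show the retrieval shape.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# (embedding, memory text) pairs -- embeddings would come from the model
memory = [
    ([0.9, 0.1, 0.0], "ISSUE-12: auth refactor touched session middleware"),
    ([0.1, 0.8, 0.2], "ISSUE-30: flaky websocket test, retry with backoff"),
]

def recall(query_vec, k=1):
    """Return the k most similar past memories for a new task embedding."""
    ranked = sorted(memory, key=lambda m: cosine(query_vec, m[0]), reverse=True)
    return [text for _, text in ranked[:k]]

print(recall([0.85, 0.2, 0.1]))  # the auth-refactor memory ranks first
```

The point is only the shape: embed the incoming Linear issue, pull the nearest past notes, and prepend them to the agent's context so it doesn't rediscover the same landmines every run.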
Right now it’s powering my own solo dev workflow (trading infra, LLM tools, other projects). It’s still early, so there are rough edges and a lot of TODOs around safety, scaling, and better task decomposition.
I’d love feedback on:
• what feels missing for this to be useful to other teams
• failure modes you’d be worried about in autonomous code agents
• ideas for better memory/knowledge graph use in real‑world repos
Repo: https://github.com/Intrect-io/OpenSwarm
Happy to answer questions and hear brutal feedback.
Haven't felt the need to show the world tho.
Welcome to the age of selfware! Where everybody makes what they need! :)
hopefully it dies down as people realize there's more to it than the code
Day one: Develop a new agent orchestration with 70K LOC from Claude.
Day three: Post it on Show HN.
Day four: Get 50–150 stars on GitHub.
Day seven: Never open this repo again.
the failure mode I'd worry about most is cascading context drift, where each agent in the chain slightly misunderstands the task and by the time you get to the test agent it's validating the wrong thing entirely. fwiw I think the LanceDB memory is the right call for this kind of setup, keeping shared context grounded is probably what prevents most of those drift issues.