Posted by u/louiskw 5 months ago
Show HN: Vibe Kanban – Kanban board to manage your AI coding agents (github.com/BloopAI/vibe-k...)
Hey HN! I'm Louis, one of the creators of Vibe Kanban.

We started working on this a few weeks ago. Personally, I was feeling pretty useless working synchronously with coding agents. The 2-5 minutes that they take to complete their work often led me to distraction and doomscrolling.

But there's plenty of productive work that we (human engineers) could be doing in that time, especially if we run coding agents in the background and parallelise them.

Vibe Kanban lets you effortlessly spin up multiple coding agents. While some agents handle tasks in the background, you can focus on planning future work or reviewing completed tasks.

After a few weeks of internal dogfooding and sharing it with friends, we've now open-sourced Vibe Kanban, and it's stable enough for day-to-day use.

I'd love to hear your feedback. Feel free to open an issue on the GitHub repo and we'll respond ASAP.

gpm · 5 months ago
Hmm, analytics appear to default to enabled: https://github.com/BloopAI/vibe-kanban/blob/609f9c4f9e989b59...

It is harvesting email addresses and github usernames: https://github.com/BloopAI/vibe-kanban/blob/609f9c4f9e989b59...

Then it seems to track every time you start/finish/merge/attempt a task, and every time you run a dev server, including which executors you are using (I think this means "claude code" or the like), whether attempts succeeded and their exit codes, and various booleans like whether a project is an existing one or whether you've set up scripts to run with it.

This really strikes me as something that should be (and in many jurisdictions legally must be) opt-in.
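
For comparison, an opt-in gate is tiny. A minimal sketch of what "opt-in" means here (hypothetical names, not the project's actual code):

    # Illustrative only: telemetry stays off unless the user explicitly enables it.
    import os

    def telemetry_enabled() -> bool:
        # VIBE_KANBAN_TELEMETRY is a made-up setting; default is off when unset.
        return os.environ.get("VIBE_KANBAN_TELEMETRY", "off") == "on"

    def track(event: str, props: dict) -> None:
        if not telemetry_enabled():
            return  # drop the event; nothing leaves the machine
        ...  # send to the analytics backend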

louiskw · 5 months ago
That's fair feedback. I have a PR with a very clear opt-in here: https://github.com/BloopAI/vibe-kanban/pull/146

I will leave this open for comments for the next hour and then merge.

TeMPOraL · 5 months ago
Nice, I vote for merging it :).

It really doesn't hurt to be honest about this and ask up-front. This is clear enough and benign enough that I'd actually be happy to opt-in.

smcleod · 5 months ago
Good on you for taking action on this kind of feedback!
bn-l · 5 months ago
Thanks, really appreciate the heads up. I put devs who do this on a personal blacklist for life.

I also think this would be better as an MCP tool/resource. Let the model operate and query it as needed.

willsmith72 · 5 months ago
It's the email/username harvesting that you mean, right? Or do people also have something against anonymised product analytics?
swyx · 5 months ago
Could you point me to which jurisdictions require analytics opt-in, especially for open source devtools? That's not actually something I've seen as a legal requirement, more a community preference.

E.g. OK, we all know about EU website cookie banners, but I am more ignorant about devtools/CLIs sending back telemetry. Any actual laws cited here would update me significantly.

47282847 · 5 months ago
GDPR is not about cookies but about privacy in general. It’s an easy read, and yes, it applies to software and telemetry as much as it applies to websites and cookies, and it applies to anyone providing services and tools to Europeans.

"Personal data is information that relates to an identified or identifiable individual. If you cannot directly identify an individual from that information, then you need to consider whether the individual is still identifiable. You should take into account the information you are processing together with all the means reasonably likely to be used by either you or any other person to identify that individual."

gpm · 5 months ago
I mean, you've labelled one big one already with the GDPR, which covers a significant fraction of the world, and unlike your average analytics, "username and email address" sounds unquestionably like identifying/personal information.

Where I live I think this would violate PIPEDA, the Canadian privacy law that covers all businesses that do business in any Canadian province/territory other than BC/Alberta/Quebec (which all have similar laws).

There's generally no exception in these for "open source devtools"; laws are typically still laws even if you release something for free. The Canadian version has an exception for entirely non-commercial organizations (though I don't think the GDPR does), but Bloop AI appears to be a commercial organization, so it wouldn't apply. It also contains an exception for business contact information, but as I understand it that is not interpreted broadly enough to cover random developers' email addresses just because they happen to be used for a potentially personal GitHub account.

Disclaimer: Not a lawyer. You should probably consult a lawyer in the relevant jurisdiction (i.e. all of them) if it actually matters to you.

jjangkke · 5 months ago
The analytics stuff is fine, but the email/GitHub username harvesting appears to be illegal, especially if it's done without notifying the user?

Great catch; many open source projects appear to be just elaborate lead-gen tools these days.

janoelze · 5 months ago
Fork, task Claude to remove all GitHub dependence, build.
gpm · 5 months ago
I did this locally to try it out :) Also stubbed out the telemetry and added jj support. "Personalizing" software like this is definitely one of LLMs' superpowers.

I'm not particularly inclined to publish it because I don't want to associate myself with a project harvesting emails like this.

hsbauauvhabzb · 5 months ago
Use a telemetry backed tool to remove telemetry from another telemetry backed tool?
swalsh · 5 months ago
I built something similar for my own workflow. Works okay. The hard part is that as you scale, you end up with compounded false positives: the model adds some fallback mechanism that makes it work, tests pass, etc. The nice part is you can ask models to review the code from others and call out fallbacks, hard-coding, stuff like that. It does a good job at identifying buried bodies. But if you dig up a buried body, I'd manually confirm it was properly disposed of, as the models usually hid the body in the first place because they needed some input they didn't have, got confused, or ran into an issue.
oc1 · 5 months ago
We need something like a kitchen brigade in software: one who writes the vibe code tickets (Chef de Vibe), one who reviews the vibe code (Sous-Vibe), one who oversees the agents and restarts them if they get hung up (Agent de Station). We could theoretically smash a thousand tickets a day with this principle.
ggordonhall · 5 months ago
Completely agree!

You can actually use a coding agent to create tickets from within Vibe Kanban. Add the Vibe Kanban MCP server (from MCP settings) and ask the agent to plan a task and write tickets.

atavistically · 5 months ago
c.f. "Surgical Team" in 'The Mythical Man-Month' by Fred Brooks. That book is perennially relevant.
lharries · 5 months ago
I used this last week and it's excellent; it feels like the same productivity increase as when I first used Cursor.

Are you thinking of doing a hosted version so I can have my team collab on it?

And I found I could open lots of PRs at once, but they often need to be dependent on each other, and then I want to make a change to the first one. How are you thinking of better managing that flow?

louiskw · 5 months ago
Yeah, I think giving the option to move execution to the cloud makes a lot of sense. I already find my MacBook slowing down after 4 concurrent runs, mainly from rustc.

Also, now that we're pushing many more PRs, I think we definitely need better ways to stack and review work. Will look into this ASAP.

hddbbdbfnfdk · 5 months ago
Very productive increase sirs! Whole team well promoted.
adastra22 · 5 months ago
> AI coding agents are increasingly writing the world's code and human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks.

Is this really the case?

sexeriy237 · 5 months ago
No, if we can review 10 PRs a day and AI writes one of them, we now have to review 11 PRs
barbazoo · 5 months ago
> human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks

This feels like much too broad a statement to be true.

bwfan123 · 5 months ago
> AI coding agents are increasingly writing the world's code and human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks.

This tactic is called "assuming the sale", i.e., make a statement as if it is already true and put the burden on the reader to negate it. The majority of us are too scared of what others think and go along by default. It is related to the FOMO tactic, in that the two can be used in conjunction to make a double whammy. For example, the statement above could have ended with: "and everyone is now using agents to increase their productivity, and if you aren't using them, you are left behind".

Glad you stood up to challenge it.

skeeter2020 · 5 months ago
I'll add that often not adding the last part is even MORE powerful: "and everyone is now using agents to increase their productivity..."
lazarus01 · 5 months ago
> human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks

> > This feels like much too broad a statement to be true.

This is just what they wish to be true.

lbrito · 5 months ago
I wonder how demographics (specifically age) tie into this. I'm well into my 30s and I found that statement absurd, but perhaps it is basically universally true among recent grads.
bigfishrunning · 5 months ago
Maybe it is; the next few years are going to get really rough for them, since they'll develop no skills outside of AI.
ljm · 5 months ago
I wouldn't say it's the majority of my time, but the most utility I've got out of AI is using MCP to deal with the boring shit: update my Jira tickets to in progress/in review, read feedback on a PR and address the trivial shit, check the CI pipeline and make it pass if it failed, and write commits in a consistent, descriptive way.

It's a lot more hands-on when you try to write code with it, which I still try out, but only because I know exactly what the solution is and I'm just walking the agent towards it and improving how I write my prompts. It's slower than doing it myself in many cases.

rvz · 5 months ago
I read that too, and these are the kind of statements that really tell you what happens when a profession embraces mediocrity and accepts something as crass as "Vibe-coding", which is somehow going to change "software engineering" even when adding so-called "AI agents", which makes it worse.

All this cargo-culting is done without realizing that more code means more security issues, more technical debt, more time for humans to review the mess, and *especially* more testing.

Once again, Vibe-coding is not software engineering.

skeeter2020 · 5 months ago
And I came into the industry when software was not engineering. I still think this is mostly true (you can call yourself an engineer when you insure your product).
dhorthy · 5 months ago
I feel so strongly that this will rapidly become true over the next 6 months. If you don't believe me, check out Sean Grove's talk from mid-June: https://www.youtube.com/watch?v=8rABwKRsec4

uxamanda · 5 months ago
If you use GitLab, you can use the command-line "glab" tool to have agents work from the built-in kanban. They can open and close tasks, start MRs off of them, etc. It's not as integrated as this tool, but it works well with a mix of humans and robots.
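As a rough sketch of that flow (assuming glab is installed and authenticated; the exact flags are from memory and may vary by version, so check glab's help output):

    # Drive GitLab issues/MRs from a script by shelling out to the glab CLI.
    import subprocess

    def sh(*args: str) -> str:
        return subprocess.run(args, check=True, capture_output=True, text=True).stdout

    print(sh("glab", "issue", "list"))           # open issues for an agent to pick up
    print(sh("glab", "mr", "create", "--fill"))  # MR from current branch; --fill autofills from commits (may still prompt without extra flags)
    print(sh("glab", "issue", "close", "123"))   # close a (made-up) issue once its MR is merged
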
louiskw · 5 months ago
Interesting, hadn't heard of that. Would better GitLab support be useful in Vibe Kanban?
PaulIH · 5 months ago
Yes, being able to use GitLab as a provider would mean that we would jump on the tool, being GitLab-based ourselves. :)
deepdarkforest · 5 months ago
This is a launch by a YC company that converts enterprise COBOL code into Java. Maybe it's my fault, but I tried every single coding agent with a variety of similar tools, and whenever I try to parallelize, they clash while editing files simultaneously, I lose mental context of what's going on, they rewrite tests, etc.

It's chaos. That's fine if you are vibe coding an unimportant Next.js/Vercel demo, but I'm really sceptical of this whole stance that you should be proud of how abstracted you are from the code. A kanban board to just shoot off as many tasks as possible and quickly read over the PRs is crazy to me. If you want to appear to be a serious company that should be allowed to write enterprise code, IMO this path is so risky. I see this in quite a few podcasts, tweets, etc.: people bragging about how abstracted they are from their own product now. Again, maybe I am missing something, but all of this GitHub Copilot / "just review 10 coding agent PRs" stuff just introduces so much noise and slop. Is that really what you want your image to be as a code company?

unshavedyak · 5 months ago
> Maybe it's my fault, but I tried every single coding agent with a variety of similar tools, and whenever I try to parallelize, they clash while editing files simultaneously, I lose mental context of what's going on, they rewrite tests, etc.

FWIW, Claude suggests using separate git worktrees for your agents. This would entirely solve the clashing, though the branches may still conflict and need normal git conflict resolution, of course.
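
A minimal sketch of that setup (made-up task names, not any tool's built-in workflow):

    # One git worktree + branch per agent, so parallel agents never edit the
    # same working copy.
    import subprocess

    def git(*args: str) -> None:
        subprocess.run(["git", *args], check=True)

    for task in ["fix-login-bug", "add-csv-export"]:
        # Separate checkout of the same repo on its own branch.
        git("worktree", "add", f"../agent-{task}", "-b", task)

    # Point each agent at its own ../agent-<task> directory; merge branches back
    # (resolving conflicts) as usual, then clean up with `git worktree remove`.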

Theoretically that would work fine, as it would be just like two people working on different branches/repos/etc.

I've not tried that though. AI generates way too much code for me to review as it is, several subtasks working concurrently would be overwhelming for me.

helsinki · 5 months ago
This works in theory and somewhat in practice, but it is not as clean as people make it seem. As someone who has spent tens of thousands on Opus tokens and worktrees: it's just not that great. It works, but it's just, ugh, boring, super tedious, etc. At the end of it all, you're still sitting around waiting for Claude to resolve merge conflicts.
louiskw · 5 months ago
This is a bet on a future where code is increasingly written by AI and we as human engineers need the best tools to review that work, catch issues and uphold quality.
deepdarkforest · 5 months ago
I don't disagree, but the current sentiment I was referring to seems to be "maximize AI code generation with tools helping you to do that" rather than "prioritize code quality over AI leverage, even if it means limiting AI use somewhat."
codingdave · 5 months ago
It is not just chaos, it is an unwanted product. Don't misunderstand: people would love this product if it worked. But AI cannot do this yet. Products like this are built on the assumption that AI has matured enough to actually succeed at all tasks. But that simply isn't true. Vibe coding is still slop.

AI needs to do every single step of this type of flow to an acceptable quality level, with high standards on that definition of "acceptable", and then you could bring all the workflow together. But doing the workflow first and assuming quality will catch up later is just asking for a pile of rejections when you try to sell it.

I'm not just making this up, either... I've seen and talked to numerous people over the last couple of years who all came up with similar ideas. Some even had workable prototypes running. And they had sales from mom/friends/family connections, but when they tried to get "real" sales, they hit walls.