We’ve come up with a somewhat different concept for what a coding agent should be: we believe it should work in tandem with the developer inside the IDE. As the technology improves, it will gain more and more autonomy.
We’ve been using our coding agent internally and have seen a 5-10x boost on some tasks.
The agent is available now to Codiumate VS Code users. We want to hear what kinds of tasks it works well on so we can expand that set over time. Would love to get feedback.
https://marketplace.visualstudio.com/items?itemName=Codium.c...
That's my experience with co-pilots too:
- Generating tests
- Generating functions consistent with the prevailing style of similar functionality in the existing codebase. The greater the consistency, the more helpful the AI's suggestions.
- Telling me why my code is crap by adding a `# todo: ` above some code and seeing what the AI suggests should be changed :)
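That last nudge trick can be sketched in a few lines. This is a hypothetical example (the function and its rewrite are mine, not from the thread): you leave a bare `# todo:` comment above code you suspect is clumsy, and let the assistant's completion serve as the critique.

```python
# Suspect code with a bare nudge comment for the AI assistant:
def total(prices):
    # todo:
    result = 0
    for i in range(len(prices)):
        result = result + prices[i]
    return result

# A copilot will often complete the todo with a critique and suggest
# the idiomatic rewrite, e.g.:
def total_idiomatic(prices):
    return sum(prices)
```

Both versions behave the same; the point is that the empty comment invites the assistant to explain what it would change.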
What other tasks do you see as good targets for 5-10x boosts?
AI will not replace devs. Devs that use AI will replace devs that do not use AI.
The most effective devs will be those employing a fleet of AI agents, acting as the glue and guiding hand for what the agents should produce.
This helps us get to that future, so I think this has legs.
I use VS code. I will try this out.
However, Code Review Doctor is more of a "this MIGHT be a problem. have you considered..." rather than "it's wrong"
For example, I wonder how many errors would have been found if the definition of a format string was the default? That is, how many times would people have written something like "hello {previously-defined-variable}" and not meant to substitute the value of that previously defined variable at runtime?
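To make the failure mode concrete: in Python, a string literal with braces does not interpolate unless it carries the `f` prefix, which is exactly the ambiguity the question turns on. A minimal sketch (variable names are illustrative):

```python
greeting = "world"

# No f prefix: the braces are literal text, nothing is substituted.
plain = "hello {greeting}"

# With the f prefix: the variable's value is substituted at runtime.
fstr = f"hello {greeting}"

print(plain)  # hello {greeting}
print(fstr)   # hello world
```

If substitution were the default, every literal brace pair naming an in-scope variable would silently interpolate, so the question is really how often braces in plain strings are intentional versus a forgotten `f`.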
FWIW my reaction was a classic case of expectations not meeting reality: weeks of work on (what I thought) was a mutually beneficial thing. I naively wasn't expecting negative responses and was ill-prepared when you raised valid concerns I had not considered.
Again, I am working on that, and I'm sorry I was passive-aggressive toward you.