This describes like 99% of my CC interactions: working on top of a well-structured codebase, it just works for almost any task I throw at it.
I’m a guy who likes to DO to validate assumptions. If there’s a task about how something should be written concurrently to be efficient, with some post-processing to combine the results, etc., then before Claude Code I’d write a scrappy prototype (think a single MVC “slice” through all the distinct layers, but in a single Java file) to experiment, validate assumptions, and uncover the unknown unknowns.
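A minimal sketch of what such a throwaway slice might look like, assuming a fan-out/combine shape for the task (all names and the toy workload here are hypothetical; the real work and post-processing steps would differ):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

// Hypothetical single-file prototype: fan work out to a thread pool,
// then combine the partial results in a post-processing step.
public class ConcurrentSlicePrototype {

    // Worker: stand-in for an expensive per-chunk computation.
    static int processChunk(int chunk) {
        return chunk * chunk;
    }

    // Post-processing: combine partial results into one value.
    static int combine(List<Integer> partials) {
        return partials.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Fan out: one task per chunk.
            List<Future<Integer>> futures = IntStream.rangeClosed(1, 5)
                    .mapToObj(i -> pool.submit(() -> processChunk(i)))
                    .toList();
            // Gather partial results (blocks until each task finishes).
            List<Integer> partials = futures.stream()
                    .map(f -> {
                        try {
                            return f.get();
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    })
                    .toList();
            System.out.println(combine(partials)); // 1+4+9+16+25 = 55
        } finally {
            pool.shutdown();
        }
    }
}
```

The point of a slice like this is not the code itself but that running it surfaces the unknown unknowns (ordering, error handling, how partials should actually be merged) before anything gets formalized.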
It’s how I approach programming and always will. I think writing a spec as an issue or ticket about something without getting your hands dirty will always be incomplete and at odds with reality. So I write, prototype and build.
With a “validated experiment” in hand, I’d still need a lot of cleanup and reworking to make it production-ready. Now that’s a prompt! The learning is still the process of figuring things out and validating assumptions, but the “translation to formal code” part is basically solved.
Obviously, it’s also a great unblocking mechanism when I’m stuck on something, be it a complex query, or me FEELING an abstraction is wrong but not seeing a better one, etc.
Can't really get value out of reading this if you don't compare it to the leading coding agent.