Even the small presentations we gave to execs or the board were checked for errors so many times that nothing could possibly slip through.
The most positive metaphor I have heard for why LLM coding assistance is so great is that it's like having a hard-working junior dev who does whatever you ask and doesn't waste time reading HN. You still have to check the work, there will be some bad decisions in there, and the code maybe isn't that great, but you can tell it to generate tests so you know it is functional.
OK, let's say I accept that 100% (I personally haven't seen evidence that LLM assistance is really even up to that level, but for the sake of argument). My experience as a senior dev is that adding juniors to a team slows down progress and makes the outcome worse. You only do it because that's how you train and mentor juniors to be able to work independently. You are investing in the team every time you review a junior's code, give them advice, answer their questions about what is going on.
With an LLM coding assistant, all the instruction and review you give it is just wasted effort. It makes you slower overall: you spend a lot of time explaining code and managing/directing something that not only doesn't care but can't even remember what you said for the next project. And the code you get out, in my experience at least, is pretty crap.
I get that it's a different and, to some, interesting way of programming-by-specification, but as far as I can tell the hype about how much faster and better you can code with an AI sidekick is just that -- hype. Maybe that will be wrong next year, maybe it's wrong now with state-of-the-art tools, but I still can't help thinking that the fundamental problem, that all the effort you spend on "mentoring" an LLM is just flushed down the toilet, means that your long-term team health will suffer.
I think that betrays a fundamental misunderstanding of how AI is moving the goalposts in coding.
Software engineering has operated under a fundamental assumption that code quality is important.
But why do we value the "quality" of code?
* It's easier for other developers (including your future self) to understand, and easier to document.
* It's easier to change when requirements change.
* It's more efficient with resources and performs better (CPU/network/disk).
* It's easier to develop tests for if it's properly structured.
AI coding upends a lot of that, because all of those goals presume a human will, at some point, interact with that code in the future.
But the whole purpose of coding in the first place is to have a running executable that does what we want it to do.
The more we focus on the requirements and guiding AI to write tests to prove those requirements are fulfilled, the less we have to actually care about the 'quality' of the code it produces. Code quality isn't a requirement; it's a vestigial artifact of human involvement in communicating with the machine.
We appreciate your leadership and collaboration on Spegel and see your project solving a real challenge for the cloud native community. I wanted to thank you for your blog post https://philiplaine.com/posts/getting-forked-by-microsoft/, let you know what we’re doing, and address a few points.
We’ve just raised a pull request https://github.com/Azure/peerd/pull/110 amending the license headers in the source files. We absolutely should have done better here: our company policy is to maintain copyright headers in files, and we have now added headers attributing your work.
I also wanted to share why we felt making a new project was the appropriate path: the primary reason peerd was created was to add artifact streaming support. When you spoke with our engineers about implementing artifact streaming, you said it was probably out of scope for Spegel at that time, which made sense. We made sure to acknowledge the work in Spegel and that it was used as a source of inspiration for peerd, which you noted in your blog, but we failed to give you the attribution you deserve. That was a mistake, and I’m sorry. We hear you loud and clear and are going to make sure we improve our processes to help us be better stewards in the open-source community.
Thanks again for bringing this to our attention. We will improve the way we work and collaborate in open source and are always open to feedback.
It seems like it would have been a much better strategy to add artifact streaming, submit a pull request, and then, if the maintainer isn't interested in adding it, proceed with a fork.
"Probably out of scope" sounds like "I dont have time to implement a feature of that scope"
We could call postage for the USPS a tax too, but nobody thinks of it that way.
Semver notation, rather than branches or tags, is a great solution to this problem. Specify the version range you want, let the package manager resolve it, and then periodically update all of your packages. It would also improve build stability.
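To make that concrete, here's a minimal sketch in TypeScript of what a package manager does when it resolves a semver range, using the npm "semver" package (the dependency name and version list are made up for illustration):

```typescript
// npm install semver @types/semver
import semver from "semver";

// Published versions of a hypothetical dependency.
const published = ["1.2.3", "1.3.0", "1.3.1", "2.0.0"];

// "^1.2.3" means: any version >= 1.2.3 but < 2.0.0,
// i.e. take bugfix and minor releases, but never a breaking major.
const range = "^1.2.3";

// The resolver picks the highest published version satisfying the range.
const resolved = semver.maxSatisfying(published, range);
console.log(resolved); // "1.3.1" -- 2.0.0 is excluded as a breaking change
```

Because the range is re-resolved on each update, a routine refresh picks up 1.3.1 automatically while 2.0.0 waits for a deliberate range bump.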
Your build should always pin GitHub Actions by commit hash, not by version tag: a tag like v4 is mutable and can be repointed at different (potentially compromised) code, while a commit SHA is immutable.
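As a hedged sketch of how you might find the SHA to pin, here's a small TypeScript (Node 18+) script that resolves a tag to its commit SHA via GitHub's public REST API; actions/checkout and v4 are just example inputs:

```typescript
// Resolve a GitHub Actions tag to the immutable commit SHA to pin.
// Note: unauthenticated API calls are rate-limited.
async function resolveTagToSha(repo: string, tag: string): Promise<string> {
  const res = await fetch(`https://api.github.com/repos/${repo}/commits/${tag}`);
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const commit = (await res.json()) as { sha: string };
  return commit.sha;
}

// Then in the workflow, replace
//   uses: actions/checkout@v4
// with
//   uses: actions/checkout@<sha printed below>  # v4
resolveTagToSha("actions/checkout", "v4").then(console.log);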