I’m using a mix of Gemini, Grok, and GPT to translate some MATLAB into C++. It is kinda okay at its job, but not great? I am rapidly reading Accelerated C++ to get to the point where I can throw the LLM out the window. If it were Python or Julia I wouldn’t be using an LLM at all, because I know those languages. AI is barely better than me at C++, and I’m only halfway through my first-ever book on it. What LLMs are these people using?
The code I’m translating isn’t even that complex - it runs analysis on ECG/PPG data to implement this one dude’s new diagnosis algorithm. The hard part was coming up with the algorithm; the code is simple. And the shit the LLM pours out works kinda okay, but not really? I have to do hours of fix-up work on its output. I’m doing all the hard design work myself.
I fucking WISH I could work only on biotech and research and send the code off to an LLM. But I can’t, because they suck, so I gotta learn how computer memory works so my C++ doesn’t eat up all my PC’s memory. What magical LLMs are y’all using??? Please send them my way! I want a free LLM therapist and a programmer! What world do you live in?? Let me in!
(But for real, a good test suite seems like a great place to start before letting an LLM run wild... or alternatively just do what you're doing. We definitely respect textbook-readers more than prompters!)
It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative. It takes stochastic neural nets and mashes them together in bizarre ways to see if anything coherent comes out the other end.
And the reaction is a bunch of Very Serious Engineers who cross their arms and harrumph at it for being Unprofessional and Not Serious and Not Ready For Production.
I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.
Maybe it's because we also have suits telling us we have to use neural nets everywhere for everything Or Else, and there's no sense of fun in that.
Maybe it's the natural consequence of large-scale professionalization - of stock option plans and RSUs and levels and sprints and PMs - and today's gray hoodie is just the gray suit of the past, updated but no less dry of imagination.
I suppose that has little to do with the technical merits of the work, but it's such a bad look, and it makes everyone boosting this stuff seem exactly as dysregulated/unwise as they've appeared to many engineers for a while.
A few weeks ago I met Sean Goedecke for lunch; he uses LLMs a bunch and is clearly a serious adult. But half the folks being shoved in front of everyone are behaving totally manic, and people are cheering them on. Absolutely blows my mind to watch.
https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...
"LLMs drastically decrease the cost of experimenting during the very earliest phases of a project, like when you're trying to figure out if the thing is even worth building or whether a specific approach might yield improvements, but they lose efficacy once you're past those stages. You can keep using LLMs sustainably with a very tight loop of telling them to do the thing, then cleaning up the results immediately, via human judgement."
I.e., I don't think he can relate at all to the experience of letting them run wild and getting a good result.
That's not preferring good software to bad software, though. For a value to be meaningful when expressed, it has to involve some kind of trade-off. If you value honesty over safety but are never put in a situation where you have to choose between honesty and safety, then that value is fairly meaningless.
Edit: The title of the post originally started with "If code is free,"