I'm not sure what your "empirical evidence and repeatable tests" are supposed to be. The AI failing to convert a 3000-line C program to Python, in a test you probably designed to fail, doesn't strike me as particularly relevant.
Also, I suspect the AI could guess that 80 lines of Python aren't correctly replicating 3000 lines of C, if you prompted it correctly.
For some definition of "works". This seems to be yours:
> I'd go further and say vibe coding it up, testing the green case, and deploying it straight into the testing environment is good enough. The rest we can figure out during testing, or maybe you even have users willing to beta-test for you.
> This way, while you're still on the understanding part and reasoning over the code, your competitor already shipped ten features, most of them working.
> Ok, that was a provocative scenario. Still, nowadays I am not sure you even have to understand the code anymore. Maybe having a reasonable belief that it does work will be sufficient in some circumstances.
It becomes a huge pain in the bum as soon as you have to move anything beyond trivial types around, because you have to manually allocate memory in the WASM runtime and shuffle the bytes yourself. The byte representation also needs to be understandable by the source language of your WASM code (the V language in my case). This is why these WASM runtimes use ints in their README examples; anything else would look, and be, horrendously complex.
If you're looking at WASM for plugin development in the backend, I would look for something that isn't generic WASM but is built around your source language, where the WASM aspect is an under-the-hood detail.
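To make that concrete, here is a rough sketch of what "just pass a string to the plugin" looks like from the host side, using wasmtime's Python bindings (my plugins were V, but the shape is similar with any generic runtime). The plugin.wasm file and its exports (memory, alloc, count_words) are assumptions for illustration; the guest has to provide its own allocator, because only numbers cross the boundary.

    # Rough sketch: calling a hypothetical plugin.wasm that exports
    # "memory", "alloc" and "count_words" (all assumed for illustration).
    from wasmtime import Engine, Store, Module, Instance

    engine = Engine()
    store = Store(engine)
    module = Module.from_file(engine, "plugin.wasm")
    instance = Instance(store, module, [])
    exports = instance.exports(store)

    memory = exports["memory"]            # the guest's linear memory
    alloc = exports["alloc"]              # guest-provided allocator: (size) -> ptr
    count_words = exports["count_words"]  # guest function: (ptr, len) -> i32

    text = "hello from the host".encode("utf-8")

    # The boundary only understands numbers, so: ask the guest for a buffer,
    # copy the raw bytes into its linear memory, then pass (pointer, length)
    # across as two plain ints.
    ptr = alloc(store, len(text))
    memory.write(store, text, ptr)
    print(count_words(store, ptr, len(text)))

And that's the easy direction. Getting a string (or anything structured) back out means reading a (pointer, length) pair and copying bytes out of linear memory again, plus agreeing with the guest about who frees what.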
That's good, and I expect it could be shaved down even more. I was spending most of my time just waiting for it to do the work.
But I don't know if that fundamentally changes the situation or not. We've had steady improvements in developer technology for decades. Even pre-LLM, I was building significantly more complicated applications in less time than ever before. But as quickly as our developer technology has improved, the demands on the applications we build have gone up. I'm not sure even LLMs can outpace the demand for software.
Literally everything sucks right now because all industries are running a massive software deficit. It's just not possible (and maybe not economically viable) to build enough software to make everything not suck. We are making do with the scraps we have.
This gives me a bit of a knee-jerk reaction.
When I started programming professionally in the 90s, the internet came of age, and I remember being told "in my day, we had books and we remembered things". That is hilarious, of course, because today you can't possibly retain ALL the knowledge needed to be a software engineer; the body of knowledge required to produce a meaningful product is too big and moves too fast.
There was this long-running argument that you should know things and not have to look them up all the time. AltaVista was a joke, and Google was cheating.
Then syntax highlighting came around, and there'd always be a guy going "yeah nah, you shouldn't need syntax highlighting to program, your screen looks like a Christmas tree".
Then we got stuff like auto-complete, and it was amazing, the number of keystrokes we saved. That, too, was seen as heresy by the purists (followed later by LSP, which many still call heresy today).
That reminds me: back in the day, people would own entire encyclopaedia collections on DVD. Did they use them? No. But they criticised Wikipedia for being inferior. Look at today, though.
Same thing with LLMs. Whether you use them as a powerful context-based auto-complete, as a research tool faster than Wikipedia and Google, as a rubber-duck debugger, or as a text generator -- who cares: this is today, stop talking like a fossil.
It's 2025 and junior developers can't work without LSP and LLMs? It's fine. They're not sitting in front of a 386 DX33 with one copy of K&R C and a blue EDIT screen. They have massive challenges ahead of them: the IT world is in complete shambles, and it's impossible to decipher how anything is made, even open source.
Today is today. Use all the tools at hand. Don't shame kids for using the best tools.
We should be talking about the sustainability of such tools rather than what it means to use them (cf. enshittification, open-source models, etc.).
Reading books was never about knowledge. It was about know-how. You didn't need to read all the books, just some. I can't count how many developers I've met who would keep asking questions that would be obvious to anyone who had read the book. They never got the big picture and just wasted everyone's time, including their own.
"To know everything, you must first know one thing."
In a sense, it is a powerful kind of freedom to choose a language that protects us from the statistically likely blunders. I prefer a higher-level kind of freedom -- one that provides the peace of mind that comes from various safety properties.
This comment is philosophical -- interpret and apply it as you see fit -- it is not meant to imply that my personal failure modes are the same as yours. (E.g., maybe you don't mind null pointer exceptions in the grand scheme of things.)
Random anecdote: I still have a fond memory of a glorious realization in Haskell after a colleague told me "if you design your data types right, the program just falls into place".
A programming language forces a culture on everybody in the project - it's not just a personal decision like your example.
- Gilad Bracha
https://gbracha.blogspot.com/2022/06/the-prospect-of-executi...
So, true creativity, basically? lol
I mean, the reason programming is called a “craft” is that it is most definitely NOT a purely mechanistic mental process.
But perhaps you still harbor that notion.
Ah, I suddenly realized why half of all developers hate AI-assisted coding (I am in the other half). I was a Psych major, so code was always more “writing” than “gears” to me… It was ALWAYS “magic.” The only job where literally writing down words in a certain way produces machines that eliminate human labor. What better definition of magic is there, actually?
I’ll never forget the programmer _why. That guy’s Ruby code was 100% art and “vibes.” And yet it worked… Brilliantly.
Does relying on “vibes” too heavily produce poor engineering? Absolutely. But one can be poetic while staying cognizant of the haiku restrictions… O-notation, untested code, unvalidated tests, type conflicts, runtime errors, fallthrough logic, bandwidth/memory/IO costs.
Determinism. That’s what you’re mad about, I’m thinking. And I completely get you there: how can I treat a “flaky test” as an all-hands-on-deck affair while praising code output from a nondeterministic machine running off arbitrary prompt words that we don’t, and can’t, know to be optimal?
Perhaps because humans are also nondeterministic, and yet we somehow manage to still produce working code… Mostly. ;)
> I suddenly realized why half of all developers hate AI-assisted coding (I am in the other half).
Thanks for this. It helps me a lot to understand your half. I like my literature and music as much as the next person, but when it comes to programming it's all about the mechanics for me. I wonder if this really does explain the split there seems to be in every thread about programming and LLMs.