AI is good at producing code in scenarios where the stakes are low, there's no expectation about future requirements, or the task is so well defined that there is a clear best path of implementation.
I think we should move past this quickly. Coding itself is fun, but it is also labour; building something is what is rewarding.
It's not even always a more efficient form of labour. I've experienced many scenarios with AI where prompting it to do the right thing takes longer and requires writing/reading more text than writing the code myself would.
If you see your job as a "thinking about what code to write (or not)" monkey, then you're safe. I expect most seniors and above to be in this position, and LLMs are absolutely not replacing you here - they can augment you in certain situations.
One of the perks of being a senior is also knowing when not to use an LLM and how they can fail; at this point I feel like I have a pretty good idea of what is safe to outsource to an LLM and what to keep for a human. Offloading the LLM-safe stuff frees up your time to focus on the LLM-unsafe stuff (or just chill and enjoy the free time).
AI is getting better at picking up some important context from other code or documentation in a project, but it's still miles away from what it needs to be, and the needed context isn't always present.
I do find that the developers who focused on "build the right things" mourn less than those who focused on "build things right".
But I do worry. The main question is this: will there be a day when AI knows what "the right things to build" are and has the "agency" (or the illusion of it) to do it better than an AI+human pair (assuming AI gets to the "build things right" phase sooner, which it hasn't yet)?
My main hope is this: AI has been able to beat a human at chess for a while now, yet we still play chess, people earn money from playing chess and teaching chess, chess players are still celebrated, and YouTube influencers still get monetized for analyzing games of celebrity chess players, even though the top human chess player would likely lose to a Stockfish engine running on my iPhone. So maybe there is hope.
What makes you think AI isn't already at the same level of quality, or higher, for "build the right things" as it is for "build things right"?
You can use AI to write all your code, but if you want to be a programmer and can't see that the code is pretty mid, then you should work on improving your own programming skills.
People have been saying the 6 month thing for years now, and while I do see it improving in breadth, quality/depth still appears to be plateauing.
It's okay if you don't want to be a programmer though; you can be a manager and let AI do an okay job at being your programmer. You'd better be driven to be a good manager though. If you're not... then AI can do an okay job of replacing you there too.
Train Claude without the programming dataset and give it a dozen of the best programming books; it'll have no chance of writing a compiler. Do the same for a human with an interest in learning to program and there's a good chance.
"This AI can do 99.99%* of all human endeavours, but without that last 0.01% we'd still be in the trees", doesn't stop that 99.99% getting made redundant by the AI.
* Vary as desired for your preferred argument about how competent the AI actually is vs. how few people really show "true intelligence". Personally I think there's a big gap between the two: paradigm-shifting inventiveness is necessarily rare, and AI can't yet fill in all the gaps below it. But I am very uncomfortable with how much AI can already fill in for.
The part I find concerning is that I wouldn't be in the place I am today without spending a fair amount of time in that monotony, really delving in to understand it and slowly pushing outside its boundary. If I were starting programming today, I can confidently say I would've given up.
…or be grateful you can just use an existing HTML5 parser that hides all this stuff from your innocent eyes :-)
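For instance, here's a minimal sketch using only Python's stdlib html.parser (a fully HTML5-spec-compliant parser would be something like html5lib, which also applies the spec's error-recovery and tree-construction rules; this toy just shows the library absorbing tag soup so you never have to read that part of the spec):

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects start tags; the parser deals with the malformed input."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

# The <li> elements are never explicitly closed, and we never have to care.
collector = TagCollector()
collector.feed("<ul><li>one<li>two</ul>")
print(collector.tags)  # ['ul', 'li', 'li']
```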
Using existing parsers only hides the poor design up to a point.
I address that in part right there. Programming has parts that are like chess (i.e. bounded), which is what people assume the actual work to be. Understanding future requirements and stakeholder incentives is part of the work that LLMs don't do well.
> many domains are chess-like in their technical core but become poker-like in their operational context.
This applies to programming too.