I guess we'd probably agree that the claim "writing code is an irrelevant skill" stands or falls on whether LLMs will improve enough to match humans at programming, and thus comprehensively remove the need to fix their work.
They currently don't, so at the time you claimed this it was incorrect. Maybe they will in the future, at which point it would be correct.
So, would it be responsible for me to bet my career on your advice today? Obviously not, which is why most people here disagree with your article.
You were prepared in advance to explain that criticism as people having a strong negative emotional reaction, so I'm not sure why you posted it here in the first place instead of LinkedIn where it might reach a more supportive audience.
What I pointed out in my post is a trend I've noticed: an LLM can do more and more of a developer's work. Nowhere did I claim LLMs can replace human developers today, but when a technology consistently reduces the need for manual programming while its capabilities keep improving, the trajectory is clear. You can disagree with the timeline, but the transformation is already underway.
I posted on HN precisely because I wanted rigorous technical discussion, not validation.
But the actual relevant prediction here (the one you're confident enough about to give skills development advice on) is whether they'll improve sufficiently that programming is no longer a relevant skill.
I think that's possible, but I'm not nearly so confident that I'd write your article: LLMs went mainstream ~2 years ago, and they still have some pretty basic limitations when it comes to computational/mathematical reasoning, which they'll need in order to solve novel software engineering tasks. (Articles about these limitations get posted here pretty frequently.)
To your second point, I'm still not sure how you'll debug someone else's code without learning to write code yourself, because you need to be able to read code and understand it well enough to execute it in your head. I'm not totally convinced you appreciate the difference between "understanding programming concepts" and "being able to tell whether this code works".
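To make that concrete, here's a toy sketch (hypothetical, mine, not from your post) of the kind of code an LLM might hand back: it reads plausibly, and only mentally stepping through it reveals the edge case it gets wrong.

    # Hypothetical LLM-style suggestion for merging [start, end] intervals.
    # It reads plausibly; spotting the flaw requires stepping through it.
    def merge_overlapping(intervals):
        merged = []
        for start, end in sorted(intervals):
            if merged and start <= merged[-1][1]:
                merged[-1][1] = end  # flaw: should be max(merged[-1][1], end)
            else:
                merged.append([start, end])
        return merged

    print(merge_overlapping([[1, 10], [2, 3]]))  # prints [[1, 3]], losing the [1, 10] span

Catching that is not "understanding programming concepts" in the abstract; it's the same mental execution you build by writing code yourself.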
Sorry if this comes across as rude, but I think the reason the feedback on your post is overall quite negative is that you're excited about AI making this job much easier, and your advice about which skills are worth learning is too confident. Ironically, I think an LLM would give a more balanced view than you have.
As for the reception, I did not expect it to be positive. People usually have a strong negative emotional reaction when you suggest their skills are, or are going to become, less relevant.
Obviously you'll have to debug the code yourself, for which you'll need those programming skills that you claimed weren't relevant any more.
Eventually you'll ask a software engineer, who will probably be paid more than you because "knowing what to build" and "evaluating the end result" are skills more closely related to product management - a difficult and valuable job that just doesn't require the same level of specialisation.
Lots of us have been the engineer here: confused, asking why you took approach X to solve this problem, and being sheepishly told "Oh, I didn't actually write this code, I don't know how it works".
You are confidently asserting that people can safely skip learning a whole way of thinking, not just some syntax and API specs. Some programmers can be replaced by an LLM, but not most of them.
You are also making the assumption that LLMs won't improve, which I think is shortsighted.
I fully agree with the part about the job becoming more like product management. I would like to cite an excerpt from a post [2] by Andrew Ng, which I found valuable:
> Writing software, especially prototypes, is becoming cheaper. This will lead to increased demand for people who can decide what to build. AI Product Management has a bright future!

> Software is often written by teams that comprise Product Managers (PMs), who decide what to build (such as what features to implement for what users) and Software Developers, who write the code to build the product. Economics shows that when two goods are complements — such as cars (with internal-combustion engines) and gasoline — falling prices in one leads to higher demand for the other. For example, as cars became cheaper, more people bought them, which led to increased demand for gas.

> Something similar will happen in software. Given a clear specification for what to build, AI is making the building itself much faster and cheaper. This will significantly increase demand for people who can come up with clear specs for valuable things to build. (...)

> Many companies have an Engineer:PM ratio of, say, 6:1. (The ratio varies widely by company and industry, and anywhere from 4:1 to 10:1 is typical.) As coding becomes more efficient, teams will need more product management work (as well as design work) as a fraction of the total workforce.
To address your last point - no, I am not saying people should skip learning a whole way of thinking. In fact, the skills I outline for the future (supervising AI, evaluating results) all require understanding programming concepts and system thinking. They do not, however, require manual debugging, writing lines of code by hand, a deep understanding of syntax, reading stack traces and googling for answers.
> Even he, subconsciously, knows it doesn't pay off to waste cognitive energy on what a machine could do instead.
> It's not laziness, but efficiency.
It's only efficient in the short term, not the long term. Now the programmer never understood the problem, and that is a problem in the long run. As any experienced engineer knows, understanding problems is how anyone gets better in the engineering field.
Using LLMs can even help you understand the problem better, and they can get you to the solution faster. Using an LLM to solve a problem does not prevent you from understanding it. Does using a calculator prevent us from understanding mathematical concepts?
Technical understanding will still be valuable. Typing out code by hand will not.
One could argue that my position does not apply to applications where correctness is not critical; however, that is not the analogy the article is making.
I don't know whether you've used some of the more recent models like Claude 3.5 Sonnet and o1, but to me it is very clear where the trajectory is headed. o3 is just around the corner, and o4 is currently in training.
People found value even in a model like GPT 3.5 Turbo, and that thing was really bad. But hey, at least it could write some short scripts and boilerplate code.
You are also comparing mathematical computation - which has only one correct answer - with programming, where the solution space is much broader. There are multiple valid solutions, and some are more optimal than others. It is up to the human to evaluate the solution, as I said in the post. Today, you may even need to fix the LLM's output, but in my experience I'm finding I need to do that far less often than before.
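As a toy illustration (my own, not from the post): both functions below are correct answers to the same problem, yet they differ a lot in cost, and judging which one is acceptable for a given use case is exactly the evaluation work that stays with the human.

    # Two valid ways to answer "does this list contain duplicates?"
    def has_duplicates_quadratic(items):
        # Compares every pair: simple, but O(n^2) time.
        return any(a == b for i, a in enumerate(items) for b in items[i + 1:])

    def has_duplicates_linear(items):
        # Uses a set: O(n) time, O(n) extra memory, needs hashable items.
        return len(set(items)) != len(items)

    assert has_duplicates_quadratic([1, 2, 3, 2]) is True
    assert has_duplicates_linear([1, 2, 3]) is False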
> [...]
> 3. evaluating whether the end result works as intended.
I have some news for the author about what 80% of a programmer's job consists of already today.
There is also issue #4, which "idea guy" types frequently gloss over: if things do not work as intended, find out why and work out a way to fix the root cause. It's not impossible that an AI can get good at that, but I haven't seen it so far, and it definitely doesn't fit into the standard "write down what I want to have, press button, get results" AI workflow.
Generally, I feel this ignores the inherent value that there is in having an actual understanding of a system.
What I am trying to say is that people who see the output of their work as "code" will be replaced, just as human computers were. I believe even debugging will be increasingly aided by AI. I do not believe that AI will eliminate the need for system understanding, just to be clear.
Then again, you might argue that writing lines of code and manually debugging issues is exactly what builds your understanding of the system. I agree with that too; I suppose the challenge will be maintaining deep system knowledge as more of those tasks become automated.