Readit News
ben30 · 3 months ago
This echoes my experience with Claude Code. The bottleneck isn't the code generation itself—it's two critical judgment tasks:

1. Problem decomposition: Taking a vague idea and breaking it down into well-defined, context-bounded issues that I can effectively communicate to the AI

2. Code review: Carefully evaluating the generated code to ensure it meets quality standards and integrates properly

Both of these require deep understanding of the domain, the codebase, and good software engineering principles. Ironically, while I can use AI to help with these tasks too, they remain fundamentally human judgment problems that sit squarely on the critical path to quality software.

The technical skill of writing code has been largely commoditized, but the judgment to know what to build and how to validate it remains as important as ever.

bcrosby95 · 3 months ago
This would be at least the third time in history we've tried to shunt writing code off to low-paid labor. We'll see if it's successful this time.

The problem tends to be that small details affect large details which affect small details. If you aren't good at both you're usually shit at both.

mlinhares · 3 months ago
The problem wasn't low-paid labor, it was incompetent labor. You can find competent developers in all the countries offering lower pay (India, Brazil, Romania, Poland, China, Pakistan); it's just that they're already hired by higher-paying companies, and what's left for employers looking for the lowest-paid workers possible are the incompetent ones.
wvoch235 · 3 months ago
IMO attempts to make it low-paid work will fail, just like with almost every STEM profession. But... the number of engineers we need who operate as "power multipliers" on a team will continue to decrease. Many startup and corporate teams already no longer need junior/mid-level engineers.

They just need "drivers", senior/lead/staff engineers that can run independent tracks. AI becomes the "power multiplier" in the teams who amplify the effects of the "driver".

Many people pretend that 10x engineers don't exist. But anyone who has worked on an adequately high performing team at a large (or small) company knows that skill, and quite frankly intelligence, operate on power laws.

The bottom 3 quartiles will be virtually unemployable. Talent in the top quartile will be impossible to find because they're all employed. Not all that unlike today, though which quartile you fall into is largely going to depend on how "great" of an engineer you are AND how effectively you use AI.

As this happens, the tap of new engineers learning how to make it into the top quartile will be cut off for everyone except those passionate (or masochistic) enough to learn to program without AI first, then learn to program WITH it.

Meanwhile, the number of startups disrupting corporate monopolies will increase as the cost of labor goes down due to lower headcount requirements. Lower headcounts will lead to better team communication and general business efficiency.

At some point the upper quartile will get automated too. And with that, corporate moats evaporate to solo-entrepreneurs and startups. The ship is sinking, but the ocean is about to boil too. When economic formulas start dividing by zero, we can be pretty sure that we can't predict the impact.

tom_m · 3 months ago
Someone told me AI was like having a bunch of junior coders. You have to be very explicit in telling it what to do and have to go through several iterations to get it right. Though it was cheaper.
gherkinnn · 3 months ago
That matches my experience.

Decomposing a problem so that it is solvable with ease is what I enjoy most about programming and I am fine with no longer having to write as much code myself, but resent having to review so much more.

Now, how do we solve the problem of people blindly accepting whatever an LLM spat out based on a bad prompt? This applies universally [0] and is not a technological problem.

0 - https://www.theverge.com/policy/677373/lawyers-chatgpt-hallu...

ben30 · 3 months ago
Agreed on the review burden being frustrating. Two strategies I've found helpful for managing the cognitive load:

1. Tight issue scoping: Making sure each issue is narrowly defined so the resulting PRs are small and focused. Easier to reason about a 50-line change than a 500-line one.

2. Parallel PR workflow: Using git worktrees to have multiple small PRs open simultaneously against the same repo. This lets me break work into digestible chunks while maintaining momentum across different features.
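For anyone who hasn't used worktrees, the setup is roughly this (the repo and branch names here are made up for illustration):

```shell
# Hypothetical sketch of the parallel-PR setup described above;
# "myrepo", "feature-auth", and "feature-logging" are made-up names.
cd myrepo

# One sibling directory per in-flight PR; each is a full checkout
# on its own branch, but all share the same .git object store.
git worktree add ../myrepo-auth    -b feature-auth
git worktree add ../myrepo-logging -b feature-logging

# Edit, commit, and push in each directory independently, e.g.:
#   (cd ../myrepo-auth && $EDITOR ... && git commit -am "..." && git push)

git worktree list                   # show every active worktree
git worktree remove ../myrepo-auth  # clean up once the PR merges
```

Because each directory is an independent checkout, there's no branch-switching churn or stashing between the small PRs.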

The key insight is that smaller, well-bounded changes are dramatically easier to review thoroughly. When each PR has a single, clear purpose, it's much easier to catch issues and verify correctness.

I'm finding these workflow practices help because they force me to engage meaningfully with each small piece rather than rubber-stamping large, complex changes.

steveBK123 · 3 months ago
So really the same two skills that a senior engineer needs to delegate tasks to juniors & review the results..
skydhash · 3 months ago
Nope, dealing with juniors is way less frustrating because they learn. So over time, you can increase the complexity of their tasks until they're no longer juniors.
AndrewKemendo · 3 months ago
This is exactly how to use it and exactly why it’s a huge deal

In my experience so far, the people who aren't getting value out of LLM code assistants are the ones who fundamentally like the process of writing code and using the tooling

All of my senior, staff, principals love it because we can make something faster than having to deal with a junior because it’s trivial to write the spec/requirement for Claude etc…

prmph · 3 months ago
What the heck, the code generation _is_ absolutely still a bottleneck.

I dare anyone making the argument that LLMs have removed the need for actual programming skill, for example, to share a virtual pair programming session with me, and I will demonstrate their basic inability to do _any_ moderately complex coding in short order. Yes, I think that's the only way to resolve this controversy. If they have some magic sauce for prompting, they should post a session or chat that can be verified by others (even if not exactly repeatable).

Yesterday almost my whole day was wasted because I chose to attack a problem primarily by using Claude 4 Sonnet. I had to hand-hold it every step of the way, continually correcting basic type and logic errors (even ones I had corrected previously in the same session), and in the end it just could not solve the challenge I gave it.

I have to be cynical and believe those shouting about LLMs taking over technical skill must have lots of stock in the AI companies.

coffeefirst · 3 months ago
Indeed.

All this “productivity” has not resulted in one meaningful open source PR or one interesting indie app launch, and I can’t square my own experience with the hype machine.

If it’s not all hat and no cattle, someone should be able to show me some cows.

sgarland · 3 months ago
> Yesterday almost my whole day was wasted because I chose to attack a problem primarily by using Claude 4 Sonnet

I have been extremely cynical about LLMs up until Claude 4. For the specific project I've been using it on, it's done spectacularly well at specific asks - namely, performance and memory optimization in C code used as a Python library.

whatarethembits · 3 months ago
Honestly, it's mind-boggling. Am I the worst prompter ever?

I have three Python files (~4k LOC total) that I wanted to refactor with help from Claude 4 (Opus and Sonnet), and I followed Harper Reed's LLM workflow...the results are shockingly bad. It produces an okay plan, albeit full of errors, but usable with heavy editing. In the next step, though, most of the code it produced was pretty much unusable. It would've been far quicker for me to just do it myself. I've been trying to get LLMs to help me be faster on various tasks, but I'm just not seeing it! There is definitely value in it for helping to straighten out ideas in my head and as Stack Overflow on roids, but that's where the utility starts to hit a wall for me.

Who are these people who are "blown away" by the results and declaring an end to programming as we know it? What are they making? Surely there ought to be more detailed demos of a technology that's purported to be this revolutionary!?

I'm going to write a blog post with what I started with, every prompt I wrote to get a task done, and the responses from the LLMs. It's been challenging to find a detailed writeup of implementing a realistic programming project; all I'm finding so far is small one-off scripts (Simon Willison's blog) and CRUD scaffolding.

sokoloff · 3 months ago
I don’t think AI marks the end of software engineers, but it absolutely can grind out, in quarter-minutes, code for well-specified, well-scoped problem statements that would take a human an hour or so.

To me, this makes my exploration workflow vastly different. Instead of stopping at the first thing that isn’t obviously broken, I can now explore nearby “what if it was slightly different in this way?”

I think that gets to a better outcome faster in perhaps 10-25% of software engineering work. That’s huge and today is the least capable these AI assistants will ever be.

Even just the human/social/mind-meld aspects will be meaningful. If it can make a dev team of 7 capable of making the thing that used to take a dev team of 8, that's around 15% less human coordination needed overall to get the product out. (This might even turn out to be half the benefit of productivity enhancing tools.)

nyarlathotep_ · 3 months ago
> I have to be cynical and believe those shouting about LLMs taking over technical skill must have lots of stock in the AI companies.

I'm far from being a "vibe" LLM supporter/advocate (if anything I'm the opposite, despite using Copilot on a regular basis).

But, have you seen this? Seems to be the only example of someone actually putting their "proompts" where their mouth is, in a manner of speaking. https://news.ycombinator.com/item?id=44159166

ofjcihen · 3 months ago
It’s interesting that your point about wasting time makes a second point in your favor as well.

If you don’t have the knowledge that begets the skills to do this work, then you would never have known you were wasting your time, or at least how to stop wasting it.

LLM fanboys don’t want to hear this but you can’t successfully use these tools without also having the skills.

prmph · 3 months ago
Edit for the parent comment:

> in the end it just could NOT solve the challenge I gave it.

numpad0 · 3 months ago
Last week I figured I might as well vibe code with free Gemini and steal its credit rather than research something destined to be as horrible as the Android Camera2 API, and I found that at least for me, this version of Gemini does better if I prompt it in... casual language.

"ok now i want xyz for pqr using stu can you make code that do" rather than "I'm wondering if...", with a lowercase "i" and zero softening language. So as far as my experience goes, tiny details in prompting matter, and those details can be unexpected ones.

I mean, please, someone just downvote me and tell me it's MY skill issue.

Deleted Comment

dgb23 · 3 months ago
I want to add something to this which is rarely discussed.

I personally value focus and flow extremely highly when I'm programming. Code assistance often breaks and prevents that in subtle ways. Which is why I've been turning it off much more frequently.

In an ironic way, using assistance more regularly helped me notice little inefficiencies, distractions, bad habits, and potential improvements in how I program:

I mean that in a very broad sense, including mindset, tooling, taking notes, operationalizing, code navigation, recognizing when to switch from thinking/design to programming/prototyping, code organization... There are many little things that I could improve, practice and streamline.

So I disagree with this statement at a fundamental level:

> The technical skill of writing code has been largely commoditized (...)

In some cases, I find setting myself up to get into a flow or high-focus state and then writing the code myself very effective, because it creates a stronger, more intricate connection between the program and my inner mental model of how it works.

To me there are two important things to learn at the moment: recognizing which type of approach I should be using when, and setting myself up to use each of them more effectively.

thrwthsnw · 3 months ago
Just move up an abstraction level and put that flow into planning the features and decomposing them into well-defined tasks that can be assigned to agents. You could also write really polished example code to communicate the style and architectural patterns, and add full test coverage for it.

I do notice the same lack of flow when using an agent, since you have to wait for it to finish. But as others have suggested, if you set up a few worktrees and have a really good implementation plan, you can use that time to start another agent or review the code of a separate run. That might lend itself to a type of flow where you're keeping the whole design of the project in your head and rapidly iterating on it.

eweise · 3 months ago
"these require deep understanding of the domain, the codebase, and good software engineering principles" Most of this, AI can figure out eventually, except maybe the domain. But essentially, software engineering will look a lot like product management in a few years.
virgilp · 3 months ago
As a (very good I would say) product manager once told me - the product vision and strategy depends very much on the ability to execute. The market doesn't stand still, and what you _can_ do defines very much what you _should_ do.

What I mean to say here is that not even product management reduces to just "understanding the domain", so it kinda feels like your entire prediction leans on overly simplified assumptions.

Deleted Comment

TaupeRanger · 3 months ago
That's a narrow view of the issue described in the blog post. You're coming at this from the perspective of a software engineer, which is understandable given the website we're posting on, but the post is really focusing on something higher level - the ability to decide whether the problems you're decomposing and the code you're reviewing is for something "good" or "worthwhile" in the first place. Claude could "decompose problems" and "review code" 10x better than it currently does, but if the thing it's making is useless, awkward, or otherwise bad (because of prompts given by people without the qualities in the blog post), it won't matter.

Deleted Comment

Deleted Comment

esafak · 3 months ago
You still need to be able to code to recognize when it's done poorly, and to write the technical specification.
tingle · 3 months ago
Chap. CCCLXIV. — On the Judgment of Painters.

When the work is equal to the knowledge and judgment of the painter, it is a bad sign; and when it surpasses the judgment, it is still worse, as is the case with those who wonder at having succeeded so well. But when the judgment surpasses the work, it is a perfectly good sign; and the young painter who possesses that rare disposition, will, no doubt, arrive at great perfection. He will produce few works, but they will be such as to fix the admiration of every beholder.

Leonardo da Vinci, "A Treatise on Painting.", p. 225

https://archive.org/details/davincionpainting00leon/page/224...

gavmor · 3 months ago
> Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know its normal and the most important thing you can do is do a lot of work. Put yourself on a deadline so that every week you will finish one story. It is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions. And I took longer to figure out how to do this than anyone I’ve ever met. It’s gonna take awhile. It’s normal to take awhile. You’ve just gotta fight your way through. (Ira Glass)
Suppafly · 3 months ago
>For the first couple years you make stuff, it’s just not that good.

Sadly, that's why I don't start a lot of things that would interest me. You need to get into things when you're a kid and don't realize how junk your work is, because as an adult you just don't have the time to dedicate to producing a lot of junk to get good at something. There are shortcuts and more directed learning you can do in a lot of areas to reduce some of the undirected learning you did as a child, but it's still time-consuming when time is a rare commodity.

ChrisMarshallNY · 3 months ago
I like to do a good job on small stuff.

It works nicely for me, but doesn't really bring accolades (but a hell of a lot of folks actually rely on stuff I authored; they just don't know it, or care -which is just fine).

analog31 · 3 months ago
>>> But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you.

This is why it's so hard for good classical musicians to learn jazz improvisation, even if they love jazz.

mjklin · 3 months ago
“The reason the gentleman is called worthy is not that he is able to do everything that the most skillful man can do. The reason the gentleman is called wise is not because he knows everything that the wise man knows. When he is called discriminating, this does not mean that he is able to split hairs so exhaustively as the sophists. That he is called an investigator does not mean that he is able to examine exhaustively into everything that an investigator may examine. He has his limit.

In observing high and low lands, in judging whether fields are poor or fertile, and in deciding where the various grains should be planted, the gentleman is not as capable as a farmer. When it is a matter of understanding commodities and determining their quality and value, the gentleman cannot vie with a merchant. As regards skill in the use of the compass, square, plumb line, and other tools, he is less able than an artisan. In disregarding right and wrong, truth and falsehood, but manipulating them so that they seem to change places and shame each other, the gentleman cannot compare with Hui Shih and Teng Hsi.

However, if it is a question of ranking men according to their virtue; if offices are to be bestowed according to ability; if both the worthy and the unworthy are to be put in their proper places… if all things and events are to be dealt with properly; if the charter of Shen Tzu and Mo Tzu are to be suppressed; if Hui Shih and Teng Hsi are not to dare to put forth their arguments; if speech is always to accord with the truth and affairs are always to be properly managed — it is in these matters that the gentleman excels.”

— Hsun-tzu, Chinese (300–235 B.C.)

nomel · 3 months ago
To put it simply, "it's easy to do well with tasks that are easy for you."

This is how you make sure to produce good work while simultaneously halting the development of your skills.

physicsguy · 3 months ago
A similar debate has happened in education, where people seem to think that the ability to critically analyse texts is more important than knowledge. And to some degree that's true, but personally I think that without building on some decent foundational level of knowledge and having a mental model of a subject, you can't tackle thorny questions, because you don't have enough to draw upon as examples and counterpoints about how to proceed.

My current employer is going on a top-down "one tech" mission, trying to rationalise the technology stacks across diverse product lines. Which is all fine, but the judgment is a poor one: the biggest developer bottleneck that comes up in internal developer surveys is the corporate-mandated IT setup, relatively hostile and without even local admin rights, which makes sense for general office workers and doesn't make sense at all for software developers.

dsjoerg · 3 months ago
> a relatively hostile setup without even local admin rights

Taking a diversion into this -- how about local admin rights on a sandboxed VM? I imagine that would allow developers to be productive, while protecting everything that IT wants to protect.

Once you do that, I imagine everyone will discover the issue isn't actually _local_ admin rights, but admin rights on a machine that's on the internal network and can access internal company resources. Which might mean that IT's strategy is that once you're inside the local network, you have access to lots of valuable goodies. Which is a scary strategy.

tekno45 · 3 months ago
sounds like a containerized workspace. You can do this without a VM.
mihaaly · 3 months ago
Education itself is supposed to teach us how to learn, not mere facts and methods, not just hard knowledge. Hard knowledge comes along anyway as a sort of side product: you can't learn on nothing, and whatever you learn on eventually forms the hard knowledge, typically a broad but shallow set.

Ironically, this is what I feel chipping away in modern collaborative development: the appreciation of the capability to learn. In the organization's self-interest (its short-term self-interest; the long term is too unpredictable, so in practice it doesn't exist), individuals practiced in specific technical knowledge are sought out for the sake of easy replacement, so the organization isn't dependent on personnel and can treat them like plug-and-play components. The ability to learn has no value while you're inside the organization: you should have practiced for years beforehand and apply the knowledge intensely after joining. For the sake of claiming an "evolving organization", teaching may be outsourced, for a very limited time, to some enterprise that makes money disseminating hard knowledge through made-up examples of generic applicability, instead of teaching in the actual context of the organization, as part of it, daily. Applying the new hard knowledge in the organization's specific context is left to the casual random enthusiast, if they can break through company policy and the established ways of management. And the policies and practices must be rigid too, mustn't they, so that the management personnel are as easily replaceable as the foot soldiers of code. For the sake of the organization. Call this approach Organization Oriented Development.

physicsguy · 3 months ago
> You cant learn on nothing, something will be used for it, that something forms the hard knowledge eventually. Typically broad set but shallow hard knowledge.

As a counterpoint though, the way things have gone in the U.K. is to go deep on niche topics without building up an appreciation of the broad strokes. To give an example, there's a GCSE History course for 14-16 year olds where the syllabus is effectively “medicine through time” and “the American West”, without ever going near the British Empire, colonialism, the Tudor or Elizabethan periods, the Reformation, the Industrial Revolution, Irish home rule and independence, etc., any one of which gives much more insight into the formation of the state and cultural affairs as they stand today.

To my mind it’s too narrow a focus at too young an age when teaching a subject that a lot of children take. It also means there are constantly “we don’t even teach that at school” debates.

Wololooo · 3 months ago
Reminds me of a concept I saw pop up in HEP in recent years: the distinction between "users" and "experts".

That distinction is so dumb I cannot wrap my head around it: you first encounter the code and are unfamiliar with it, but very quickly you become an expert in order to solve your problem and move things forward.

It does not matter which codebase you start on; what matters is that you understand what the actual stack does and what is involved, because people are supposed to deeply understand what they are doing.

But this comes from the "corporatisation" of every single entity, where random metrics are used to assess performance instead of asking the simple questions of "does it work", "does it need fixing", or "will this thing break".

There is a clear disconnect between the manager types who are removed from the work and the managers still doing things practically, who understand what the stressors are and where some deep understanding and extra contextualisation of the systems is required in order not to mess the whole thing up.

This being said, this is coming from a very peculiar perspective and with a very specific tech stack which is and is not industry standard at many levels...

kragen · 3 months ago
High-energy physics?
mehulashah · 3 months ago
I would argue that this is already true in roles where one person supervises the work of another skilled person. Great leaders, for example, were once practitioners. Over time their skills may fade, but their judgment makes them effective and able to scale their impact.
drewcoo · 3 months ago
In software, we promote good engineers to management, effectively accelerating the Peter Principle.

It doesn't have to be that way. Management skills are not an outgrowth of the skills of the managed, but orthogonal to them. This is similar to the lesson many PhD candidates I've known learn: expertise in their field is not pedagogical expertise. Companies who promoted from within used to provide training for new managers.

apwell23 · 3 months ago
> In software, we promote good engineers to management

I've not seen this. In fact, it's the opposite.

TrackerFF · 3 months ago
AI works great for providing you a starting point, and giving a big picture view of how certain things work, and how you should structure them.

Sometimes, even if you're a really seasoned software engineer, you'll encounter something you haven't seen before, maybe to the point that you don't really even know what to search for to get started. So instead of spending half a day scrounging various forums, e-books, etc., you can ask the model, in somewhat vague terms, for what you're looking for; some of the LLMs are quite good at just that.

Now, the implementation of such things is not quite there yet. My experience has been that the more obscure the problems you deal with, the more obsolete the code the model will spit out, with dead and unsupported libraries etc.

nluken · 3 months ago
A side note to the general point of the article, but I hate how the tech industry uses the word "democratization" to mean "lowering the barrier to entry". These concepts differ from each other, but many use the former term because it frames their actions as driven by some sort of moral imperative, when in reality the development of LLMs is morally neutral: not inherently bad by any stretch, but as much a wealth and power play as any other technology of the last 25 years.
greg7gkb · 3 months ago
I agree with you however I've also not found a better substitute. Any candidates?
neochief · 3 months ago
Commoditization.
lordnacho · 3 months ago
Judgement and technical skill go hand in hand. Technology merely moves the boundary of what is considered judgement, and what is considered technical skill.

I know someone who wrote programs in the punch-card era. Back then, technical skill meant being diligent and thoughtful enough that you avoided most bugs when writing the program. If you screwed this up, you had to wait for another time slot. What does this mean for the complexity of programs you could write? Well, it means you are quite limited. You can't build judgement about things above what is now considered a very basic program.

I learned to program before the AI era that seems to be nascent. Technical skill means things like being able to write programs in python and c++, getting many computers to work together, being able to find hints when something goes wrong, and so on. Judgement now covers things like how a large swarm of programs interact, which was not really in scope for punch-card guy.

Now AI arrives, and it appears that we are free from technical-skill problems. Indeed, it does fix a lot of my little syntax issues, but actually it just moves the goalposts. There's soon going to be no excuse for spending time working out the syntax for a lambda function; you'll be expected to generate a much more complicated product, for which you will need an even higher-level overview to say you are providing judgment.

red_admiral · 3 months ago
And what is that judgement based on? Jobs that an AI can't do yet, like designing a system architecture and drawing boundaries (which features go in the same service), need someone with experience.

We can apply this to all points in the Future of Work section. Even the conclusion "What should you do, and why?" is basically a disguised "What domain-specific knowledge do you have to make an informed opinion on the 'why' anyway?"