Readit News
reb · 5 months ago
I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

The plan-build-test-reflect loop is equally important when using an LLM to generate code, as anyone who's seriously used the tech knows: if you yolo your way through a build without thought, it will collapse in on itself quickly. But if you DO apply that loop, you get to spend much more time on the part I personally enjoy, architecting the build and testing the resultant experience.

> While the LLMs get to blast through all the fun, easy work at lightning speed, we are then left with all the thankless tasks

This is, to me, the root of one disagreement I see playing out in every industry where AI has achieved any level of mastery. There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work. If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting. But if you like the doing, the typing, fiddling with knobs and configs, etc etc, all AI does is take the good part away.

PessimalDecimal · 5 months ago
> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

A software engineer's primary job isn't producing code, but producing a functional software system. Most important to that is the extremely hard to convey "mental model" of how the code works and an expertise in the domain it works in. Code is a derived asset of this mental model. And you will never know code as well as a reader as you would have as the author, for anything larger than a very small project.

There are other consequences of not building this mental model of a piece of software. Reasoning at the level of syntax is proving to have limits that LLM-based coding agents are having trouble scaling beyond.

danpat · 5 months ago
> And you will never know code as well as a reader as you would have as the author, for anything larger than a very small project.

This feels very true - but also consider how much code exists for which many of the current maintainers were not involved in the original writing.

There are many anecdotal rules out there about how much time is spent reading code vs writing. If you consider the industry as a whole, it seems to me that the introduction of generative code-writing tools is actually not moving the needle as far as people are claiming.

We _already_ live in a world where most of us spend much of our time reading and trying to comprehend code written by others from the past.

What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?

mattlutze · 5 months ago
> The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

In any of my teams with moderate to significant code bases, we've always had to lean very hard into code comments and documentation, because a developer will forget in a few months the fine details of what they've previously built. And further, any org with turnover needs to have someone new come in and be able to understand what's there.

I don't think I've met a developer that keeps all of the architecture and design deeply in their mind at all times. We all often enough need to go walk back through and rediscover what we have.

Which is to say... if the LLM generator were instead a colleague or neighboring team, you'd still need to keep up with them. If you can adapt those habits to the generative code then it doesn't seem to be a big leap.

jstummbillig · 5 months ago
> The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

Why? Code has always been the artifact. Thinking about and understanding the domain clearly and solving problems is where the intrinsic value is at (but I'd suspect that in the future this, too, will go away).

noosphr · 5 months ago
>The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

You can describe what the code should do with natural language.

I've found that literate programming with agent calls (write the tests first, then the code, then have the human refine the description of the code, and go back to step 1) is surprisingly good at this. One of these days I'll get around to writing an emacs mode to automate it, because right now it's yanking and killing between nearly a dozen windows.

Of course this is much slower than regular development but you end up with world class documentation and understanding of the code base.
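
For concreteness, the loop I have in mind looks roughly like this (just a sketch; `ask_agent` is a stand-in for whatever agent call or CLI you wire up, not a real API):

  # Sketch of the literate, test-first loop described above.
  def ask_agent(prompt: str) -> str:
      # Placeholder: plug in your LLM/agent call of choice here.
      raise NotImplementedError

  def literate_loop(description: str, max_rounds: int = 3) -> tuple[str, str]:
      # 1. Tests first, generated from the prose description.
      tests = ask_agent("Write only the tests for this spec:\n" + description)
      # 2. Then code that satisfies those tests.
      code = ask_agent("Write code that passes these tests:\n" + tests)
      for _ in range(max_rounds):
          # 3. The human reads both and refines the prose description...
          refined = input("Refined description (blank to accept): ").strip()
          if not refined:
              break
          description = refined
          # ...then back to step 1 with the sharper spec.
          tests = ask_agent("Rewrite the tests for this updated spec:\n" + description)
          code = ask_agent("Rewrite the code to pass these tests:\n" + tests)
      return tests, code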

jay_kyburz · 5 months ago
I can imagine an industry where we describe business rules to apply to data in natural language, and the AI simply provides an executable without source at all.

The role of the programmer would then be to test whether the rules are being applied correctly. If not, there are no bugs to fix; you simply clarify the business rules and ask for a new program.

I like to imagine what it must be like for a non-technical business owner who employs programmers today. There is a meeting where a process or outcome is described, and a few weeks / months / years later a program is delivered. The only way to know if it does what was requested is to poke it a bit and see if it works. The business owner has no mental model of the code and can't go in and fix bugs.

update: I'm not suggesting I believe AI is anywhere near being this capable.

KoolKat23 · 5 months ago
Not really, it's more a case of "potentially can" rather than "will". This dynamic has always been there with the whole junior/senior dev split; it's not a new problem. You 100% can use it without losing this, and in an ideal world you can even go so far as to not worry about the understanding for parts that are inconsequential.
enraged_camel · 5 months ago
>> The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

All code is temporary and should be treated as ephemeral. Even if it lives for a long time, at the end of the day what really matters is data. Data is what helps you develop the type of deep understanding and expertise of the domain that is needed to produce high quality software.

In most problem domains, if you understand the data and how it is modeled, the need to be on top of how every single line of code works and the nitty-gritty of how things are wired together largely disappears. This is the thought behind the idiom “Don’t tell me what the code says—show me the data, and I’ll tell you what the code does.”

It is therefore crucial to start every AI-driven development effort with data modeling, and have lots of long conversations with AI to make sure you learn the domain well and have all your questions answered. In most cases, the rest is mostly just busywork, and handing it off to AI is how people achieve the type of productivity gains you read about.
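
As a toy illustration (hypothetical domain and names, not from any real project), "understanding the data" mostly means pinning down something like this up front before handing the surrounding plumbing to the AI:

  from dataclasses import dataclass
  from datetime import datetime

  # Sketch of a domain model agreed on first; the CRUD and wiring around it
  # is the busywork that can be delegated.
  @dataclass
  class LineItem:
      sku: str
      quantity: int
      unit_price_cents: int  # money as integer cents to avoid float rounding

  @dataclass
  class Invoice:
      id: str
      customer_id: str
      issued_at: datetime
      line_items: list[LineItem]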

Of course, that's not to say you should blindly accept everything the AI generates. Reading the code and asking the AI questions is still important. But the idea that the only way to develop an understanding of the problem is to write the code yourself is no longer true. In fact, it was never true to begin with.

posix86 · 5 months ago
What is "understanding code", mental model of the problem? These are terms for which we all have developed a strong & clear picture of what they mean. But may I remind us all that used to not be the case before we entered this industry - we developed it over time. And we developed it based on a variety of highly interconnected factors, some of which are e.g.: what is a program, what is a programming language, what languages are there, what is a computer, what software is there, what editors are there, what problems are there.

And as we mapped out this landscape, haven't there been countless situations where things felt dumb and annoying, and then situations in which they sometimes became useful, and sometimes remained dumb? Things you thought were actively making you lose brain cells as you did them, because you were doing them wrong?

Or are you to claim that every hurdle you cross, every roadblock you encounter, every annoyance you overcome has pedagogical value to your career? There are so many dumb things out there. And what's more, there's so many things that appear dumb at first and then, when used right, become very powerful. AI is that: Something that you can use to shoot yourself in the foot, if used wrong, but if used right, it can be incredibly powerful. Just like C++, Linux, CORS, npm, tcp, whatever, everything basically.

halfadot · 5 months ago
> The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side. This lack of will to learn will not change the outcomes for you regardless of whether you're using an LLM. You can spend as much time as you want asking the LLM for in-depth explanations and examples to test your understanding.

So many of the criticisms of coding with LLMs I've seen really do sound like they're coming from people who already started with a pre-existing bias, fiddled with it for a short bit (or worse, never actually tried it at all) and assumed their limited experience is the be-all end-all of the subject. Either that, or they're typical skill issues.

weego · 5 months ago
Who is this endless cohort of developers who need to maintain a 'deep understanding' of their code? I'd argue a high % of all code written globally on any given day that is not some flavour of boilerplate, while written with good intentions, is ultimately just short-lived engineering detritus, if it even gets a code review to pass.
_fat_santa · 5 months ago
> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

Here's mine: I use Cline occasionally to help me code, but more and more I find myself just coding by hand. The reason is pretty simple: with these AI tools you for the most part replace writing code with writing a prompt.

I look at it like this, if writing the prompt, and the inference time is less than what it would take me to write the code by hand I usually go the AI route. But this is usually for refactoring tasks where I consider the main bottleneck to be the speed at which my fingers can type.

For virtually all other problems it goes something like this: I can do X task in 10 minutes if I code it manually, or I can prompt AI to do it and, by the time I finish crafting the prompt and execute, it takes me about 8 minutes. Yes, that's a savings of 2 minutes on that task, and that's all fine and good assuming the AI didn't make a mistake. If I have to go back and re-prompt or manually fix something, then all of a sudden the time it took me to complete that task is 10-12 minutes with AI. Here the best case scenario is that I just spent some AI credits for zero time savings, and the worst case is that I spent AI credits AND the task was slower in the end.

With all sorts of tasks I now find myself making this calculation and for the most part, I find that doing it by hand is just the "safer" option, both in terms of code output but also in terms of time spent on the task.

didibus · 5 months ago
> The reason is pretty simple: with these AI tools you for the most part replace writing code with writing a prompt

I'm convinced I spend more time typing and end up typing more letters and words when AI coding than when not.

My hands are hurting me more from the extra typing I have to do now lol.

I'm actually annoyed they haven't integrated their voice to text models inside their coding agents yet.

rapind · 5 months ago
I find myself often writing pseudo code (CLI) to express some ideas to the agent. Code can be a very powerful and expressive means of communication. You don't have to stop using it when it's the best / easiest tool for a specific case.

That being said, these agents may still just YOLO and ignore your instructions on occasion, which can be a time suck, so sometimes I still get my hands dirty too :)

bccdee · 5 months ago
> the idea that technology forces people to be careless

I don't think anyone's saying that about technology in general. Many safety-oriented technologies force people to be more careful, not less. The argument is that this technology leads people to be careless.

Personally, my concerns don't have much to do with "the part of coding I enjoy." I enjoy architecture more than rote typing, and if I had a direct way to impose my intent upon code, I'd use it. The trouble is that chatbot interfaces are an indirect and imperfect vector for intent, and when I've used them for high-level code construction, I find my line-by-line understanding of the code quickly slips away from the mental model I'm working with, leaving me with unstable foundations.

I could slow down and review it line-by-line, picking all the nits, but that moves against the grain of the tool. The giddy "10x" feeling of AI-assisted coding encourages slippage between granular implementation and high-level understanding. In fact, thinking less about the concrete elements of your implementation is the whole advantage touted by advocates of chatbot coding workflows. But this gap in understanding causes problems down the line.

Good automation behaves in extremely consistent and predictable ways, such that we only need to understand the high-level invariants before focusing our attention elsewhere. With good automation, safety and correctness are the path of least resistance.

Chatbot codegen draws your attention away without providing those guarantees, demanding best practices that encourage manually checking everything. Safety and correctness are the path of most resistance.

godelski · 5 months ago
(Adding to your comment, not disagreeing)

  > The argument is that this technology leads people to be careless.
And this will always be a result of human preference optimization. There's a simple fact: humans prefer lies that they don't know are lies over lies that they do know are lies.

We can't optimize for an objective truth when that objective truth doesn't exist. So while we do our best to align our models, they simultaneously optimize their ability to deceive us. There's little to no training in that loop where outputs are deeply scrutinized, because we can't scale that type of evaluation. We end up rewarding models that are incorrect in their output.

We don't optimize for correctness, we optimize for the appearance of correctness. We shouldn't confuse the two.

The result is: when LLMs make errors, those errors are difficult for humans to detect.

This results in a fundamentally dangerous tool, does it not? Good tools, when they error or fail, do so safely and loudly. Instead this one fails silently. That doesn't mean you shouldn't use the tool, but that you need to do so with an abundance of caution.

  > I could slow down and review it line-by-line, picking all the nits, but that moves against the grain of the tool.
Actually the big problem I have with coding with LLMs is that it increases my cognitive load, not decreases it. Being overworked results in carelessness. Who among us does not make more mistakes when they are tired or hungry?

That's the opposite of lazy, so hopefully that answers the OP.

lelanthran · 5 months ago
> If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting.

This argument is wearing a little thin at this point. I see it multiple times a day, rephrased a little bit.

The response, "How well do you think your thinking will go if you had not spent years doing the 'practice' part?", is always followed by either silence or a non-sequitor.

So, sure, keep focusing on the 'thinking' part, but your thinking will get more and more shallow without sufficient 'doing'.

t0mas88 · 5 months ago
Separate from AI, as your role becomes more tech lead / team lead / architect you're also not really "doing" as much and still get involved in a lot of thinking by helping people get unstuck. The thinking part still builds experience. You don't need to type the code to have a good understanding of how to approach problems and how to architect systems. You just need to be making those decisions and gaining experience from them.
kristianbrigman · 5 months ago
It's about as much time as I think about caching artifacts and branch mispredict latencies. Things I cared a lot about when I was doing assembly, but don't even think about really in Python (or C++).

My assembly has definitely rotted and I doubt I could do it again without some refreshing, but it's been replaced with other higher-level skills, some of which are general, like using correct data structures and algorithms, and others that are more specific, like knowing some pandas magic and React Flow basics.

I expect this iteration I'll get a lot better at systems design, UML, algorithm development, and other things that are slightly higher level. And probably reverse-engineering as well :) The computer engineering space is still vast IMHO....

johnfn · 5 months ago
Do you think that all managers and tech leads atrophy because they don’t spend all day “doing”? I think a good number of them become more effective because they delegate the simple parts of their work that don’t require deep thought, leaving them to continue to think hard about the thorniest areas of what they’re working on.

Or perhaps you’re asking how people will become good at delegation without doing? I don’t know — have you been “doing” multiple years of assembly? If not, how are you any good at Python (or whatever language you currently use)? Probably you’d say you don’t need to think about assembly because it has been abstracted away from you. I think AI operates similarly by changing the level of abstraction you can think at.

jayd16 · 5 months ago
My take is just that debugging is harder than writing so I'd rather just write it instead of debugging code I didn't write.
rwmj · 5 months ago
I think it's more like code review, which really is the worst part of coding. With AI, I'll be doing less of the fun bits (writing, debugging those super hard customer bugs), and much much more code review.
erichocean · 5 months ago
Are people really not using LLMs to debug code?
sciencejerk · 5 months ago
@shredprez the website in your bio appears to sell AI-driven products: "Design anything in Claude, Cursor, or VS Code"

Consider leaving a disclaimer next time. Seems like you have a vested interest in the current half-baked generation of AI products succeeding

moffkalast · 5 months ago
Conflict of interest or not, he's not really wrong. Anyone shipping code in a professional setting doesn't just push to prod after 5 people say LGTM to their vibe coded PR, as much as we like to joke around with it. There are stages of tests and people are responsible for what they submit.

As someone writing lots of research code, I do get caught being careless on occasion since none of it needs to work beyond a proof of concept, but overall being able to just write out a spec and test an idea out in minutes instead of hours or days has probably made a lot of things exist that I'd otherwise never be arsed to bother with. LLMs have improved enough in the past year that I can easily 0-shot lots of ad-hoc visualization stuff or adapters or simple simulations, filters, etc. that work on the first try and with probably fewer bugs than I'd include in the first version myself. Saves me actual days and probably a carpal tunnel operation in the future.

closeparen · 5 months ago
It's "anti-AI" from the perspective of an investor or engineering manager who assumes that 10x coding speed should 10x productivity in their organization. As a staff IC, I find it a realistic take on where AI actually sits in my workflow and how it relates to juniors.
visarga · 5 months ago
> assumes that 10x coding speed should 10x productivity

This same error in thinking happens in relation to AI agents too. Even if the agent is perfect (not really possible), if other links in the chain are slower, the overall speed of the loop still does not increase. To increase productivity with AI you need to think of the complete loop, and reorganize and optimize every link in the chain. In other words, a business has to redesign itself for AI, not just apply AI on top.

The same is true for coding with AI: you can't just do your old style of manual coding but with AI, you need a new style of work. Maybe you start with constraint design, requirements, and tests, and then you let the agent loose and don't check the code; you need to automate that part with comprehensive automated testing. The LLM is like a blind force, you need to channel it to make it useful. LLM + constraints == accountable LLM, but LLM without constraints == unaccountable.

overfeed · 5 months ago
> AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting

It does not! If you're using interactive IDE AI, you spend your time keeping the AI on the rails, and reminding it what the original task is. If you're using agents, then you're delegating all the mid-level/tactical thinking, and perhaps even the planning, and you're left with the task of writing requirements granular enough for an intern to tackle, but this hews closer to "Business Analyst" than "Software Engineer".

Marha01 · 5 months ago
From my experience, current AI models stay on the rails pretty well. I don't need to remind them of the task at hand.
malyk · 5 months ago
Using an agentic workflow does not require you to delegate the thinking. Agents are great at taking exactly what you want to do and executing. So spend an extra few minutes and lay out the architecture YOU want, then let the AI do the work.
HiPhish · 5 months ago
> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

I think this might simply be how the human brain works. Take autonomous driving as an example: while the car drives on its own the human driver is supposed to be alert and step in if needed. But does that work? Or will the driver's mind wander off because the car has been driving properly for the last half hour? My gut feeling is that it's inevitable that we'll eventually just shut out everything that goes smoothly and by the time it doesn't it might be too late.

We are not that different from our ancestors who used to roam the forests, trying to eat before they get eaten. In such an environment there is constantly something going on, some critters crawling, some leaves rustling, some water flowing. It would drive us crazy if we could not shut out all this regular noise. It's only when an irregularity appears that our attention must spring into action. When the leaves rustle differently than they are supposed to there is a good chance that there is some prey or a predator to be found. This mechanism only works if we are alert. The sounds of the forest are never exactly the same, so there is constant stimulation to keep up on our toes. But if you are relaxing in your shelter the tension is gone.

My fear is that AI is too good, to the point where it makes us feel like we're in our shelter rather than out in the forest.

halfcat · 5 months ago
> My gut feeling is that it's inevitable that we'll eventually just shut out everything that goes smoothly and by the time it doesn't it might be too late.

Yes. Productivity accelerates at an exponential rate, right up until it drives off a cliff (figuratively or literally).

rhetocj23 · 4 months ago
Ah yes, finally someone who gets it. You're a smart fella.

I view the story of LLMs akin to the Concorde. Something catastrophic will happen that will be too big to ignore and all trust will implode.

didibus · 5 months ago
> If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting

I think this depends. I prefer the thinking bit, but it's quite difficult to think without the act of coding.

It's how whiteboarding or writing can help you think. Being in the code helps me think, allows me to experiment, uncover new learnings, and evolve my thinking in the process.

Though maybe we're talking about thinking of different things? Are you thinking in the sense of what a PM thinks about ? User features, user behavior, user edge cases, user metrics? Or do you mean thinking about what a developer thinks about, code clarity, code performance, code security, code modularization and ability to evolve, code testability, innovative algorithms, innovative data-structure, etc. ?

nativeit · 5 months ago
I’m struggling to understand how they are asserting one follows from the other. I’m not a SWE, but do a lot of adjacent types of work (infrastructure automation and scripting, but also electronics engineering, and I’m also a musician), and the “thinking” part where I get to deploy logic and reasoning to solve novel challenges is certainly a common feature among these activities I certainly enjoy, and I feel it’s a core component of what I’m doing.

But the result of that thinking would hardly ever align neatly with whatever an LLM is doing. The only time it wouldn’t be working against me would be drafting boilerplate and scaffolding project repos, which I could already automate with more prosaic (and infinitely more efficient) solutions.

Even if it gets most of what I had in mind correct, the context switching between “creative thinking” and “corrective thinking” would be ruinous to my workflow.

I think the best case scenario in this industry will be workers getting empowered to use the tools that they feel work best for their approach, but the current mindset that AI is going to replace entire positions, and that individual devs should be 10x-ing their productivity is both short-sighted and counterproductive in my opinion.

strogonoff · 5 months ago
I never made a case against LLMs and similar ML applications in the sense that they negatively impact mental agility. The cases I made so far include, but are not limited to:

— OSS exploded on the promise that software you voluntarily contributed to remains available to benefit the public, and that a large corporation cannot simply take your work tomorrow and make it part of their product, never contributing anything back. Commercially operated LLMs threaten OSS both by laundering code and by overwhelming maintainers with massive, automatically produced patches and merge requests that are sometimes never read by a human.

— Being able to claim that any creative work is merely a product of an LLM (which is a reality now for any new artist, copywriter, etc.) removes a large motivator for humans to do fully original creative work and is detrimental to creativity and innovation.

— The ends don’t justify the means, as a general philosophical argument. Large-scale IP theft had been instrumental at the beginning of this new wave of applied ML—and it is essentially piracy, except done by the powerful and wealthy against the rest of us, and for profit rather than entertainment. (They certainly had the money to license swaths of original works for training, yet they chose to scrape and abuse the legal ambiguity due to requisite laws not yet existing.)

— The plain old practical “it will drive more and more people out of jobs”.

— Getting everybody used to the idea that LLMs now mediate access to information increases inequality (making those in control of this tech and their investors richer and more influential, while pushing the rest—most of whom are victims of the aforementioned reverse piracy—down the wealth scale and often out of jobs) more than it levels the playing field.

— Diluting what humanity is. Behaving like a human is how we manifest our humanness to others, and how we deserve humane treatment from them; after entities that walk and talk exactly like a human would, yet which we can be completely inhumane to, become commonplace, I expect over time this treatment will carry over to how humans treat each other—the differentiator has been eliminated.

— It is becoming infeasible to operate open online communities due to bot traffic that now dwarfs human traffic. (Like much of the above, this is not a point against LLMs as technology, but rather against the way they have been trained and operated by large corporate/national entities—if an ordinary person wanted to self-host their own, they would simply not have the technical capability to cause disruption at this scale.)

This is just what I could recall off the top of my head.

m0rde · 5 months ago
Good points here, particularly the ends not justifying the means.

I'm curious for more thoughts on "will drive more and more people out of jobs". Isn't this the same for most advances in technology (e.g., the steam engine, computers, automated toll plazas, etc.)? In some ways, it's motivation for making progress; you get rid of mundane jobs. The dream is that you free those people to do something more meaningful, but I'm not going to be that blindly optimistic :) still, I feel like "it's going to take jobs" is the weakest of the arguments here.

benoau · 5 months ago
> — The ends don’t justify the means. IP theft that lies in the beginning of this new wave of applied ML is essentially piracy

Isn't "AI coding" trained almost entirely on open source code and published documentation?

LittleCloud · 5 months ago
> There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work. If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting. But if you like the doing, the typing, fiddling with knobs and configs, etc etc, all AI does is take the good part away.

I don't know... that seems like a false dichotomy to me. I think I could enjoy both but it depends on what kind of work. I did start using AI for one project recently: I do most of the thinking and planning, and for things that are enjoyable to implement I still write the majority of the code.

But for tests, build system integration, ...? Well that's usually very repetitive, low-entropy code that we've all seen a thousand times before. Usually not intellectually interesting, so why not outsource that to the AI.

And even for the planning part of a project there can be a lot of grunt work too. Haven't you had the frustrating experience of attempting a re-factoring and finding out midway it doesn't work because of some edge case. Sometimes the edge case is interesting and points to some deeper issue in the design, but sometimes not. Either way it sure would be nice to get a hint beforehand. Although in my experience AIs aren't at a stage to reason about such issues upfront --- no surprise since it's difficult for humans too --- of course it helps if your software has an oracle for if the attempted changes are correct, i.e. it is statically-typed and/or has thorough tests.

bluefirebrand · 5 months ago
> Usually not intellectually interesting, so why not outsource that to the AI.

Because it still needs to be correct, and AI still is not producing correct code

mhitza · 5 months ago
I agree with your comment's sentiment, but I believe that you, like many others, have the cycle in the wrong order. I don't fault anyone for it, because it's the flow that got handed down to us from the days of waterfall development.

My strong belief after almost twenty years of professional software development is that both us and LLMs should be following the order: build, test, reflect, plan, build.

Writing out the implementation is the process of materializing the requirements, and learning the domain. Once the first version is out, you can understand the limits and boundaries of the problem and then you can plan the production system.

This is very much in line with Fred Brooks' "build one to throw away" (written ~40 years ago in "The Mythical Man-Month"; while often quoted, if you have never read his book I urge you to do so, it's both entertaining and enlightening on our software industry), startup culture (if you remove the "move fast break things" mantra), and governmental pilot programs (the original "minimum viable").

bgwalter · 5 months ago
"AI" does not encourage real thinking. "AI" encourages hand waving grand plans that don't work, CEO style. All pro-"AI" posts focus on procedures and methodologies, which is just LARPing thinking.

Using "AI" is just like speed reading a math book without ever doing single exercise. The proponents rarely have any serious public code bases.

rhetocj23 · 4 months ago
Exactly.

And this should not be a surprise at all. Humans are optimisers of truth, NOT maximisers. There is a subtle and nuanced difference. Very few actually spend their entire existence being maximisers; it's pretty exhausting to be of that kind.

Optimising = we look for what feels right or surpasses some threshold of "looks about right". Maximising = we think deeply and logically reason our way to what is right and conduct tests to ensure it is so.

Now if you have the discipline to choose when to shift between the two modes this can work. Most people do not though. And therein lies the danger.

cgh · 5 months ago
A surprising conclusion to me at least is that a lot of programmers simply don’t like to write code.
belter · 5 months ago
> "AI" encourages hand waving grand plans that don't work

You described the current AI Bubble.

AnotherGoodName · 5 months ago
I see a lot of comments like this and it reflects very negatively on the engineers who write it, imho. As in: I've been a staff-level engineer at both Meta and Google and a lead at various startups in my time. I post open source projects here on HN from time to time that are appreciated. I know my shit. If someone tells me that LLMs aren't useful, I think to myself "wow, this person is so unable to learn new tools they can't find value in one of the biggest changes happening today".

That's not to say that LLMs as good as some of the more outrageous claims. You do still need to do a lot of work to implement code. But if you're not finding value at all it honestly reflects badly on you and your ability to use tools.

The craziest thing is I see the above type of comment on LinkedIn regularly. Which is jaw dropping. Prospective hiring managers will read it and think "Wow, you think advertising a lack of knowledge is helpful to your career?" Big tech co's are literally firing people with attitudes like the above. There's no room for people who refuse to adapt.

I put absolute LLM negativity right up there with comments like "i never use a debugger and just use printf statements". To me it just screams you never learnt the tool.

abustamam · 4 months ago
> The plan-build-test-reflect loop is equally important when using an LLM to generate code, as anyone who's seriously used the tech knows

Yeah I'm actually quite surprised that so many people are just telling AI to do X without actually developing a maintainable plan to do so first. It's no wonder that so many people are anti-vibe-coding — it's because their exposure to vibe coding is just telling Replit or Claude Code to do X.

I still do most of my development in my head, but I have a go-to prompt I ask Claude Code when I'm stuck: "without writing any code, and maintaining existing patterns, tell me how to do X." It'd spit out some stuff, I'd converse with it to make sure it is a feasible solution that would work long term, then I'd tell it to execute the plan. But the process still starts in my head, not with a prompt.

elicash · 5 months ago
My approach has been to "yolo" my way through the first time, yes in a somewhat lazy and careless manner, get a working version, and then build a second time more thoughtfully.
stein1946 · 5 months ago
> in every industry where AI has achieved any level of mastery.

Which industries are those? What does that mastery look like?

> There's a divide between people ...

No, there is not. If one is not willing to figure out a couple of ffmpeg flags, comb through k8s controller code to see what is possible, and fix that booting error in their VMs, then failure in "mental experiences" is certain.

The most successful people I have met in this profession are the ones who absolutely do not tolerate magic and need to know what happens from the moment they press the ON button on their machine till the moment they turn it OFF again.

jmull · 5 months ago
> There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work.

Pretty clearly that’s not the divide anyone’s talking about, right?

Your argument should maybe be something about thinking about the details vs thinking about the higher level. (If you were to make that argument, my response would be: both are valuable and important. You can only go so far working at one level. There are certainly problems that can be solved at one level, but also ones that can’t.)

wat10000 · 5 months ago
I suspect the root of the disagreement is more about what kinds of work people do. There are many different kinds of programming and you can’t lump them all together. We shouldn’t expect an AI tool to be a good fit for all of them, any more than we should expect Ruby to be a good fit for embedded development or C to be a good fit for web apps.

My experience with low level systems programming is that it’s like working with a developer who is tremendously enthusiastic but has little skill and little understanding of what they do or don’t understand. Time I would have spent writing code is replaced by time spent picking through code that looks superficially good but is often missing key concepts. That may count as “thinking” but I wouldn’t categorize it as the good kind.

Where it excels for me is as a superpowered search (asking it to find places where we play a particular bit-packing game with a particular type of pointer works great and saves a lot of time) and for writing one-off helper scripts. I haven’t found it useful for writing code I’m going to ship, but for stuff that won’t ship it can be a big help.

It’s kind of like an excavator. If you need to move a bunch of dirt from A to B then it’s great. If you need to move a small amount of dirt around buried power lines and water mains, it’s going to cause more trouble than it’s worth.

Balinares · 5 months ago
I think this is one of the most cogent takes on the topic that I've seen. Thanks for the good read!

It's also been my experience that AI will speed up the easy / menial stuff. But that's just not the stuff that takes up most of my time in the first place.

chamomeal · 5 months ago
Idk I feel like even without using LLMs the job is 90% thinking and planning. And it’s nice to go the last 10% on your own to have a chance to reflect and challenge your earlier assumptions.

I actually end up using LLMs in the planning phase more often than the writing phase. Cursor is super good at finding relevant bits of code in unfamiliar projects, showing me what kind of conventions and libraries are being used, etc.

ChrisMarshallNY · 5 months ago
It's like folks complaining that people don't know how to code in Assembly or Machine Language.

New-fangled compiled languages...

Or who use modern, strictly-typed languages.

New-fangled type-safe languages...

As someone that has been coding since it was wiring up NAND gates on a circuit board, I'm all for the new ways, but there will definitely be a lot of mistakes, jargon, and blind alleys; just like every other big advancement.

martin-t · 5 months ago
The last paragraph feels more wrong the more I think about it.

Imagine an AI as smart as some of the smartest humans, able to do everything they intellectually do but much faster, cheaper, 24/7 and in parallel.

Why would you spend any time thinking? All you'll be doing is the things an AI can't do: 1) feeding it input from the real world and 2) trying out its output in the real world.

1) Could be finding customers, asking them to describe their problem, arranging meetings, driving to the customer's factory to measure stuff and take photos for the AI, etc.

2) Could be assembling the prototype, soldering, driving it to the customer's factory, signing off the invoice, etc.

None of that is what I as a programmer / engineer enjoy.

If actual human-level AI arrives, it'll do everything from concept to troubleshooting, except the parts where it needs presence in the physical world and human dexterity.

If actual human-level AI arrives, we'll become interfaces.

gspr · 4 months ago
For me it's simply this: the best thing about computers and programming is that they do exactly what the code I write says they'll do. That is a quality that humans and human/natural languages don't have. To me, LLMs feel like replacing the best property of computers with a (in this context) terrible property of humans.

Why would I want a fuzzy, vague, imprecise, up-to-interpretation programming language? I already have to struggle with that in documentation, specifications, peers and – of course – myself. Why would I take the one precise component and make it suffer from the same?

This contrasts of course with tasks such as search, where I'm not quite able to precisely express what I want. Here I find LLMs to be a fantastic advance. Same for e.g. operations between imprecise domains, like between natural languages.

jaredklewis · 4 months ago
> There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work.

Does this divide between "physical" and "mental" exist? Programming languages are formal languages that allow you to precisely and unambiguously express your ideas. I would say that "fiddling" with the code (as you say) is a kind of mental activity.

If there is actually someone out there that only dislikes AI coding assistants because they enjoy the physical act of typing and now have to do less of it (I have not seen this blog post yet), then I might understand your point.

latexr · 5 months ago
> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

Are you genuinely saying you never saw a critique of AI on environmental impact, or how it amplifies biases, or how it widens the economic gap, or how it further concentrates power in the hands of a few, or how it facilitates the dispersion of misinformation and surveillance, directly helping despots erode civil liberties? Or, or, or…

You don’t have to agree with any of those. You don’t even have to understand them. But to imply anti-AI arguments “hinge on the idea that technology forces people to be lazy/careless/thoughtless” is at best misinformed.

Go grab whatever your favourite LLM is and type “critiques of AI”. You’ll get your takes.

jayd16 · 5 months ago
I'm not an AI zealot but I think some of these are over blown.

The energy cost argument is nonsensical unless you pin down a value-out vs. value-in ratio, and some would argue the output is highly valuable and the input cost is priced in.

I don't know if it will end up being a concentrated power. It seems like local/open LLMs will still be in the same ballpark. Despite the absurd amounts of money spent so far the moats don't seem that deep.

Baking in bias is a huge problem.

The genie is out of the bottle as far as people using it for bad. Your own usage won't change that.

kiitos · 5 months ago
> If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting...

What about if the "knowing/understanding" bit is your favorite part?

swiftcoder · 5 months ago
> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

What makes you regard this as an anti-AI take? To my mind, this is a very pro-AI take

analog8374 · 5 months ago
Here's one

AI can only recycle the past.

grim_io · 5 months ago
Most of us do nothing but remix the past solutions.

Since we don't know what else might already exist in the world without digging very deep, we fool ourselves into thinking that we do something very original and unique.

Vegenoid · 5 months ago
I'm not sure if you are insinuating that the article is an anti-AI take, but in case it wasn't clear, it's not. It is about doing just what you suggested:

> Just as tech leads don't just write code but set practices for the team, engineers now need to set practices for AI agents. That means bringing AI into every stage of the lifecycle

The technology doesn't force people to be careless, but it does make it very easy to be careless, without having to pay the costs of that carelessness until later.

layer8 · 5 months ago
My experience is that you need the “physical” coding work to get a good intuition of the mechanics of software design, the trade-offs and pitfalls, the general design landscape, and so on. I disagree that you can cleanly separate the “mental” portion of the work. Iterating on code builds your mental models, in a way that merely reviewing code does not, or only to a much more superficial degree.
pluto_modadic · 5 months ago
it's mostly seeing juniors and project managers writing garbage that creates a massive pile of BS for us to clean up that pisses us off.
resonious · 5 months ago
I actually didn't really interpret this as anti-AI. In the end it was pretty positive about AI and I pretty much agree with the conclusion.

Though I will also dogpile on the "thankless tasks" remark and say that the stuff that I have AI blast through is very thankless. I do not enjoy changing 20 different files to account for a change in struct definition.

raincole · 5 months ago
The first two paragraphs are so confusing. Since Claude Code became a thing my "thinking" phase has been much, much longer than before.

I honestly don't know how one can use Claude Code (or other AI agents) in a 'coding first thinking later' manner.

pg3uk · 4 months ago
What you've described there is the difference between a good developer and a bad one.

A dev that spends an undue amount of time fiddling with knobs and configs probably sucks. Their mind isn't on the problem that needs to be solved.

croes · 5 months ago
>I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

It’s not force but simply human nature. We invent tools to do less. That’s the whole point of tools.

giantg2 · 5 months ago
"I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless."

I'm not impressed by AI because it generates slop. Copilot can't write a thorough working test suite to save its life. I think we need a design and test paradigm to properly communicate with AI for it to build great software.

benterix · 4 months ago
> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

Not forces, encourages.

nimithryn · 5 months ago
I think that the problem is, at the end of the day, the engineer must specify exactly what they want the program to do.

You can do this in Python, or you can do this in English. But at the end of the day the engineer must input the same information to get the same behavior. Maybe LLMs make this a bit more efficient but even in English it is extremely hard to give exact specification without ambiguity (maybe even harder than Python in some cases).

EGreg · 5 months ago
Most of my anti-AI takes are either:

1) Bad actors using AI at scale to do bad things

2) AI just commodifying everything and making humans into zoo animals

specproc · 5 months ago
My anti AI take is that it's no fun.

I'm on a small personal project with it intentionally off, and I honestly feel I'm moving through it faster and certainly having a better time. I also have a much better feel for the code.

These are all just vibes, in the parlance of our times, but it's making me question why I'm bothering with LLM assisted coding.

Velocity is rarely the thing in my niche, and I'm not convinced babysitting an agent is all in all faster. It's certainly a lot less enjoyable, and that matters, right?

add-sub-mul-div · 5 months ago
More specifically for (1), the combined set of predators, advertisers, businesses, and lazy people using it to either prey or enshittify or cheat will make up the vast majority of use cases.
_heimdall · 4 months ago
Read Eliezer Yudkowsky. He raises plenty of anti-AI arguments, none of them have to do with laziness.
otabdeveloper4 · 5 months ago
> technology forces people to be lazy/careless/thoughtless

AI isn't a technology. (No more than asking your classmate to do your homework for you is a "technology".)

Please don't conflate between AI and programming tools. AI isn't a tool, it is an oracle. There's a huge fundamental gap here that cannot be bridged.

solumunus · 4 months ago
It’s crazy to me that some people love the pressing keys parts so much.
haskellshill · 4 months ago
Funny that you imagine AI-coders doing any sort of thinking
agentcoops · 5 months ago
Completely agreed. Whether it be AI or otherwise, I consider anything that gives me more time to focus on figuring out the right problem to solve or iterating on possible solutions to be good.

Yet every time that someone here earnestly testifies to whatever slight but real use they’ve found of AI, an army of commentators appears ready to gaslight them into doubting themselves, always citing that study meant to have proven that any apparent usefulness of AI is an illusion.

At this point, even just considering the domain of programming, there’s more than enough testimony to the contrary. This doesn’t say anything about whether there’s an AI bubble or overhype or anything about its social function or future. But, as you note, it means these cardboard cutout critiques of AI need to at least start from where we are.

blehn · 4 months ago
> There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work

Eh, physical and mental isn't the divide — it's more like people who enjoy code itself as a craft and people who simply see it as a means to an end (the application). Much like a writer might labor over their prose (the code) while telling a story (the application). Writing code is far more than the physical act of typing to those people.

martin-t · 5 months ago
> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

Here's a couple points which are related to each other:

1) LLMs are statistical models of text (code being text). They can only exist because huge for-profit companies ingested a lot of code under proprietary, permissive and copyleft licenses, most of which at the very least require attribution, some reserve rights of the authors, some give extra rights to users.

LLM training mixes and repurposes the work of human authors in a way which gives them plausible deniability against any single author, yet the output is clearly only possible because of the input. If you trained an LLM on only google's source code, you'd be sued by google and it would almost certainly reproduce snippets which can be tracked down to google's code. But by taking way, way more input data, the blender cuts them into such fine pieces that the source is undetectable, yet the output is clearly still based on the labor of other people who have not been paid.

Hell, GPT-3 still produced verbatim snippets of the fast inverse square root and probably other well-known but licensed code. And GitHub has a checkbox which scans for verbatim matches so you don't accidentally infringe copyright by using Copilot in a way which is provable. Which means they take extra care to make it unprovable.

If I "write a book" by taking an existing book but replacing every word with a synonym, it's still plagiarism and copyright infringement. It doesn't matter if the mechanical transformation is way more sophisticated, the same rules should apply.

2) There's no opt out. I stopped writing open source over a year ago when it became clear all my code is unpaid labor for people who are much richer than me and are becoming richer at a pace I can't match through productive work because they own assets which give them passive income. And there's no license I can apply which will stop this. I am not alone. As someone said, "Open-Source has turned into a form of unpaid internship"[0]. It might lead to a complete death of open source because nobody will want to see their work fed into a money printing machine (subscription based LLM services) and get nothing in return for their work.

> But if you like the doing, the typing, fiddling with knobs and configs, etc etc, all AI does is take the good part away.

I see quite the opposite. For me, what makes programming fun is deeply understanding a problem and coming up with a correct, clear to understand, elegant solution. But most problems a working programmer has are just variations of what other programmers had. The remaining work is prompting the LLMs in the right way that they produce this (describing the problem instead of thinking about its solutions) and debugging bugs LLMs generated.

A colleague vibe coded a small utility. It's useful, but it's broken in so many ways: the UI falls apart when some text gets too long, labels are slightly incorrect and misleading, some fields handle decimal numbers in weird ways, etc. With manually written code, a programmer would get these right the first time. Potential bugs become obvious as you're writing the code because you are thinking about it. But they do not occur to someone prompting an LLM. Now I can either fix them manually, which is time consuming and boring, or I can try prompting an LLM about every single one, which is less time consuming but more boring and likely to break something else.

Most importantly, using an LLM does not give me deeper understanding of the problem or the solution, it keeps knowledge locked in a black box.

[0]: https://aria.dog/barks/forklift-certified-license/

nchmy · 5 months ago
Strongly agree with this
nenenejej · 5 months ago
OK: AI is slow when using said loop. AI is like poker. You bet with time. 60 seconds to type a prompt and generate a response. Oh, it is wrong, ok, let's gamble another 60 seconds...

At least when doing stuff the old way you learn something if you waste time.

That said AI is useful enough and some poker games are +EV.

So this is more caution-AI than anti-AI take. It is more an anti-vibe-koolaid take.

lukaslalinsky · 5 months ago
This depends entirely on how you use said AI. You can have it read code, explain why it was done this or that way, and once it has the context you ask it to think about implementing feature X. There is almost no gambling involved there, at best the level of frustration you would have with a colleague. If you start from a blank context and tell it to implement the full app, you are purely gambling.
trepaura · 5 months ago
I'll give you what you're asking for. Genuine academic research has shown a clear result: AI is slower than an experienced engineer. It doesn't speed up the process because of the loop you describe; it's terrible at it.
tptacek · 5 months ago
It's a fine post, but two canards in here:

First, skilled engineers using LLMs to code also think and discuss and stare off into space before the source code starts getting laid down. In fact: I do a lot, lot more thinking and balancing different designs and getting a macro sense of where I'm going, because that's usually what it takes to get an LLM agent to build something decent. But now that pondering and planning gets recorded and distilled into a design document, something I definitely didn't have the discipline to deliver dependably before LLM agents.

Most of my initial prompts to agents start with "DO NOT WRITE ANY CODE YET."

Second, this idea that LLMs are like junior developers that can't learn anything. First, no they're not. Early-career developers are human beings. LLMs are tools. But the more general argument here is that there's compounding value to working with an early-career developer and there isn't with an LLM. That seems false: the LLM may not be learning anything, but I am. I use these tools much more effectively now than I did 3 months ago. I think we're in the very early stages of figuring out how to get good product out of them. That's obvious compounding value.

badsectoracula · 5 months ago
> the LLM may not be learning anything, but I am

Regardless of that, personally i'd really like it if they could actually learn from interacting with them. From a user's perspective what i'd like to do is to be able to "save" the discussion/session/chat/whatever, with everything the LLM learned so far, to a file. Then later be able to restore it and have the LLM "relearn" whatever is in it. Now, you can already do this with various frontend UIs, but the important part in what i'd want is that a) this "relearn" should not affect the current context window (TBH i'd like that entire concept to be gone but that is another aspect) and b) it should not be some sort of lossy relearning that loses information.

There are some solutions, but they are all band-aids over fundamental issues. For example, you can occasionally summarize whatever was discussed so far and restart the discussion. But obviously that is just some sort of lossy memory compression (i do not care that humans can do the same, LLMs are software running on computers, not humans). Or you could use some sort of RAG, but AFAIK this works via "prompt triggering" - i.e. only via your "current" interaction - so even if the knowledge is in there, if whatever you are doing now wouldn't trigger its index, the LLM will be oblivious to it.

What i want is, e.g., if i tell to the LLM that there is some function `foo` used to barfize moo objects, then go on and tell it other stuff way beyond whatever context length it has, save the discussion or whatever, restore it next day, go on and tell it other stuff, then ask it about joining splarfers, it should be able to tell me that i can join splarfers by converting them to barfized moo objects even if i haven't mentioned anything about moo objects or barfization since my previous session yesterday.

(also as a sidenote, this sort of memory save/load should be explicit since i'd want to be able to start from clean slate - but this sort of clean slate should be because i want to, not as a workaround to the technology's limitations)

didibus · 5 months ago
You want something that requires an engineering breakthrough.

Models don't have memory, and they don't have understanding or intelligence beyond what they learned in training.

You give them some text (as context), and they predict what should come after (as the answer).

They’re trained to predict over some context size, and what makes them good is that they learn to model relationships across that context in many dimensions. A word in the middle can affect the probability of a word at the end.

If you insanely scale the training and inference to handle massive contexts, which is currently far too expensive, you run into another problem: the model can't reliably tell which parts of that huge context are relevant. Irrelevant or weakly related tokens dilute the signal and bias it in the wrong direction; the distribution flattens or just ends up in the wrong place.

That's why you have to make sure you give it relevant, carefully attended context, a.k.a. context engineering.

It won't be able to look at a 100kloc code base and figure out what's relevant to the problem at hand, and what is irrelevant. You have to do that part yourself.

Or, what some people do is try to automate that part a little as well, by using another model to go research and build that context. That's where the research->plan->build loop people talk about comes from. And it's best to keep to small tasks; otherwise the context needed for a big task will be too big.
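
To make "do that part yourself" concrete, here is a minimal sketch of hand-rolled context selection. The repo path, keyword scoring, and character budget are all made up for illustration; it's the idea, not anyone's actual tooling:

    import pathlib

    # Naive "context engineering": pick only the source files that look
    # relevant to the task, and cap how much gets stuffed into the prompt.
    def build_context(repo: str, keywords: list[str], budget_chars: int = 30_000) -> str:
        scored = []
        for path in pathlib.Path(repo).rglob("*.py"):
            text = path.read_text(errors="ignore")
            score = sum(text.lower().count(k.lower()) for k in keywords)
            if score:
                scored.append((score, path, text))
        scored.sort(key=lambda t: t[0], reverse=True)  # most relevant files first

        chunks, used = [], 0
        for _, path, text in scored:
            if used + len(text) > budget_chars:
                break
            chunks.append(f"# file: {path}\n{text}")
            used += len(text)
        return "\n\n".join(chunks)

    prompt = ("Relevant code:\n" + build_context("myrepo", ["invoice", "rounding"])
              + "\n\nTask: fix the rounding bug in invoice totals.")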

epiccoleman · 5 months ago
I'm using a "memory" MCP server which basically just stores facts to a big json file and makes a search available. There's a directive in my system prompt that tells the LLM to store facts and search for them when it starts up.

It seems to work quite well and I'll often be pleasantly surprised when Claude retrieves some useful background I've stored, and seems to magically "know what I'm talking about".

Not perfect by any means and I think what you're describing is maybe a little more fundamental than bolting on a janky database to the model - but it does seem better than nothing.
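
Under the hood the idea really is that simple. Roughly something like this (an illustrative sketch of the fact-store part only, not the actual server's code; a real MCP server wraps this in the MCP protocol):

    import json, pathlib

    # A file-backed "memory": append facts to a JSON file, and do a dumb
    # substring search over them when the model asks for recall.
    MEMORY_FILE = pathlib.Path("memory.json")

    def load_facts() -> list[str]:
        return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

    def store_fact(fact: str) -> None:
        facts = load_facts()
        facts.append(fact)
        MEMORY_FILE.write_text(json.dumps(facts, indent=2))

    def search_facts(query: str) -> list[str]:
        q = query.lower()
        return [f for f in load_facts() if q in f.lower()]

    store_fact("The billing service uses banker's rounding for totals.")
    print(search_facts("rounding"))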

zmmmmm · 5 months ago
I routinely ask the LLM to summarise the high level points as guidance and add them to the AGENTS.md / CONVENTIONS.md etc. It is limited due to context bloat but it's quite effective at getting it to persist important things that need to carry over between sessions.
boredemployee · 5 months ago
> DO NOT WRITE ANY CODE YET.

haha I always do that. I think it's a good way to have some control and understand what it is doing before the regurgitation. I don't like to write code but I love the problem solving/logic/integrations part.

tptacek · 5 months ago
I'm surprised (or maybe just ignorant) that Claude doesn't have an explicit setting for this, because it definitely tends to jump the gun a lot.
closeparen · 5 months ago
>First, skilled engineers using LLMs to code also think and discuss and stare off into space before the source code starts getting laid down

Yes, and the thinking time is a significant part of overall software delivery, which is why accelerating the coding part doesn't dramatically change overall productivity or labor requirements.

zmmmmm · 5 months ago
I don't like the artificial distinction b/w thinking and coding. I think they are intimately interwoven. Which is actually one thing I really like about the LLM because it takes away the pain of iterating on several different approaches to see how they pan out. Often it's only when I see code for something that I know I want to do it a different way. Reducing that iteration time is huge and makes me more likely to actually go for the right design rather than settling for something less good since I don't want to throw out all the "typing" I did.
tptacek · 5 months ago
This logic doesn't even cohere. Thinking is a significant part of software delivery. So is getting actual code to work.
lomase · 5 months ago
I have not profiled how much time I spend just coding at work, but it's not the biggest time sink.

If their job is basically to generate code to close jira tickets I can see the appeal of LLMs.

yggdrasil_ai · 5 months ago
Self-disciplined humans are few and far between; that seems to be the point of most of these anti-AI articles, and I tend to agree with them.
onion2k · 5 months ago
> Most of my initial prompts to agents start with "DO NOT WRITE ANY CODE YET."

Copilot has Ask mode, and GPT-5 Codex has Plan/Chat mode for this specific task. They won't change any files. I've been using Codex for a couple of days and it's very good if you give it plenty of guidance.

AlexCoventry · 5 months ago
> figuring how to get good product out of them

What have you figured out so far, apart from explicit up-front design?

surgical_fire · 5 months ago
> Most of my initial prompts to agents start with "DO NOT WRITE ANY CODE YET."

I really like that on IntelliJ I have to approve all changes, so this prompt is unnecessary.

There's a YOLO mode that just changes shit without approval, that I never use. I wonder if anyone does.

t0mas88 · 5 months ago
I use YOLO mode all the time with Claude Code. Start on a new branch, put it in plan mode (shift + tab twice), get a solid plan broken up into logical steps, then tell it to execute that plan and commit in sensible steps. I run that last part in "YOLO mode" with commit and test commands whitelisted.

This makes it move with much less scattered interactions from me, which allows focus time on other tasks. And the committing parts make it easier for me to review what it did just like I would review a feature branch created by a junior colleague.

If it's done and the tests pass, I'll create a pull request (assigned to myself) from the feature branch. Then review it thoroughly; this really requires discipline. And then let Claude fetch the pull request comments from the GitHub API and fix them. Again as a longer run that allows me to do other things.
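
For reference, pulling those review comments is a single REST call. A sketch using the requests library, with a placeholder owner/repo/PR number and a token read from the environment:

    import os, requests

    # Fetch pull-request review comments via the GitHub REST API so they can
    # be handed to the agent as context. Owner/repo/PR number are placeholders;
    # GITHUB_TOKEN must be set in the environment.
    def fetch_pr_review_comments(owner: str, repo: str, pr_number: int) -> list[str]:
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/comments",
            headers={
                "Accept": "application/vnd.github+json",
                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            },
            timeout=30,
        )
        resp.raise_for_status()
        return [f"{c['path']}: {c['body']}" for c in resp.json()]

    for comment in fetch_pr_review_comments("me", "my-repo", 123):
        print(comment)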

YOLO-mode is helpful for me, because it allows Claude to run for 30 minutes with no oversight which allows me to have a meeting or work on something else. If it requires input or approval every 2 minutes you're not async but essentially spending all your time watching it run.

dvratil · 5 months ago
It's more about having the LLM give you a plan of what it wants to do and how it wants to do it, rather than code. Then you can mold the plan to fit what you really want. Then you ask it to actually start writing code.

Even Claude Code lets you approve each change, but it's already writing code according to a plan that you reviewed and approved.

dpflan · 5 months ago
> Most of my initial prompts to agents start with "DO NOT WRITE ANY CODE YET."

I like asking for a plan of action first: what it intends to do before it actually does any edits/file touching.

james_marks · 5 months ago
I’ve also had success writing documentation ahead of time (kept in a separate repo as docs) and then referencing it at various stages. The doc will have quasi-code examples of various features, and then I can have models stubbed in one pass, failing tests in the next, etc.

But there’s a guiding light that both the LLM and I can reference.

pron · 5 months ago
> LLMs are tools

With tools you know ahead of time that they will do the job you expect them to do with very high probability, or fail (with low probability) in some obvious way. With LLMs, there are few tasks you can trust them to do, and you also don't know their failure mode. They can fail yet report success. They work like neither humans nor tools.

An LLM behaves like a highly buggy compiler that too frequently reports success while emitting incorrect code. Not knowing where the bugs are, the only thing you can try to do is write the program in some equivalent way but with different syntax, hoping you won't trigger a bug. That is not a tool programmers often use. Learning to work with such a compiler is a skill, but it's unclear how transferable or lasting that skill is.

If LLMs advance as significantly and as quickly as some believe they will, it may be better to just wait for the buggy compiler to be fixed (or largely fixed). Presumably, much less skill will be required to achieve the same result that requires more skill today.

topherPedersen · 5 months ago
I agree with this. My boss's boss thinks that AI is going to end up doing 95% of our work for us. From my experience (so far), AI coding follows the 80/20 rule: it can get you 80% of what you want for 20% of the time/effort. And the ratio might be more like: it'll get you 80% of what you want IMMEDIATELY, but it can't get you the last 20%; it needs a human to get it over the finish line.

It's super impressive in my opinion, but if you think it's going to straight up replace humans right now, I think you probably aren't a software developer in the trenches cranking out features.

I'm sort of a Neanderthal when it comes to understanding AI, but I don't think AI in its current form works like a human. Right now, it kind of just cranks out all the code in one fell swoop. A human, on the other hand, works more iteratively. You write a little bit of code, then you run it and look at an iPhone simulator, look at Figma designs, and see if you're getting closer to what you want. AI doesn't appear to know how to iterate, run code, look at designs, and debug things. I imagine in 100 years it will know how to do all that stuff. And who knows, maybe in 1 year it will be able to do that. But as of right now, September 28th, 2025, it can't.

DustinKlent · 4 months ago
It depends on which application you're using. Applications like "RooCode", which is a free extension for VSCode, have several "modes" which allow the user to create an outline of the project using an "architect" LLM, followed by coding the project with a "Coding" LLM, followed by debugging the project with a "Debugging" LLM if there are bugs. There's also an LLM that answers questions about the project. Only the coding and the debugging LLMs do actual coding but you can set it so you have to approve each change it makes.
closeparen · 5 months ago
I agree about the 80/20 part. On the workflow front, there’s been enormous progress from Copilot to Cursor to Claude Code just in the last 2 years. A lot of this is down to the plumbing and I/O bits rather than the mysterious linear algebra bits, so it’s relatively tractable to regular software engineering.
sothatsit · 4 months ago
This tracks with my AI usage as well. I often use AI to get the first 80% of the work done (kinda like a first draft), and then I finish things off from there.
budro · 5 months ago
I think what the article gets at, but doesn't quite deliver on, is similar to this great take from Casey Muratori [1] about how programming with a learning-based mindset means that AI is inherently not useful to you.

I personally find AI code gen most useful for one-off throwaway code where I have zero intent to learn. I imagine this means that the opposite end of the spectrum where learning is maximized is one where the AI doesn't generate any code for me.

I'm sure there are some people for which the "AI-Driven Engineering" approach would be beneficial, but at least for me I find that replacing those AI coding blocks with just writing the code myself is much more enjoyable, and thus more sustainable to actually delivering something at the end.

[1] https://youtu.be/apREl0KmTdQ?t=4751 (relevant section is about 5 minutes long)

pietz · 4 months ago
Interesting take.

I think it boils down to personal preference where some people want to use AI while others don't. I also learn when coding with my AI agent. I learn about using the tool more effectively. As someone who has been coding for 10 years, I find more pleasure in AI assisted coding.

But aside from taste, the product and the business don't care about what I like. It's about shipping quality updates more quickly. And while there might be some tension in saying this, I'm convinced that I can do that much more quickly in AI assisted coding.

dcre · 5 months ago
"learning is maximized is one where the AI doesn't generate any code for me"

Obviously you have to work to learn, but to me this is a bit like saying learning is maximized when you never talk to anyone or ask for help — too strong.

budro · 5 months ago
I don't think it was that strong of an over-generalization. AI doesn't seem to help out in the same way a human would. My teammates will push back and ask for proof of effort (a PR, some typedefs, a diagram, etc.). And sometimes they'll even know how to solve my problem since they have experience with the codebase.

On the other hand you have AI which, out of the box, seems content to go along with anything and will happily write code for me. And I've never seen it have a single insight on the same level as my teammates. All of which is to say, AI doesn't really feel like something you can properly "ask" something. It's especially far away from that when it's just generating code and nothing else.

iambateman · 5 months ago
I spend more time thinking now that I use Claude Code. I write features as 400-600 word descriptions of what I want, something I never would’ve done beforehand.

That thinking does result in some big tradeoffs…I generally get better results faster but I also have a less complete understanding of my code.

But the notion that Claude Code means an experienced developer spends less thinking carefully is simply wrong. It’s possible (even likely) that a lot of people are using agents poorly…but that isn’t necessarily the agent’s fault.

qazxcvbnmlp · 5 months ago
What these articles miss:

1) not all coding is the same. You might be working on a production system. I might need a proof of concept.

2) not everyone's use of the coding agents is the same

3) developer time, especially good developer time, has a cost too

I would like to see an article that frames the tradeoffs of AI assisted coding. Specifically without assigning value judgments (ie goodness or badness). Really hard when your identity is built around writing code.

mehagar · 5 months ago
This article explicitly mentions your first point.
alshival · 5 months ago
Every day I think to myself: "Just fake it like you want to be here for 30 more years and then you can retire."

I have been working in machine-learning for 10 years. I am tired of the computer. I am tired of working. I just want to lay in the grass.

Herring · 5 months ago
I’ve been grassing for 7 months. That gets old too.
_ink_ · 4 months ago
The how-to-become-a-gardener meme comes to mind. But I feel you. It feels like a golden cage currently. It's very comfy, but you waste your life in front of a screen.
metachris · 4 months ago
Can you work less (maybe for some time)? Getting yourself bigger chunks of free time might help. All the best!
djeastm · 5 months ago
It sounds like you need a sabbatical.
jsmith99 · 5 months ago
> lack in-depth knowledge of your business, codebase, or roadmap

So give them some context. I like Cline's memory bank approach https://docs.cline.bot/prompting/cline-memory-bank which includes the architecture, progress, road map etc. Some of my more complex projects use 30k tokens just on this, with the memory bank built from existing docs and stuff I told the model along the way. Too much context can make models worse but overall it's a fair tradeoff - it maintains my coding style and architecture decisions pretty well.
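
The mechanic itself is trivial, something like the sketch below, where the directory name and file layout are just how I happen to describe it (see the linked docs for Cline's actual convention):

    import pathlib

    # Concatenate the "memory bank" docs into a preamble sent at the start of
    # every session. The memory-bank/ layout here is illustrative only.
    def memory_bank_preamble(root: str = "memory-bank") -> str:
        parts = []
        for doc in sorted(pathlib.Path(root).glob("*.md")):  # architecture.md, progress.md, ...
            parts.append(f"## {doc.name}\n{doc.read_text()}")
        return "Project context you must respect:\n\n" + "\n\n".join(parts)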

I also recommend in each session using Plan mode to get to a design you are happy with before generating any code.