cglan · a day ago
I find LLMs so much more exhausting than manual coding. It’s interesting. With modern LLMs, you bump into how much a single human can feasibly keep track of pretty fast.

I assume that until LLMs are 100% better than humans in all cases, as long as I have to be in the loop there will be a pretty hard upper bound on what I can do, and it seems like we’ve roughly hit that limit.

Funny enough, I get this feeling with a lot of modern technology. iPhones, all the modern messaging apps, etc make it much too easy to fragment your attention across a million different things. It’s draining. Much more draining than the old days

afandian · 17 hours ago
Same feeling as pair programming in my experience.

If your consciousness is driving, your brain is internally aligned. You type as you think. You can get flow state, or at least find a way to think around a problem.

If you're working with someone else and having to discuss everything as you go, then it's just a different activity. I've collaboratively written better code this way in the past. But it's slower and more exhausting.

Like pair programming, I hope people realise that there's a place for both, and doing exclusively one or the other full time isn't in everyone's best interests.

fluoridation · 15 hours ago
I've had a similar experience, where I pair-programmed with a coworker for a few days in a row (he understood the language better and I understood the problem better) and we couldn't be in the call for more than an hour at a time. Still, although it was more tiring, I found it quite engaging and enjoyable. I'd much rather bounce ideas back and forth with another person than with an LLM.
superfrank · a day ago
> I find LLMs so much more exhausting than manual coding

I do as well, so I totally know what you're talking about. There's part of me that thinks it will become less exhausting with time and practice.

In high school and college I worked at this Italian place that did dine-in, to-go, and delivery orders. I got hired as a delivery driver and loved it. A couple years in, there was a spell where they had really high turnover, so the owners asked me to be a waiter for a little while. The first couple months I found the small talk and the need to always be "on" absolutely exhausting, but over time I found my routine and it became less exhausting. I definitely loved being a delivery driver far more, but eventually I did hit a point where I didn't feel completely drained after every shift of waiting tables.

I can't help but think coding with LLMs will follow a similar pattern. I don't think I'll ever like it more than writing the code myself, but I have to believe at some point I'll have done it enough that it doesn't feel completely draining.

qq66 · a day ago
I think it's because traditionally, software engineering was a field where you built your own primitives, then composed those, etc., so that the entire flow of data was something you had a mental model for, and when there was a bug, you simply sat down and fixed it.

With the rise of open source, there started to be more black-box composition: you grabbed some big libraries like Django or NumPy and honestly just hoped there weren't any bugs, but if there were, you could plausibly step through the debugger, figure out what was going wrong, and file a bug report.

Now, the LLMs are generating so many orders of magnitude more code than any human could ever have the chance to debug, you're basically just firing this stuff out like a firehose on a house fire, giving it as much control as you can muster but really just trusting the raw power of the thing to get the job done. And, bafflingly, it works pretty well, except in those cases where it doesn't, so you can't stop using the tool but you can't really ever get comfortable with it either.

prmph · 18 hours ago
I think what will eventually help is something I call AI discipline. LLMs are a tool, no more, no less. Just as we now recognize unbridled use of mobile phones to be a mental health issue, causing some to strictly limit their use, I think we will eventually recognize that the best use of LLMs comes from being judicious and intentional.

When I first started dabbling in the use of LLMs for coding, I almost went overboard trying to build all kinds of tools to maximize their use: parallel autonomous worktree-based agents, secure sandboxing for agents to do as they like, etc.

I now find it much more effective to use LLMs in a targeted and minimalist manner. I still write architecturally important and tricky code by hand, using LLMs to do several review passes. When I do write code with LLMs, I almost never allow them to do it without me in the loop, approving every single edit. I limit the number of simultaneous sessions I manage to at most 3 or 4. Sometimes, I take a break of a few days from using LLMs (and often from writing any code at all), and just think and update the specs of the project(s) I'm working on at a high level, to ensure I'm not doing busy-work in the wrong direction.

I don't think I'm missing anything by this approach. If anything, I think I am more productive.

apsurd · a day ago
Thanks for the story. I also spent time as a delivery driver at an italian restaurant. It was a blast in the sense that i look back at that slice of life with pride and becoming. Never got the chance to be a waiter, but definitely they were characters and worked hard for their money. Also the cooking staff. What a hoot.
hombre_fatal · a day ago
I think the upper limit is your ability to decide what to build among infinite possibilities. How should it work, what should it be like to use it, what makes the most sense, etc.

The code part is trivial and, in some ways, a waste of time compared to the time spent making decisions about what to build. Sometimes it's even procrastination to avoid thinking about what to build, like people who polish their game engine (easy) to avoid putting in the work to plan a fun game (hard).

The more clarity you have about what you’re building, the larger the blocks of work you can delegate / outsource.

So I think one overwhelming part of LLMs is that you don’t get the downtime of working on implementation since that’s now trivial; you are stuck doing the hard part of steering and planning. But that’s also a good thing.

SchemaLoad · a day ago
I've found writing the code massively helps your understanding of the problem and what you actually need or want. Most times I go into a task with a certain idea of how it should work, and then reevaluate having started. An LLM will just do what you ask without questioning, leaving you with none of the learnings you would have gained by doing it yourself. The LLM certainly didn't learn or remember anything from it either.
galaxyLogic · a day ago
Right, when you're coding with an LLM it's not you asking the LLM questions, it's the LLM asking you questions: what to build, how exactly it should work, whether it should do this or that under what conditions. Because the LLM does the coding, it's you who has to do more thinking. :-)

And when you make the decisions, it is you who is responsible for them. When you just did the coding yourself, the decisions about the code were left largely to you; nobody much saw them, only how they affected the outcome. Now the LLM is in that role, responsible only for what the code does, not how it does it.

clickety_clack · a day ago
I’d love to see what you’ve built. Can you share?
grey-area · a day ago
Maintenance is the hard part, not writing new code or steering and planning.
ipaddr · a day ago
You can outsource that to another llm
raincole · a day ago
If you care about code quality, of course it is exhausting. It's supposed to be. Now there is more code whose quality you have to assure in the same length of time.
onion2k · 21 hours ago
If you care about code quality you should be steering your LLM towards generating high quality code rather than writing just 'more code' though. What's exhausting is believing you care about high quality code, then assuming the only way to get high quality code from an LLM is to get it to write lots of low quality code that you have to fix yourself.

LLMs will do pretty much exactly what you tell them, and if you don't tell them something they'll make up something based on what they've been trained to do. If you have rules for what good code looks like, and those are a higher bar than 'just what's in the training data' then you need to build a clear context and write an unambiguous prompt that gets you what you want. That's a lot of work once to build a good agent or skill, but then the output will be much better.

Cthulhu_ · 18 hours ago
I suspect it's because you need to keep more things in your head yourself; after a while, coding by hand becomes more mechanical and doesn't cost as much brain power anymore. But when offloading the majority of that coding to an LLM, you're left with the higher-level tasks of software engineering; you don't get the "breaks" while writing code anymore.
lelanthran · 18 hours ago
How often, in your life, did you write code without stopping, in the middle of writing, to go back and review assumptions that turned out to be wrong?

I'm not talking about "oh, this function is deprecated, have to use this other one, but more "this approach is wrong, maybe delete it all and try a different approach"?

Because IME an AI never discards an approach; it just keeps adding band-aids and conditionals to make the wrong approach work.

simonask · 18 hours ago
The tactical process of writing the code is also when you discover the errors in your design.

Like, did we think waterfall suddenly works now just because typing can be automated? No.

gotwaz · a day ago
The theory of bounded rationality applies. Tech tools scale systemic capability limits; the limits of the 3-inch chimp brain don't change. The story writes itself.
rhysfonixone · 19 hours ago
Working with LLMs for coding tasks feels more like juggling, I think. You're fixating on the positions of all of the jobs you're handling simultaneously, and while muscle memory (in this metaphor, the LLMs) keeps each individual item in the air, you're actively managing: considering your next trick/move, getting things back on track when one object drifts from what you'd anticipated, etc. It simultaneously feels markedly more productive and requires carefully divided (and mentally taxing) focus. It's an adjustment, though I do worry there's a real, tangible trade-off at play and I'm losing my edge for instances where I need to do something carefully, meticulously, and manually.
Sparkyte · 15 hours ago
It feels no different than inheriting someone's code base when you start at a company. I hate this feeling. AI removes the developer's attachment to, and first-hand understanding of, the code.
akomtu · a day ago
You used to be a Formula 1 driver. Now you are an instructor for a Formula 1 autopilot. You have to watch it at all times with full attention for it's a fast and reckless driver.
esafak · a day ago
You're being generous to the humans; we're more like Ladas in comparison.
ModernMech · 9 hours ago
Classic coding was the process of incrementally saying "Ah, I'm getting it!" -- as you compile your code and it works better each time, you get a little dopamine hit from "solving" the puzzle. This creates states where time passes with great alacrity as we enter those little dopamine-induced trances we call "flow", which we all experience.

AI is not that, it's a casino. Every time you put words into the prompt you're left with a cortisol spike as you hope the LLM lottery gives you a good answer. You get a little dopamine spike when it does, but it's not the same as when you do it yourself because it's punctuated by anxiety, which is addictive but draining. And I personally have never gotten into a state of LLM-induced "flow", but maybe others have and can explain that experience. But to me there's too much anxiety around the LLM from the randomness of what it produces.

empath75 · 13 hours ago
I go through phases with it where I am extraordinarily productive and times where i can't even bear to open a terminal window.
senectus1 · a day ago
I imagine code reviewing is a very different sort of skill than coding. When you vibe code (assuming you're reading the code that is written for you) you become a code reviewer... I suspect you're learning a new skill.
qudat · a day ago
It’s easier to write code than read it.
pessimizer · a day ago
The way I've tried to deal with it is by forcing the LLM to write code that is clear, well-factored and easy to review i.e. continually forcing it to do the opposite of what it wants to do. I've had good outcomes but they're hard-won.

The result is that I could say it was code that I myself approved of. I can't imagine a time when I wouldn't read all of it; when you just let them go, the results are so awful. If you're letting them go and reviewing at the end, like a post-programming review phase, I don't even know if that's a skill that can be mastered while the LLMs are still this bad. Can you really master Where's Waldo? Everything's a mess, but you're just looking for the part of the mess that has the bug?

I'm not reviewing after I ask it to write some entire thing. I'm getting it to accomplish a minimal function, then layering features on top. If I don't understand where something is happening, or I see it's happening in too many places, I have to read the code in order to tell it how to refactor the code. I might have to write stubs in order to show it what I want to happen. The reading happens as the programming is happening.

Schlagbohrer · 16 hours ago
Reminds me of the best saying I ever got from my CS professor. She would make us first write out our code and answer the question, "What will the output be?" before we were allowed to run it.

"If you don't know what you want your code to do, the computer sure as heck won't know either." I keep this with me today. Before I run my code for the first time or turn on my hardware for the first time, I ask myself, "What _exactly_ am I expecting to see here?" and if I can't answer that it makes me take a closer and more adversarial look at my own output before running it.

swat535 · 14 hours ago
Isn't this the whole idea of TDD? Write your assertions, then write the code to fulfill them.
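A minimal red/green sketch of that loop (the `slugify` function and its spec are invented for illustration):

```python
import re

# Red: write the assertions first, before the implementation exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already--slugged  ") == "already-slugged"

# Green: write just enough code to fulfill the assertions.
def slugify(text: str) -> str:
    """Lowercase, then collapse runs of non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

test_slugify()  # passes once the implementation catches up
```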
Tenemo · 15 hours ago
I'm not 100% convinced. While iterating fast on an early prototype, what's wrong with legitimately not knowing what e.g. the data structure will end up looking like? Just let it run, check debugger/stdout/localhost page and adjust: "Oh, right, the entries are missing canonical IDs, but at the same time there are already all the comments in them, forgot they would be there – neat". What's wrong with that? Especially at uni, when working on low-stakes problems.
ssivark · 14 hours ago
> what's wrong with legitimately not knowing what e.g. the data structure will end up looking like?

But that's not what the above comment said.

> Just let it run, check debugger/stdout/localhost page and adjust: "Oh, right, the entries are missing canonical IDs, but at the same time there are already all the comments in them, forgot they would be there

So you did have an expectation that the entries should have some canonical IDs, and anticipated/desired a certain specific behavior of the system.

Which is basically the meaning of "what will the output be?" when simplified for programming novices at university.

salawat · 12 hours ago
This is a restatement of the old wisdom that "to safely use a tool, you must be 10% smarter than it is." Or stated differently, you must be "ahead" of the tool (capable of accurately modeling and predicting the outcome), not "behind" it (only reacting). TDD is kind of an outgrowth of this. I've lived by that wisdom, but admit that for me there is a lot of fun in the act of verifying hypotheses in the course of development, even in the "test case gap" when you're writing the lines of code that don't make a difference in terms of making a long-term test case go from red to green, or doing other exploratory work where the totality of behavior is not well charted. Those times are the best. "Moodily scowling at the computer screen again" has been a status update from chilluns on what I'm doing more times than I like to admit.
rednafi · a day ago
I have always enjoyed the feeling of aporia during coding. Learning to embrace the confusion and the eventual frustration is part of the job. So I don’t mind running in a loop alongside an agent.

But I absolutely loathe reviewing these generated PRs - more so when I know the submitter themselves has barely looked at the code. Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day. Reviewing this junk has become exhausting.

I don’t want to read your code if you haven’t bothered to read it yourself. My stance is: reviewing this junk is far more exhausting than writing it would have been. Coding is actually the fun part.

bmurphy1976 · a day ago
> Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day.

That's a big red flag if I ever saw one. Corporate should be empowering the engineering team to use AI tooling to improve their own process organically. Is this true or an exaggeration? If it's true, I'd start looking for a more balanced position at a more disciplined org.

rednafi · a day ago
True at DoorDash, Amazon, and Salesforce - speaking from experience.
chewbacha · a day ago
Mandates are becoming normal. Most devs don’t seem to want them, but they want to keep their jobs.
civvv · 14 hours ago
10k LoC per day? Wow, my condolences to you.

On a different note: something I just discovered is that if you google "my condolences", the AI summary will thank you for the kindness before defining its meaning, fun.

hnthrow0287345 · 15 hours ago
>Reviewing this junk has become exhausting.

Nitpick it to death. Ask the submitter questions on how everything works. Even if it looks good, flip a coin and reject it anyway. Drag that review time out. You don't want unlucky PRs going through, after all.

Corporate is not going to wake up and do the sensible thing on its own.

rednafi · 15 hours ago
Ha ha I wish. Then both corporate and your coworkers hate you.

Also, there is no point in asking questions when you know that they just yoloed it and won't be able to answer anything.

We have collectively lost our common sense and reasonable people are doing unreasonable things because there's an immense amount of pressure from the top.

anonzzzies · a day ago
I always wonder where HNers worked or work; we do ERP and troubleshooting on legacy systems for medium to large corps. PRs by humans were always pretty random and barely looked at as well, even though a human wrote them (copy/pasted from SO and changed somewhat); if you ask what the code does, they cannot tell you. This is not an exception; this is the norm as far as I can see outside HN. People who talk a lot, don't understand anything, and write code that is almost alien. LLMs, for us, are a huge step up. There is a 40-level-deep nested if, with a loop to prevent it from failing on a missing case, in a critical ERP system at Shell (the company); LLMs would not do that. It is a nightmare, but it makes us a lot of money to keep things like that running.
sarchertech · a day ago
I currently work at one of the biggest tech companies. I’ve been doing this for over 20 years, and I’ve worked at scrappy startups, unicorns, and medium size companies.

I’ve certainly seen my share of what I call slot-driven development, where a developer just throws things at the wall until something mostly works. And plenty of cut-and-paste development.

But it’s far from the majority. It’s usually the same few developers at a company doing it, while the people who know what they’re doing furiously work to keep things from falling apart.

If the majority of devs were doing this nothing would work. My worry is that AI lets the bad devs produce this kind of work on a massive scale that overwhelms the good devs ability to fight back or to even comprehend the system.

nightpool · a day ago
I would hope that most people who are technically competent enough to be on HN are technically competent enough to quit orgs with coding standards that bad. Or they're masochists who have taken on the challenge of working to fix them.
shiandow · a day ago
The one thing I don't quite get is how running a loop alongside an agent is any different from reviewing those PRs.
bsjshshsb · a day ago
Use AI to review.
dyauspitr · 21 hours ago
I do “TDD” LLM coding and only review the tests. That way if the tests pass I ship it. It hasn’t bitten me in the ass yet.
xyzal · 20 hours ago
10k, really? Are you supposed to understand all that code? This is crazy and a one way street to burnout.
rednafi · 16 hours ago
Yep and now we are encouraged to use AI to review the code as well. But if shit hits the fan then you are held responsible.
jumploops · a day ago
A lot of these resonate with me, particularly the mental fatigue. It feels like normal coding forced me to slow my brain down, whereas now my mind is the limit.

For context, I started an experiment to rebuild a previous project entirely with LLMs back in June '25 ("fully vibecoded" - not even reading the source).

After iterating and finally settling on a design/plan/debug loop that works relatively well, I'm now experiencing an old problem like new: doing too much!

As a junior engineer, it's common to underestimate the scope of some task and to pile on extra features/edge cases/etc. until you miss your deadline. A valuable lesson any new programmer/software engineer necessarily goes through.

With "agentic engineering," it's like I'm right back at square one. Code is so cheap/fast to write, I find myself doing it the "right way" from the get go, adding more features even though I know I shouldn't, and ballooning projects until they reach a state of never launching.

I feel like a kid again (:

hackit2 · a day ago
I've spent more time correcting LLMs or agentic systems than I would have spent just learning the domain and doing the coding myself. I mainly leave the LLM to the boring work of writing tedious, repetitive code.

If I give it anything that I'm not an expert on, it will make a mess of things.

jumploops · 21 hours ago
Yeah the old adage "what you put in is what you get out" is highly relevant here.

Admittedly I'm knowledgeable in most of the domains I use LLMs for, but even so, my prompts are much longer now than they used to be.

LLMs are token happy, especially Claude, so if you give it a short 1-2 sentence prompt, your results will be wildly variable.

I now spend a lot of mental energy on my prompting, and resist the urge to use less-than-professional language.

Instead of "build me an app to track fitness" it's more like:

> "We're building a companion app for novice barbell users, roughly inspired by the book 'Starting Strength.' The app should be entirely local, with no back-end. We're focusing on iOS, and want to use SwiftUI. Users should [..] Given this high-level description, let's draft a high-level design doc, including implementation decisions, open questions, etc. Before writing any code, we'll review and iterate on this spec."

I've found success in this method for building apps/tools in languages I'm not proficient in (Rust, Swift, etc.).

apsurd · a day ago
What do you mean by doing it the "right way" from the get-go, if it leads to more features, ballooning projects, and never launching?

Is that why it's in quotes because it's the opposite of the right way?

If there's one thing I learned in a decade-plus of professional programming, it's that we can't predict the future. That's it, that simple. YAGNI. (Also: model the data, but I'm trying to make a point here.)

We got into coding because we like to code; we invent reasons and justifications to code more, ship more, all the world's problems can be solved if only developers shipped more code.

Nirvana is reached when they that love and care about the shipping of the code know also that it's not the shipping of the code that matters.

jumploops · a day ago
Yeah exactly, "right way" is in quotes because there is no right way.

The most important thing is shipping/getting feedback, everything else is theatre at best, or a project-killing distraction at worst.

As a concrete example, I wanted to update my personal website to show some of these fully-vibecoded projects off. That seemed too simple, so instead I created a Rotten Tomatoes-inspired web app where I could list the projects. Cool, should be an afternoon or two.

A few yak shaves later, and I'm adding automatic repo import[0] from Github...

Totally unnecessary, because I don't actually expect anyone to use the site other than me!

[0]https://github.com/jumploops/slop.haus/pull/9

olejorgenb · a day ago
I find working more asynchronously with the agents helps. I've disabled the in-your-face agent-is-done/needs-input notifications [1]. I work across a few different tasks at my own pace. It works quite well, and when/if I find a rhythm to it, it's absolutely less intense than normal programming.

You might think that the "constant" task switching is draining, but I don't switch that frequently. Often I keep the main focus on one task and use the waiting time to draft some related ideas/thoughts/next prompt, or browse through the code for light review/understanding. It also helps to have one big/complex task and a few simpler things going concurrently. And since the number of details required to keep "loaded" in your head per task is smaller, switching has less cost, I think. You can also "reload" much quicker by simply chatting with the agent for a minute or two if some detail has faded.

I think a key thing is to NOT chase after keeping the agents running at max efficiency. It's OK to let them be idle while you finish up what you're doing. (Perhaps bad for KV-cache efficiency, though - I'm not sure how long they keep the cache.)

(And obviously you should run the agent in a sandbox to limit how many approvals you need to consider)

[1] I use the urgent-window hint to get a subtle indication of which workspace contains an agent ready for input.

EDIT: disclaimer - I'm relatively new to using them, and have so far not used them for super complex tasks.

mlaretallack · a day ago
Yes, I follow the same sort of pattern. It took a while to convince myself that it was OK to leave the agent waiting, but it helps with the human context switching. I also try to stagger the agents, so one may be planning and designing while another is coding; that way I can spend more time on the planning and designing ones and leave the coding one to get on with it.
nixpulvis · 21 hours ago
That's actually one of the best parts. You can trust that some of the context you'd otherwise have to keep loaded in your head is held by the LLM, making task switching feel less risky and often improving your ability to work on needed and/or related changes elsewhere.
skybrian · a day ago
Yes, I briefly felt like I needed to keep the agents busy but got over it. The point of having multiple things going on is so you have another task to work on.
193572 · a day ago
It looks like Stockholm syndrome or a traditional abusive relationship 100 years ago where the woman tries to figure out how to best prompt her husband to do something.

You know you can leave abusive relationships. Ditch the clanker and free your mind.

razorbeamz · a day ago
LLMs do not actually make anything better for anyone. You have to constantly correct them. It's like having a junior coder under your wing that never learns from its mistakes. I can't imagine anyone actually feeling productive using one to work.
bmurphy1976 · a day ago
I don't know what to think about comments like this. So many of them come from accounts that are days or at most weeks old. I don't know if this is astroturfing, or you really are just a new account and this is your experience.

As somebody who has been coding for just shy of 40 years and has gone through the actual pain of learning to run a high-level and productive dev team, your experience does not match mine. Even great devs will forget some of the basics and make mistakes, and I wish every junior (hell, even seniors) were as effective as the LLMs are turning out to be. Put the LLM in the hands of a seasoned engineer who also has the skills to manage projects and mentor junior devs, and you have a powerful accelerator. I'm seeing the outcome of that every day on my team. The velocity is up AND the quality is up.

qudat · a day ago
> The velocity is up AND the quality is up.

This is not my experience on a team of experienced SWEs working on a product worth 100m/year.

Agents are a great search engine for a codebase and really nice for debugging but anytime we have it write feature code it makes too many mistakes. We end up spending more time tuning the process than it takes to just write the code AND you are trading human context with agent context that gets wiped.

razorbeamz · a day ago
Who would I possibly be astroturfing for? The entire industry is all-in on LLMs.
nixpulvis · 21 hours ago
Agreed.

It's clear to me as a more seasoned engineer that I can prompt the LLM to do what I want (more or less) and it will catch generally small errors in my approach before I spend time trying them. I don't often feel like I ended up in a different place than I would have on my own. I just ended up there faster, making fewer concessions along the way.

I do worry I'll become lazy and spoiled. And then lose access to the LLM and feel crippled. That's concerning. I also worry that others aren't reading the patches the AI generates like I am before opening PRs, which is also concerning.

halayli · a day ago
This is a very reasonable comment. IMO it's a fallacy to take into consideration the age of an account, especially when it's sharing subjective experience.
nixpulvis · 21 hours ago
A junior engineer who might spend a few hours trying to understand why you added a mutex, reading blogs on common patterns, might come back with a question about why you locked it twice in one thread in some case you didn't consider. Just because someone lacks the experience and knowledge you have, doesn't mean they cannot learn and be helpful. Sometimes those with the most to learn are the most willing to put the hours in trying.
solumunus · 19 hours ago
You're just bad at using them. It's a skill like anything else. I also suspect bad coders become even worse with LLMs, while the opposite is true for good ones.
jatora · a day ago
You need to learn to use the tool better, clearly, if you have such an unhinged take as this.
Sirental · a day ago
No, to be fair, I do see what he's saying. I see a major difference between the more expensive models and the cheaper ones. The cheaper (usually default) ones make mistakes all the damn time. You can be as clear as day with them and they simply don't have the context window or specs to make accurate, well-reasoned decisions, and it is a bit like having a terrible junior working alongside you, fresh out of university.
voxl · a day ago
It's not unhinged at all, it's a lack of imagination on both of your parts.
razorbeamz · a day ago
The only people who use LLMs "as a tool" are those who are incapable of doing it without using it at all.
jrecursive · 18 hours ago
The most honest, logical, and practical take I've seen on this. People consistently underestimate the skill and effort it takes to write precisely and think critically both about their problem, and their processes. The closer you are to knowing what to ask for in the way knowledgeable people ask for it with respect to the process you are using to complete work, the closer the output will be to what you want.
