The tradeoff of higher velocity for less enjoyment may feel less welcome when it becomes the new baseline and the expectation of employers / customers. The excitement of getting a day's work done in an hour* (for example) is likely to fade once the expectation is to produce eight such days' worth of output per day.
I suspect it doesn't matter how we feel about it, mind you. If it's going to happen, it will, whether we enjoy the gains first or not.
* setting aside whether this is currently possible, or whether we're actually trading away more quality than we realise.
> The excitement of getting a day's work done in an hour* (for example) is likely to fade once the expectation is to produce eight such days' worth of output per day.
That dumb attitude (which I understand you’re criticising) of “more more more” always reminds me of Lenny from the Simpsons moving fast through the yellow light, with nowhere to go.
> "If it's going to happen it will" - That is quite the defeatist attitude. Society becoming shittier isn’t inevitable
You're right in general, but I don't think that'll save you/us from OP's statement. These are simple economic incentives at play. If AI-coding is even marginally more economically efficient (i.e. more for less), the "old way" will be swept aside at breathtaking pace (as we're seeing). The "from my cold dead hands" hand-coding crowd may be right, but they're a transitional historical footnote. Coding was always blue-collar white-collar work. No one outside of coders will weep for what was lost.
If you're a salaried or hourly employee, you aren't paid for your output, you are paid for your time, with minimum expectations of productivity.
If you complete all your work in an hour... you still owe seven hours based on the expectations of your employment agreement, in order to earn your salary and benefits.
If you'd rather work in an output-based capacity, you'll want to move to running your own contracting business in a fixed-bid type capacity.
> moving fast through the yellow light, with nowhere to go.
My company has been preparing for this for a while now, I guess, as my backlog clearly has years' worth of work in it and positions of people who have left the org remain unfilled. My colleagues at other companies are in a similar situation. Considering round after round of layoffs, if I got ahead a little bit and found that I had nothing else to do, I'd be worried for my job.
> Society becoming shittier isn’t inevitable
Yes, I agree, but the deck is usually stacked against the worker, especially in America. I doubt this will be the issue that promotes any sort of collectivism.
> That dumb attitude (which I understand you’re criticising) of “more more more” always reminds me of Lenny from the Simpsons moving fast through the yellow light, with nowhere to go.
Realizing that attitude in myself at times has given me so much more peace of mind. Just in general, not everything needs to be as fast and efficient as possible.
Not to mention the times where in the end I spend a lot of time and energy in trying to be faster only to end up with this xkcd: https://xkcd.com/1319/
As far as LLM use goes, I don't need moar velocity! So I don't try to min-max my agentic workflow just to squeeze out X more lines of code.
In fact, I don't really work with agentic workflows to begin with. I more or less still use LLMs as tools external to the process, using them as interactive rubber duckies: deciphering spaghetti code, doing a sanity check on code I wrote (while being very critical of the suggestions they come up with), getting a quick jump start on stuff I haven't used in a while (how do I get started with X or Y again?), that sort of stuff.
Using LLMs in the IDE and other agentic use is something I have worked with. But to me it falls under what I call "lazy use", where you are further and further removed from the creation of code, the reasoning behind it, etc. I know it is a highly popular approach with many people on HN. But in my experience, it is an approach that makes the skills of experienced developers atrophy and makes it less likely that junior developers pick those skills up, leaving both overly reliant on tools that have been shown to be less than reliable when the output isn't properly reviewed.
I get the firm feeling that the velocity crowd works in environments where they are judged by the number of tickets closed. Basically "feature complete, tests green, merged, move on!". In that context, it isn't really "important" that the tests that are green were also refactored by the thing itself, just that they are green. It is a symptom of a corporate environment where the focus is on these "productivity" metrics. From that perspective I can fully see the appeal of LLM-heavy workflows, as they most certainly will have an impact on metrics like "tickets closed" or "lines of code written".
> That is quite the defeatist attitude. Society becoming shittier isn’t inevitable, though inaction and giving up certainly helps that along.
This feels like kicking someone when they’re down! Given the current state of corporate and political America, it doesn’t look likely there will be any pressure for anything but enshittification to me. Telling people at the coal face to stay cheerful seems unlikely to help. What mechanism do you see for not giving up actually changing the experience of people in 10-ish years' time?
> The trick is not telling anyone you spent an hour to do 7 hours of work.
I wish that the hype crowd would do that. It would make for a much more enjoyable and sane experience on platforms like this. It's extremely difficult to have actual conversations about these subjects when there are crowds of fans involved who don't want to hear anything negative.
Yes, I also realize there are people completely on the other side as well. But to be honest, I can see why they are annoyed by the fan crowd.
Until your coworkers who've never heard of work-life balance start bragging about it, and volunteering to spend 8 hours to do 56 hours of work, or maybe spending 11 hours to impress the boss.
The most challenging thing I'm finding about working with LLM-based tools is the reduction in enjoyment. I'm in this business because I love it, and I'm worried about that going forward.
My daughter, who switched from engineering to software because she enjoyed coding, expressed that LLMs are taking away everything she found enjoyable about the job and reducing her to QA. She hates it, and if the trend continues I won’t be surprised if she switches industries.
Exactly. Maybe "prompt engineering" really is a skill, but the reward for getting better at it is just pumping out more features at a low skill grade. What's exciting about that? Unless I want to spend all my time building minimum viable products.
Prompt engineering is just writing acceptance criteria; it's moving from someone who writes code to someone who writes higher level feature descriptions. Or user stories, if you will.
Thing is though, many people don't know how to do that (user stories / acceptance criteria) properly, and it's been up to software developers to poke holes and fill in the blanks that the writer didn't think about.
For the longest time, IT workers were 'protected' from Marx's alienation of labor by the rarity of your skill, but now it's coming for you/us, too. Claude Code is to programmers what textile machines were to textile workers.
>In the capitalist mode of production, the generation of products (goods and services) is accomplished with an endless sequence of discrete, repetitive motions that offer the worker little psychological satisfaction for "a job well done." By means of commodification, the labour power of the worker is reduced to wages (an exchange value); the psychological estrangement (Entfremdung) of the worker results from the unmediated relation between his productive labour and the wages paid to him for the labour.
Less often discussed is Marx's view of social alienation in this context: i.e., workers used to take pride in who they are based on their occupation. 'I am the best blacksmith in town.'
Automation destroyed that for workers, and it'll come for you/us, too.
>The tradeoff of higher velocity for less enjoyment may feel less welcome when it becomes the new baseline and the expectation of employers / customers.
This is precisely the question that scares me now. It is always so satisfying when a revolution occurs to hold hands and hug each other in the streets and shout "death to the old order". But what happens the next morning? Will we capture this monumental gain for labor or will we cede it to capital? My money is on the latter. Why wouldn't it be? Did they let us go home early when the punch-card looms wove months' worth of hand work in a day? No, they made us work twice as hard for half the pay.
Short-term, automated tech debt creation will yield gains.
Long term the craftsperson writing excellent code will win. It is now easier than ever to write excellent code, for those that are able to choose their pace.
Given it's 2025 and companies saddled with tech debt continue to prioritize speed of delivery over quality, I doubt the craftsperson will win.
If anything we'll see disposable systems (or parts), and the job of an SE will become even more like a plumber's, connecting prebuilt business logic to prebuilt systems libraries. When one of those fails, have AI whip up a brand new one instead of troubleshooting the existing one(s). After all, for business leaders it's the output that matters, not the code.
For 20+ years business leaders have been eager to shed the high overhead of developers via any means necessary while ignoring their most expensive employees' input. Anyone remember Dilbert? It was funny as a kid, and is now tragic in its timeless accuracy a generation later.
> The tradeoff of higher velocity for less enjoyment may feel less welcome when it becomes the new baseline and the expectation of employers / customers. The excitement of getting a day's work done in an hour* (for example) is likely to fade once the expectation is to produce eight such days' worth of output per day.
That's why we should be against it but hey, we can provide more value to shareholders!
> The excitement of getting a day's work done in an hour* (for example) is likely to fade once the expectation is to produce eight such days' worth of output per day.
It's not really about excitement or enjoyment of work.
It's the fear of the __8x output__ being considered __8x productivity__.
The increase in the `output/productivity` factor has various negative implications. I won't say everything out loud, but the wise can read between the lines.
> The tradeoff of higher velocity for less enjoyment may feel less welcome when it becomes the new baseline and the expectation of employers / customers
This is what happens with big technological advancements. Technology that enables productivity doesn’t free up people’s time; it only sets higher expectations of getting more work done in a day.
If there are 8 days' worth of work per day to be done (which I doubt), why wouldn’t you want to have it done ASAP? You’re going to have to do it eventually, so why not just do it now? Doesn’t make sense. You act like they’re just making up new work for you to do when previously there wouldn’t have been any.
Yes, work will expand to fill all your available hours due to misaligned incentives between those who do the work (the SWE in this example) and those who decide the quantity, timeline, and cost of the work.
If the SWE can finish his work faster, 8x faster in this case, then the project manager will push to complete backlogs 8x faster too. If there are no backlogs, the sales team and clients will demand new features 8x faster. If no new features are needed, finance will apply pressure until costs are 8x lower. If there are no legal, moral, competitive, or physical constraints, the process continues until either there’s only a single dev working all his available hours, or working fewer hours for considerably less salary.
> The tradeoff of higher velocity for less enjoyment
I'm enjoying exactly what the author describes, so it's different strokes for different folks.
I always found the "code monkey" aspect of software development utterly boring and tedious, and have spent a lot of my career automating that away with various kinds of code generators, DSLs, and so on.
Using an LLM as a general-purpose automated code monkey is a near-ideal scenario for me, and I enjoy the work more. It's especially useful for languages like Go or Java where repetitive boilerplate is endemic.
I also find it helps with procrastination, because I can just ask an LLM to start something for me and get over the initial hump.
> whether we're actually trading away more quality than we realise.
Even if the developer is keeping the quality of the LLM-generated code high (by constant close reading of the output, rejecting low-quality work, and steering with prompts), does this mean the project as a whole is improving? I have my doubts! I'm also skeptical that this developer has increased their velocity as much as they believe; IMHO this has long been a difficult thing to measure.
Overall, is this even a good thing? With this increase in output, I suspect we'll need to apply more pressure to people requesting features to ensure those requests are high quality. When each feature takes half the time to implement, I bet it's easy to agree to more features without spending as much time evaluating their worth.
For me, on small personal projects, I can get a project in about 4 hours to a point that, before the new AI tools, would’ve taken about 40. At work, there is a huge difference due to the complexity of the code base and services. Using agents to code for me there has 100% been a loop of iterating on something so often that I would’ve been better off with a more hands-on approach, rather than essentially just reviewing PRs written by AI.
In the previous scenario, programmers were still writing the code themselves. The compilers, if they were any good, generated deterministic code.
In our current scenario, programmers are merely describing what they think the code should do, and another program takes their description and then stochastically generates code based on it.
To some degree you're correct -- LLMs can be viewed as the kind of "sufficiently advanced" compiler we've always dreamed of. Our dreams didn't include the compiler lying to us though, so we have not achieved utopia.
LLMs are more like DNA transcription, where some percentage of the time it just injects a random mutation into the transcript, either causing an evolutionary advantage, or a terminal disease.
This whole AI industry right now is trying to figure out how to always get the good mutation, and I don't think it's something that can be controlled that way. It will probably turn out that on a long enough timescale, left unattended, LLMs are guaranteed to give your codebase cancer.
It's not. And people are realizing that, which is causing them to bring back and reinvent aspects of software engineering for AI coding to make it more tolerable. People once questioned whether AI would replace all programming languages with natural-language interfaces; it now looks like programming languages will be reinvented in the context of AI to make their natural-language interface more tool-like.
It's a change in mindset. AI is like having your own junior developer. If you've never had direct reports before where you have to give them detailed instruction and validate their code then you're right, it might end up more exhausting than just doing the work yourself.
In my experience, listening to music engages the creative part of your brain and severely limits what you can do, but this is not readily apparent.
If I listen to music, I can spend an hour CODING YEAH! and be all smug and satisfied, until I turn the music off and discover that everything I've coded is unnecessary and there is an easier way to achieve the same goal. I just didn't see it, because the creative part of my brain was busy listening to music.
From the post, it sounds like the author discovered the same thing: if you use AI to perform menial tasks (like coding), all that is left is thinking creatively, and you can't do that while listening to music.
There is no “creative part of the brain”, and even if there were, listening to music would have nothing to do with it.
You may be experiencing getting to a different understanding of something when you switch context. Similar to when you are stuck: it may be better to go for a walk than to keep your head on top of a piece of paper or screen. I have had many of my breakthroughs while taking a shit in the toilet in the middle of working. Others experience similar with showers and whatever.
AFAIK most people listen to music during certain tasks because it helps with focusing. Especially when working in a busy office, it really helps me to listen to certain kinds of predictable music to keep me from getting distracted. It creates a sort of entrainment that helps with attention.
Some people find music itself distracting; I myself find some kinds of music distracting, or music during certain types of tasks. Then it obviously does not fulfil its purpose.
I describe it slightly differently. Similar to what the author described, I'll first plan and solve the problem in my head, lay out a broad action plan, and then put on music to implement it.
But for me, the music serves something akin to the clock in a microcontroller (or even a CPU): it provides a flow that my brain syncs to. I'm not even paying attention to the music itself, but it stops me from getting distracted and keeps me focused on the task at hand.
I just think it's distracting. I get caught up listening to the lyrics and kind of mentally singing along, stuff like that which disrupts my thought and distracts from what I actually want to be thinking about.
I think this is individual. I have the same problem in social settings - if I'm having a conversation and a song I like is playing in the background, I sometimes stop listening to the conversation and focus on the music instead, unintentionally.
My solution is to listen to music without vocals when I need to focus. I've had phases where I listen to classical music, electronic stuff, and lately I've been using an app I found called brain.fm, which I think just plays AI-generated lo-fi or whatever, and there's some binaural beats thing going on as well that's supposed to enhance focus, creativity, etc. I like it, but sometimes I go back to regular music just because I miss listening to something I actually like.
Same here on all fronts about distractions. When people talk about listening to music while working/studying, I can't tell whether: (1) they mean music with no lyrics, (2) they are unserious and okay with their work being constantly interrupted, or (3) they can resist thinking about the lyrics.
Some work may allow for seamless pivoting between work vs. enjoyable distraction, e.g., a clerk, but I often hear about people listening to music in other contexts.
> discover that everything I've coded is unnecessary and there is an easier way to achieve the same goal
In my experience, there is no good shortcut to this realization. Doing it wrong first is part of the journey. You might as well enjoy the necessary mistakes along the way. The third time’s the charm!
I'm sorry but that's nonsense. Listening to music is not a creative process, it does not at all take away creativity from somewhere else.
I've never, ever, ever once in 40 years of coding listened to music while coding and later found the code "unnecessary" or anything of the sort.
I engage in many creative pursuits outside of coding, always while listening to music, and I can confidently say that music has never once interfered in the process or limited the result in any way.
I don't think that's down to music per se, but a more generalized thing. Software developers love being in a flow state, some of them pursue it all the time (...guilty) and get frustrated when their job changes (e.g. moving towards management) so they spend less time in that flow state.
But also, this can create waste, in that people write the Best Code Ever in their flow state (while listening music or not), but... it wasn't necessary in the first place and the time spent was a waste. This can waste anything from an hour to six months of work (honestly, I had a "CTO" once who led a team of three dozen people or so who actually went into his batcave at home for six months to write a C# framework of sorts that the whole company should use. He then quit and became self-employed so the company had to re-hire him to make sense of the framework he wrote. I'm sure he enjoyed it very much though.)
This was actually studied at some point (at least 15-20 years ago, as I remember learning about this in college): they gave the same programming problem to a bunch of developers and had some listen to music while they did the task while others worked in silence. There was no real difference in how long the task took between the two groups... but the people who listened to music were much, much less likely to realize that the entire task was a red herring and the code reduced down to `return 0;`.
From the masterpiece The Tragedy of Man, describing a future where everything is done in the name of efficiency:
THE GREYBEARD
You left your workroom in great disarray.
MICHELANGELO
Because I had to fabricate the chair-legs
To the quality as poor as it can be.
I appeal’d for long, let me modificate,
Let me engrave some ornaments on it.
They did not permit. I wanted as a chance
The chair-back to change but all was in vain.
I was very close to be a madman
And I left the pains and my workroom, too. (stands back)
THE GREYBEARD
You get house arrest for this disorder
And will not enjoy this nice and warm day.

https://www.youtube.com/watch?v=DrA8Pi6nol8
I’d probably drop GenAI before I dropped the music that allows me to focus. Also, at this stage of my career, I mainly code for fun, and blasting music across the house is part of it.
I use Zoom rather than Teams, but have no problems playing background music with Spotify. Just have to make sure that “share computer audio” is not enabled when sharing your screen. Also, when I was using the mic of my Bluetooth headphones, any music played would be mono and lower quality due to Bluetooth bandwidth. Since moving to a dedicated mic on my desk, the Bluetooth headphones are output-only and back to good-quality stereo (macOS and Bose QC35).
> writing a blurb that contains the same mental model
Good nugget. Effective prompting, aside from context curation, is about providing the LLM with an approximation of your world model and theory, not just a local task description. This includes all your unstated assumptions, interaction between system and world, open questions, edge cases, intents, best practices, and so on. Basically distill the shape of the problem from all possible perspectives, so there's an all-domain robustness to the understanding of what you want. A simple stream of thoughts in xml tags that you type out in a quasi-delirium over 2 minutes can be sufficient. I find this especially important with gpt-5, which is good at following instructions to the point of pedantry. Without it, the model can tunnel vision on a particular part of the task request.
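To make that concrete, here is a minimal sketch of the kind of tagged brain-dump I mean, written as a Python string you might feed to whatever client you use; every tag name and domain detail below is invented for illustration, not taken from any real project:

    # A hedged sketch of one "stream of thoughts" prompt; the tag names and
    # the billing-domain details are all made up for illustration.
    task_prompt = """
    <theory>
    Billing service. Invoices are immutable once issued; corrections are new
    documents. Postgres is the source of truth; the cache may lag by 60s and
    that's acceptable.
    </theory>
    <assumptions>
    Single currency. Partial refunds exist; negative invoice totals do not.
    </assumptions>
    <non_goals>
    Don't touch the tax calculation, even where it looks wrong.
    </non_goals>
    <task>
    Add an endpoint that voids an invoice and emits an audit event.
    </task>
    """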
It's not parody. I'm trying to provide the LLM with what's missing, which is a theory of how the system fits into the world: https://pages.cs.wisc.edu/~remzi/Naur.pdf
Without this it defaults to being ignorant about the trade-offs that you care about, or the relevant assumptions you're making which you think are obvious but really aren't.
The "simple stream" aspect is that each task I give to the LLM is narrowly scoped, and I don't want to put all aspects of the relevant theory that pertains just to that one narrow task into a more formal centralized doc. It's better off as an ephemeral part of the prompt that I can delete after the task is done. But I also do have more formal docs that describe the shared parts of the theory that every prompt will need access to, which is fed in as part of the normal context.
Whenever I need a quick data pipeline to convert some file into another format, do some batch transformation, or transform some interface description into another syntax, or things like that, which would normally require me to craft a grep, awk, tr, etc. pipeline, I can normally simply paste a sample of the data and, with a human-language description, get what I need. If it’s not working well, I can break the job up into smaller steps.
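What comes back is typically a small throwaway script along these lines (a hypothetical example, reshaping a CSV export into JSON; the column names are made up):

    # Hypothetical example of the kind of throwaway script I mean: reshape
    # a CSV export into the JSON layout some other tool expects.
    import csv
    import json
    import sys

    with open(sys.argv[1], newline="") as f:
        rows = list(csv.DictReader(f))

    out = [{"id": r["sku"], "price_cents": round(float(r["price"]) * 100)}
           for r in rows]
    json.dump(out, sys.stdout, indent=2)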
In my experience, it seems the people who have bad results have been trying to get the AI to do the reasoning. I feel like if I do the reasoning, I can offload menial tasks to the AI, and little annoying things that would take one or two hours start to take a few minutes. That very quickly adds up to some real savings.
The ones who know what they want to do, how it should be done, but can't really be arsed to read the man pages or API docs of all the tools required.
These people can craft a prompt (prompt engineering :P) for the LLM that gets good results pretty much directly.
LLMs are garbage in garbage out. Sometimes the statistical average is enough, sometimes you need to give it more details to use the available tools correctly.
Like the fact that `fd` has the `--exec` and `--exec-batch` parameters, so there's no need to use xargs or pipes with it.
Every kind of project is faster with AI, because it writes the code faster.
Then you have to QA it for ages to discover the bugs it wrote, but the initial perception of speed never leaves you.
I think I'm overall slower with AI, but I could be faster if I had it write simple functions that I could review one by one, and have the AI compose them the way I wanted. Unfortunately, I'm too lazy to be faster.
Pretty much what somebody else said: AI takes over simple tasks, the "fluff" around the business logic, error handling, stuff like that, so I can focus on doing the harder stuff at the core.
I'm slowed down (but perhaps sped up overall due to lower rewrites/maintenance costs) on important bits because the space of possibilities/capabilities is expanded, and I'm choosing to make use of that for some load bearing pieces that need to be durable and high quality (along the metrics that I care about). It takes extra time to search that space properly rather than accept the first thing that compiles and passes tests. So arguably equal or even lower velocity, but definitely improved results compared to what I used to be capable of, and I'm making that trade-off consciously for certain bits. However that's the current state of affairs, who knows what it'll look like in 1-2 years.
Where I work there are like 2x as many front end developers as there is need for. They spend an insane amount of time doing meetings, they require approval of 2 different people for every simple CSS change.
Their job is to do meetings, and occasionally add a couple of items to the HTML, which has been mostly unchanged for the past 10 years, save for changing the CSS and updating the js framework they use.
I just had it do a "set up the company-styled auth, following a few wikis and a lot of trial and error until you get to the right thing".
In the olden days, I'd imagine getting that right would take about a week and a half, and it was something everyone hated about spinning up a new service.
With the LLM, I gave it a feedback loop: the ability to do an initial sign-in, integration-test running steps with log reading on the client side, and a deploy and log-reading mechanism for the server side.
I was going to write an overseer-y script for another LLM to trigger the trial-and-error script, but I ended up just doing that myself. What I skipped was needing to run any one of the steps; instead I got nicely parsed errors, so I could go look for wikis on what parts of the auth process I was missing and feed those wiki links and such to the trial-and-error bot. I skipped all the log reading/parsing needed to get to the next actionable chunk, and instead I got to hang around in the sun for a bit while the LLM churned on test calls and edits.
I'm now on a cleanup step to turn the working code into nicely written code that I'd actually want committed, but getting to the working-code stage took very little of my own effort; only the problem solving and learning about how the auth works.
I’m building a moderately complex system with FastAPI + PG + Prefect executing stuff on Cloud Run, and so long as I invest in getting the architecture and specs right, it’s really a dream how much of the heavy lifting and grunt work I can leave to Claude Code. And thank god I don’t have to manage Alembic by myself.
Greenfield development of small web apps. I’m familiar enough with everything that I can get something up and running on my own, but I don’t do it regularly, so I need to read a lot of docs to be up to date. I can describe the basic design and requirements of an app and have something like Claude Code spit out a prototype in a couple of hours.
I set up a model in DBT that has 100 columns. I need to generate a schema for it (old tools could do this) with appropriate tests and likely data types (old tools struggled with this). AI is really good at this sort of thing.
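For a sense of the shape of the task, here's a rough sketch of doing it by hand (the model name, columns, and type heuristics are invented; `not_null` and `unique` are standard dbt generic tests):

    # Rough sketch of schema generation from sampled string values.
    # Model name, columns, and heuristics are invented for illustration.
    import yaml  # pip install pyyaml

    sampled = {
        "order_id": ["1001", "1002", "1003"],
        "amount": ["19.99", "5.00", "0.99"],
        "is_refunded": ["true", "false", "false"],
    }

    def guess_type(vals):
        if all(v in ("true", "false") for v in vals):
            return "boolean"
        try:
            for v in vals:
                float(v)
            return "numeric"
        except ValueError:
            return "text"

    schema = {"version": 2, "models": [{
        "name": "orders_wide",
        "columns": [{
            "name": col,
            "data_type": guess_type(vals),
            "tests": ["not_null", "unique"] if col.endswith("_id") else ["not_null"],
        } for col, vals in sampled.items()],
    }]}
    print(yaml.safe_dump(schema, sort_keys=False))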
There's a local website that sells actual physical Blu-rays. Their webshite is a horror show of Javascript.
I had Claude Code build me a Playwright+Python-based scraper that goes through their movie section and stores the data locally to an sqlite database + a web UI for me to watchlist specific movies + add price ranges to be alerted when a price changes.
Took me maybe a total of 30 minutes of "active" time (4-5 hours real-time, I was doing other shit at the same time) to get it to a point where I can actually use it.
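The skeleton of that kind of tool is genuinely small, which is part of why it's such a good fit; a minimal sketch (the URL and CSS selectors are placeholders, not the actual site, and the watchlist/alert UI is left out):

    # Minimal sketch of such a scraper; URL and selectors are placeholders.
    import sqlite3
    from playwright.sync_api import sync_playwright

    db = sqlite3.connect("movies.db")
    db.execute("CREATE TABLE IF NOT EXISTS movies (title TEXT PRIMARY KEY, price REAL)")

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://movie-shop.example/blu-rays")   # placeholder URL
        for card in page.locator(".product-card").all():   # placeholder selectors
            title = card.locator(".title").inner_text()
            price = float(card.locator(".price").inner_text().strip(" €"))
            db.execute("INSERT OR REPLACE INTO movies VALUES (?, ?)", (title, price))
        browser.close()
    db.commit()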
Basically small utilities for limited release (personal, team, company-internal) is what AI coding excels at.
Like grabbing results from a survey tool, adding them to a google sheet, summarising the data to another tab with formulas. Maybe calling an LLM for sentiment analysis on the free text fields.
Half a day max from zero to Good Enough. I didn't even have to open the API docs.
Is it perfect? Of course not. But the previous state was one person spending half a day for _each_ survey doing that manually. Now the automation runs in a minute or so, depending on whether Google Sheets API is having a day or not =)
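The whole thing boils down to a few lines of glue; a hedged sketch using gspread (the survey endpoint, field names, and sheet title are invented, and a Google service account is assumed to already be configured):

    # Sketch of the survey-to-sheet glue. Endpoint, fields, and sheet title
    # are invented; gspread reads service-account credentials from disk.
    import gspread   # pip install gspread
    import requests

    results = requests.get("https://survey.example/api/results", timeout=30).json()
    rows = [[r["respondent"], r["score"], r["free_text"]] for r in results]

    ws = gspread.service_account().open("Survey results").sheet1
    ws.append_rows(rows)   # one batched API call for all rows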
> That dumb attitude (which I understand you’re criticising) of “more more more” always reminds me of Lenny from the Simpsons moving fast through the yellow light, with nowhere to go.
https://www.youtube.com/watch?v=QR10t-B9nYY
> I suspect it doesn't matter how we feel about it, mind you. If it's going to happen, it will, whether we enjoy the gains first or not.
That is quite the defeatist attitude. Society becoming shittier isn’t inevitable, though inaction and giving up certainly helps that along.
If the structures and systems that are in place only facilitate life getting more difficult in some way, then it probably will, unless it doesn't.
Housing getting nearly unownable is a symptom of that. Climate change is another.
> Society becoming shittier isn’t inevitable
Correct. But it becoming shittier is the strong default, with forces that you constantly have to fight against.
And the reason is very simple: Someone profits from it being shittier and they have a lot of patience and resources.
The hypothetical that we're 8x as productive but the work isn't as fun isn't "society becoming shittier".
> The trick is not telling anyone you spent an hour to do 7 hours of work.
That's stupid and detrimental to your mental health.
You do it in an hour, spend maybe 1-2 hours to make it even better and prettier and then relax. Do all that menial shit you've got lined up anyway.
> whether we're actually trading away more quality than we realise.
This is completely up to the people using it.
While letting the AI write some code can be cool and fascinating, I really can't understand how:
- write the prompt (and you need to be precise, and think through and express carefully what you have in mind)
- check/try the code
- repeat
is better than writing the code by myself. AI coding like this feels like a nightmare to me and it's 100x more exhausting.
It's a perfectly cromulent approach and skillset - but it's a wildly different one.
Of course you need to check their work, but also the better your initial project plan and specifications are, the better the result.
For stuff with deterministic outputs it's easy to verify without reading every single line of code.
90% of what the average (or median) coder does isn't in any way novel or innovative. It's just API Glue in one form or another.
The AI knows the patterns and can replicate the same endpoints and simple queries easily.
Now you have more time to focus on the 10% that isn't just rehashing the same CRUD pattern.