I can't find the article anymore, but I remember reading, almost 10 years ago, an article in The Economist saying that the result of automation was not the removal of jobs but more work and fewer junior positions.
The example they gave was that search engines plus digital documents cut junior lawyer headcount by a lot. Before digital documents, a fairly common junior lawyer task was: "We have an upcoming court case. Go to the (physical) archive and find past cases relevant to the current one. Here are the things to check for." That task would be assigned to a team of juniors (3-10 people). Now one junior with a laptop suffices. As a result, the firm can also manage more cases.
Dwarkesh had a good interview with Zuck the other week. And in it, Zuck had an interesting example (that I'm going to butcher):
FB has long wanted to have a call center for its ~3.5B users. But that call center would instantly be the largest in history and cost ~$15B/yr to run, which is cost-ineffective in the extreme. But with FB's internal AIs, they're starting to think a call center may be feasible. Most of the calls are going to be 'I forgot my password' and 'it's broken' anyway, so having a robot guide people through the FAQs in 50+ languages is perfectly fine for ~90% (Zuck's number) of the calls. Then the harder calls can actually be routed to a human.
So, to me, this is a great example of how the interaction of new tech and labor is a fractal, not a hierarchy: with each new technology that reaches your specific labor sector, the labor fractalizes. Zuck would never have built a call center before, denying that work to many people. But this new tech allows for a call center that looks a lot like the old one, just with only the hard problems. It's smaller, yes, and it looks the same and yet is slightly different (hence a fractal).
Look, I'm not going to deny that tech is disruptive. But what I am arguing is that tech creates new jobs (most of the time); it's just that these new jobs tend to deal with much harder problems. We're pushing the boundaries here, and that boundary gets more fractal-y; it's a more niche and harder working environment for your brain. The issue, of course, is that, like with a grad student, you have to trust that the person working at the boundary is actually doing work and not just blowing smoke. That issue of trust, I think, is the key one to 'solve'. Cal Newport talks a lot about this now: how these knowledge-worker tasks really don't produce much for a long time, and then there are these bursts of genius. It's a tough one, and not an intellectual problem but an emotional one.
I worked in automated customer support, and I agree with you. By default, we automated 40% of all requests. It gets harder after that, not because the problems in the next 40% are any different, but because they arrive wrapped in unnecessary complexity.
A customer who wants to track the status of their order will tell you a story about how their niece is visiting from Vermont and they wanted to surprise her for her 16th birthday. It's hard because her parents don't get along as they used to after the divorce, but they are hoping that this will at the very least put a smile on her face.
The AI will correctly classify the message as order tracking and provide all the tracking info and the timeline. But because of the quick response, the customer will write back to say they'd rather talk to a human, and ask for a phone number they can call.
The remaining 20% can't be resolved by human or robot.
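The first 40% is basically triage. Roughly this pattern, as a toy sketch (the intents and canned replies here are made up, and a real system would use a trained classifier or an LLM rather than keyword matching):

    EASY_INTENTS = {
        "password": "You can reset your password at example.com/reset.",
        "track": "Your order shipped; here is the tracking link and timeline.",
    }

    def route(message: str) -> str:
        # Answer the easy intents automatically, escalate everything else.
        text = message.lower()
        if "human" in text:
            return "escalate: queued for a human agent"
        for keyword, reply in EASY_INTENTS.items():
            if keyword in text:
                return f"bot: {reply}"
        return "escalate: queued for a human agent"

    print(route("How do I track my order? My niece is visiting..."))  # handled by the bot
    print(route("I'd rather talk to a human"))                        # routed to a person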
> Most of the calls are going to be 'I forgot my password' and 'it's broken' anyways. So having a robot guide people along the FAQs in the 50+ languages is perfectly fine for ~90% (Zuck's number here) of the calls.
No it isn't. Attempts to do this are why I mash 0 repeatedly and chant "talk to an agent" after being in a phone tree for longer than a minute.
Zuck also said that AI would start replacing senior software engineers at Meta in 2025. His job isn't to state objective facts but to hype up his company's products and share price.
But there's also consolidation happening: Not every branch that is initially explored is still meaningful a few years later.
(At least that's what I got from reading old mathematical texts: People really delved deeply into some topics that are nowadays just subsumed by more convenient - or maybe trendy - machinery)
This is like a mini parallel of the industrial revolution.
A lot of places start with a large, unskilled workforce and get into e.g. the textile industry (which brings a better RoI than farming). Then automation arrives and leaves a lot of people jobless (still unskilled), while there are new jobs in maintaining the machinery, etc.
Isn't this literally just "productivity growth"? You (and, I think, the article) are describing the ability to do more work with the same number of people, which is the economic definition of productivity.
I don't know about lawyering, but with engineering research, I can now ask ChatGPT's Deep Research to do a literature review on any topic. This used to take time and effort.
If you don't know about lawyering, then how do you know if the literature review is any good? It's the same thing as a non-programmer asking an LLM to vibe code an application. They have no idea about the quality.
Definitely. When computers came out, jobs increased. When the Internet became widely used, jobs increased. AI is simply another tool.
The sad part is: do you think we'll see this productivity gain as an opportunity to stop the culture of overworking? I don't think so. I think people will expect more from others because of AI.
If AI makes employees twice as efficient, do you think companies will decrease working hours or cut their employment in half? I don't think so. It's human nature to want more. If 2 is good, 4 is surely better.
So instead of reducing employment, companies will keep the same number of employees because that's already factored into their budget. Now they get more output to better compete with their competitors. To reduce staff would be to be at a disadvantage.
So why do we hear stories about people being let go? AI is currently a scapegoat for companies that were operating inefficiently and over-hired. It was already going to happen. AI just gave some of these larger tech companies a really good excuse. They weren't exactly going to admit they made a mistake and over-hired, now were they? Nope. AI was the perfect excuse.
Like all things, it's cyclical. Hiring will go up again. The AI boom will bust. On to the next thing. One thing is certain though: we all now have a fancy new calculator.
Well, I think productivity gains should correlate with stock price growth.
If we want stock prices to increase exponentially, sales must also grow exponentially, which means we need to become exponentially more productive.
We can stop that — or we can stop tying company profitability to stock prices, which is already happening to some extent.
And when we talk about 'greedy shareholders,' remember that often means pension funds - essentially our savings and our hope for a decent retirement, assuming productivity continues to grow exponentially.
I think this is a feature rather than a bug for AI advocates who ultimately dream of a day when they can do away with pesky things like labor forces and be a CEO of their own rentier-capitalist empire without hassle. Employees are just a vestigial obstacle holding down the brilliant visionaries at the top.
I think this idea generally doesn't bear out, at least as described in the book. For the most part, the kinds of jobs that are potentially actually mostly bullshit are generally not seen that way by those who do them, while those he would characterise as just "shit jobs" will generally have a high percentage of perceived bullshit among those who work them.
I am amenable to the idea that there is a lot of wasted pointless work, but not to the idea that there's some kayfabe arrangement where everyone involved thinks it's pointless but pretends otherwise, I think generally most people around such work have actually convinced themselves it's important.
I feel like people in the comments are misunderstanding the findings in the article. It’s not that people save time with AI and then turn that time to novel tasks; it’s that perceived savings from using AI are nullified by new work which is created by the usage of AI: verification of outputs, prompt crafting, cheat detection, debugging, whatever.
This seems observationally true in the tech industry, where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones, and meanwhile the quality of the consumer software that people actually use is in a nosedive.
The other night I was too tired to code so I decided to try vibe coding a test framework for the C/C++ API I help maintain. I've tried this a couple times so far with poor results but I wanted to try again. I used Claude 3.5 IIRC.
The AI was surprisingly good at filling in some holes in my specification. It generated a ton of valid C++ code that actually compiled (except it omitted the necessary #includes). I built and ran it and... the output was completely wrong.
OK, great. Now I have a few hundred lines of C++ I need to read through and completely understand to see why it's incorrect.
I don't think it will be a complete waste of time because the exercise spurred my thinking and showed me some interesting ways to solve the problem, but as far as saving me a bunch of time, no. In fact it may actually cost me more time trying to figure out what it's doing.
With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
> With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
As one of those folks: no, it's pretty bad in that world as well. For menial crap it's a great time saver, but I'd never in a million years do the "vibe coding" thing, especially not with user-facing things, and especially not for tests. I don't mind it as a rubber duck though.
I think the problem is that there are two groups of users: the technical ones like us, and then the managers and C-levels etc. They see it spit out a hundred lines of code in a second, and as far as they know (and care) it looks good, not realizing that someone now has to spend their time reviewing those 100 lines, plus carrying the burden of maintaining them into the future. All they see is a way to get the pesky, expensive devs replaced, or at least a chance to squeeze more out of them. The system is so flashy and impressive-looking that you can't even blame them for falling for the marketing and hype; after all, that's what all the AIs are being sold as: omnipotent and omniscient worker replacers.
Watching my non-technical CEO "build" things with AI was enlightening. He prompts it for something fairly simple, like a TODO List application. What it spits out works for the most part, but the only real "testing" he does is clicking on things once or twice and he's done and satisfied, now convinced that AI can solve literally everything you throw at it.
However if he were testing the solution as a proper dev would, he'd see that the state updates break after a certain amount of clicks, and that the list was glitching out sometimes, and that adding things breaks on scroll and overflows the viewport, and so on. These are all real examples of an "app" he made by vibe coding, and after playing around with it myself for all of 3 minutes I noticed all these issues and more in his app.
> With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
As someone working on routine problems in mainstream languages where training data is abundant, LLMs are not even great for that. Sure, they can output a bunch of code really quickly that on the surface appears correct, but on closer inspection it often uses nonexistent APIs, the logic is subtly wrong or convoluted for no reason, it does things you didn't tell it to do or ignores things you did, it has security issues and other difficult to spot bugs, and so on.
The experience is pretty much what you summed up. I've also used Claude 3.5 the most, though all other SOTA models have the same issues.
From there, you can go into the loop of copy/pasting errors to the LLM or describing the issues you did see in the hopes that subsequent iterations will fix them, but this often results in more and different issues, and it's usually a complete waste of time.
You can also go in and fix the issues yourself, but if you're working with an unfamiliar API in an unfamiliar domain, then you still have to do the traditional task of reading the documentation and web searching, which defeats the purpose of using an LLM to begin with.
To be clear: I don't think LLMs are a useless technology. I've found them helpful at debugging specific issues, and implementing small and specific functionality (i.e. as a glorified autocomplete). But any attempts of implementing large chunks of functionality, having them follow specifications, etc., have resulted in much more time and effort spent on my part than if I had done the work the traditional way.
The idea of "vibe coding" seems completely unrealistic to me. I suspect that the developers doing this are not even checking whether the code does what they want it to, let alone reviewing it for any issues. As long as it compiles, they consider it a success. That is an insane way of working that will lead to a flood of buggy and incomplete applications, increasing end users' dissatisfaction with our industry, and possibly causing larger effects not unlike the video game crash of 1983 or the dot-com bubble.
> With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
I agree. AI is great for stuff that's hard to figure out but easy to verify.
For example, I wanted to know how to lay out something a certain way in SwiftUI and asked Gemini. I copied what it suggested, ran it and the layout was correct. I would have spent a lot more time searching and reading stuff compared to this.
I wish I had a running counter for the number of times I've asked ChatGPT to "generate me python code that will output x data similar to xxd".
It's a snippet I've written a few times before to debug data streams, but it's always annoying to get the alignment just right.
I feel like that is the sweet spot for AI: generating actual snippets of routine code that have no bearing on security or core functionality, letting you keep thinking about the problem at hand while it does that 10 minutes of busy work.
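For reference, a minimal sketch of the kind of snippet I mean (offset, hex column, printable ASCII, roughly xxd-style; the 16-byte rows are just my usual preference):

    import sys

    def hexdump(data: bytes, width: int = 16) -> None:
        # Print offset, hex bytes, and printable ASCII per row, roughly like xxd.
        for offset in range(0, len(data), width):
            row = data[offset:offset + width]
            hex_part = " ".join(f"{b:02x}" for b in row)
            ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in row)
            print(f"{offset:08x}: {hex_part:<{width * 3}} {ascii_part}")

    if __name__ == "__main__":
        hexdump(sys.stdin.buffer.read())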
Yeah, I similarly have not had great success for creating entire systems / applications for exactly this reason. I have had no success at all in not needing to go in and understand what it wrote, and when I do that, I find it largely needs to be rewritten. But I have a lot more success when I'm integrating it into the work I'm doing.
I do know people who seem to be having more success with the "vibecoding" workflow on the front end though.
> OK, great. Now I have a few hundred lines of C++ I need to read through and completely understand to see why it's incorrect.
For a time, we can justify this kind of extra work by imagining that it is an upfront investment. I think that is what a lot of people are doing right now. It remains to be seen when AI-assisted labor is still a net positive after we stop giving it special grace as something that will pay off a lot later if we spend a lot of time on it now.
> OK, great. Now I have a few hundred lines of C++ I need to read through and completely understand to see why it's incorrect.
I think it's often better to just skip this and delete the code. The cool thing about those agents is that the cost of trying this out is extremely cheap, so you don't have to overthink it and if it looks incorrect, just revert it and try something else.
I've been experimenting with Junie for the past few days and have had a very positive experience. It wrote a bunch of tests that I'd been postponing for quite some time, and it was mostly correct from a single-sentence prompt. Sometimes it does something incorrect, but I usually just revert it and move on, trying something else later. There's definitely a sweet spot of tasks it does well, and you have to experiment a bit to find it.
Personally, having worked in professional enterprise software for ~7 years now I've come to a pretty hard conclusion.
Most software should not exist.
That's not even meant in the tasteful "It's a mess" way. From a purely money-making efficiency standpoint, upwards of 90% of the code I've written in this time has not meaningfully contributed back to the enterprise, and I've tried really hard to get that number lower. Mind you, this is professional software. If you consider the vibe coder guys, I'd estimate that number MUCH higher.
It just feels like the whole way we've fit computing into the world is misaligned. We spend days building UIs that don't help the people we serve and that break at the first change to the process, and because of the support burden of that UI, we never get to actually automate anything.
I still think computers are very useful to humanity, but we have forgotten how to use them.
And not only that, but most >>changes<< to software shouldn't happen, especially if they're user-facing. Half my dread in visiting support websites is that they've completely rearranged yet again, and the same thing I've wanted five times requires a fifth 30-minute session figuring out where they put it.
> "Personally, having worked in professional enterprise software for ~7 years now I've come to a pretty hard conclusion.
Most software should not exist.
That's not even meant in the tasteful "Its a mess" way. From a purely money making efficiency standpoint upwards of 90% of the code I've written in this time has not meaningfully contributed back to the enterprise, and I've tried really hard to get that number lower. Mind you, this is professional software. If you consider the vibe coder guys, I'll estimate that number MUCH higher."
I've worked on countless projects at this point that seemed to serve no purpose, even at the outset, and had no plan to even project cost savings/profit, except, at best some hand-waving approximation.
Even worse, many companies are completely uninterested in even conceptualizing operating costs for a given solution. They get sold on some cloud thing cause "OpEx" or whatever, and then spend 100s of hours a month troubleshooting intricate convoluted architectures that accomplish nothing more than a simple relational database and web server would.
Sure, the cloud bill is a lower number, but if your staff is burning hours every week fighting `npm audit` issues, and digging through CloudWatch for errors between 13 Lambda functions, what did you "save"?
I've even worked on more than one project that existed specifically to remove manual processes (think printing and inspecting documents) to "save time." Sure, now shop floor/assembly workers inspect fewer papers manually, but you need a whole new layer of technical staff to troubleshoot crap constantly.
Oh, and the company(ies) don't have in-house staff to maintain the thing, and have no interest in actually hiring, so they write huge checks to a consulting company to "maintain" the stuff at a cost often orders of magnitude higher than hiring staff who would actually own the project(s). And these people have a conflict of interest to maximize profit, so they want to keep "fixing" things, etc.
I think a lot of this is the outgrowth of the 2010s where every company was going to be a "tech company" and cargo-culted processes without understanding the purpose or rationale, and lacking competent people to properly scope and deliver solutions that work, are on time and under budget, and tangibly deliver value.
> where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones
> slap together temperature converters and insecure twitter clones
Because those "best programmers" don't want to be making temperature converters or twitter clones (unless they're paid mega bucks). This enables the low-paid "worst" programmers to do those jobs for peanuts.
I think the software quality nosedive significantly predates generative AI.
I think it's too early to say whether AI is exacerbating the problem (though I'm sympathetic to the view that it is) or improving it, or just maintaining the status quo.
>it’s that perceived savings from using AI are nullified by new work which is created by the usage of AI:
I mean, isn't that obvious looking at economic output and growth? The Shopify CEO recently published a memo in which he claimed that high achievers saw "100x growth". Odd that this isn't visible in the Shopify market cap. Did they fire 99% of their engineers instead? Maybe the memo was AI-written too.
Are there any 5 man software companies that do the work of 50? I haven't seen them. I wonder how long this can go on with the real world macro data so divorced from what people have talked themselves into.
The state of consumer software is already so bad, and LLMs are trained on a good chunk of it, so their output couldn't possibly produce worse software, right? /s
Modern AI tools are amazing, but they're amazing like spell check was amazing when it came out. Do they help with menial tasks? Yes, but they create a new baseline that everyone has, and the bar just moves. There's scant evidence that we're all going to sit on a beach while AI runs the company anytime soon.
There’s little sign of any AI company managing to build something that doesn’t just turn into a new baseline commodity. Most of these AI products are also horribly unprofitable, which is another reality that will need to be faced sooner rather than later.
It's got me wondering: does any of my hard work actually matter? Or is it all just pointless busy-work invented since the industrial revolution to create jobs for everyone, when in reality we would be fine if like 5% of society worked while the rest slacked off? I don't think we'd have as many videogames, but then again, we would have time to play, which I would argue is more valuable than the games.
To paraphrase Lee Iacocca:
We must stop and ask ourselves: how many videogames do we really need?
> It's got me wondering: does any of my hard work actually matter?
I recently retired from 40 years in software-based R&D and have been wondering the same thing. Wasn't it true that 95% of my life's work was thrown away after a single demo or a disappointingly short period of use?
And I think the answer is yes, but this is just the cost of working in an information economy. Ideas are explored and adopted only until the next idea replaces them or the surrounding business landscape shifts yet again. Unless your job is building products like houses or hammers (which evolve very slowly or are too expensive to replace), the cost of doing business today is a short lifetime for any product; products are replaced in ever-faster cycles, useful only until they're no longer competitive. And this evanescent lifetime is especially the case for virtual products like software.
The essence of software is to prototype an idea for info processing that has utility only until the needs of business change. Prototypes famously don't last, and increasingly today, they no longer live long enough even to work out the bugs before they're replaced with yet another idea and its prototype that serves a new or evolved mission.
Will AI help with this? Only if it speeds up the cycle time or reduces development cost, and both of those have a theoretical minimum: the time needed to design and review any software product has an irreducible cost. If a human must use the software to implement a business idea, then humans must be used to validate the app's utility, and that takes time that can't be reduced beyond some point (just as there's an inescapable need to test new drugs on animals, since biology is a black box too complex to be simulated even by AI). Until AI can simulate the user, feedback from the user of new or revised software will remain the choke point on the rate at which new business ideas can be prototyped in software.
Yes... basically in life, you have to find the definition of "to matter" that you can strongly believe in. Otherwise everything feels aimless, the very life itself.
The rest of what you ponder in your comment is the same. And I'd like to add that baselines have shifted a lot over the years of civilization. I like to think about one specific example: painkillers. Painkillers were not used during medical procedures in a widespread manner until some 150 years ago, maybe even later. Now it's much less horrible to participate in those procedures, for everyone involved really, and the outcomes are better for this factor alone, because the patient moves around less while anesthetized.
But even this is up for debate. All in all, it really boils down to what the individual feels like it's a worthy life. Philosophy is not done yet.
Mine doesn't, and I am fine with that; I never needed such validation. I derive fulfillment from my personal life and the achievements and passions there, more than enough. Through that lens, office politics, the promotion rat race, and what people do in them just make me smile. I see otherwise smart folks ruin (or miss out on) their actual lives and families in pursuit of excellence in a very narrow direction, often underappreciated by employers and not rewarded adequately. I mean, at a certain point you either grok the game and optimize, or you don't.
The work brings modest wealth over time, allows me and my family to live in a long-term safe place (Switzerland), and builds a small reserve for bad times (or an inheritance, early retirement, etc.; this is Europe, no need to save up for kids' education or potentially massive healthcare bills). I don't need more from life.
Unless you propose slavery, how are you going to choose the 5%?
Who in their right mind would work when 95 out of 100 people around them are slacking off all day? Unless you pay them really well. So well that they prefer to work than to slack off. But then the slackers will want nicer things to do in their free time that only the workers can afford. And then you'd end up at the start.
> It's got me wondering: does any of my hard work actually matter?
It mattered enough for someone to pay you money to do it, and that money put food on the table and clothes on your body and a roof over your head and allowed you to contribute to larger society through paying taxes.
Is it the same as discovering that E = mc^2, or Jonas Salk's contributions? No, but it's not nothing either.
> Don't think we'd have as many videogames, but then again, we would have time to play, which I would argue is more valuable than games.
Would we have fewer video games? If all our basic needs were met and we had a lot of free time, more people might come together to create games together for free.
I mean, look at how much free content (games, stories, videos, etc) is created now, when people have to spend more than half their waking hours working for a living. If people had more free time, some of them would want to make video games, and if they weren’t constrained by having to make money, they would be open source, which would make it even easier for someone else to make their own game based on the work.
Nope. The current system may be misdirecting 95% of labor, but until we have sufficiently modeled all of nature to provide perfect health and brought world peace, there is work to do.
I've been thinking similarly. Bertrand Russell once said there are two kinds of work: one, moving objects on or close to the surface of the Earth; two, telling other people to do so. Most of us work in buildings that don't actually manufacture or process anything. Instead, we process information that describes manufacturing and transport, or we create information for people to consume when they are not working (entertainment). Only a small fraction of human beings actually produce things necessary for physiological survival. The rest of us are, at best, helping them optimize that process, or at worst, leeching off of them in the name of "management" of their work.
Most work is redundant and unnecessary. Take, for example, the classic gas-station-on-every-corner situation. This turf war between gas providers (or, by proxy, the franchisees they've licensed for the location) exists not because three or four gas stations are operating at maximum capacity. No, this is 3 or 4 fishermen with lines in the river, made possible solely because the inputs (real estate, gas, labor, merchandise) are cheap enough that a gas station need never run close to capacity and still returns a profit for the fisherman.
Who benefits from the situation? You or I, who don't have to make a U-turn to get gas at this intersection, perhaps, but that is not much benefit compared to the opportunity cost of squandering 3 prime corner lots on the same single use. The clerk at the gas station, for having a job available? Perhaps, although their labor in aggregate might have been employed in other, less redundant uses that would benefit our society more than selling smokes and putting $20 on pump 4 at 3am. The real beneficiary of this entire arrangement is the fisherman, the owner or shareholder who ultimately skims from all the pots, thanks to having what is effectively a modern version of the plantation sharecropper: spending all their money in the company store and on company housing, with the fig leaf of being able to choose from any number of minimum-wage jobs, spend their wages in any number of national chain stores, and rent from any number of increasingly investor-owned properties. Quite literally all owned by the same shareholders, when you consider how people diversify their investments across these sectors.
It's why executive types are all hyped about AI. Being able to code 2x more will mean they get 2x more things (roughly speaking), but the workers aren't going to get 2x the compensation.
Indeed. And AI does its work without those productivity-hindering things like need for recreation and sleep, ethical treatment, and a myriad of others. It's a new resource to exploit, and that makes everyone excited who is building on some resource.
AI can’t do our jobs today, but we’re only 2.5 years from the release of chatGPT. The performance of these models might plateau today, but we simply don’t know. If they continue to improve at the current rate for 3-5 more years, it’s hard for me to see how human input would be useful at all in engineering.
The cost, in money or time, for getting certain types of work done decreases. People ramp up demand to fill the gap, "full utilization" of the workers.
It's a very old claim that the next technology will lead to a utopia where we don't have to work, or where we work drastically less. Time and again we prove that we don't actually want that.
My hypothesis (I'm sure it's not novel or unique) is that very few people know what to do with idle hands. We tend to keep stress levels high as a distraction, and tend to freak out in various ways if we find ourselves with low stress and nothing that "needs" to be done.
> It's a very old claim that the next technology will lead to a utopia where we don't have to work, or where we work drastically less. Time and again we prove that we don't actually want that.
It actually does, but due to the skewed distribution of the rewards from that tech (automation), it doesn't work out for the common folks.
Let's take a simple example: you, me, and 8 other HN users work in Bezos' warehouse, each working 8h/day. Suddenly new tech arrives that can do the same tasks we do, and each unit of that machine can alone do the work of 2-4 of us. If Bezos buys 4 units and sets each to work at x2 capacity, then 8 of us now have 8h/day x 5 days x 4 weeks = 160h/month of leisure.
Problem is, those 8 of us still need money to survive (food, rent, utilities, healthcare, etc.). So, according to the tech utopians, the 8 of us can now use those 160h of free time to focus on more important and rewarding work. (See, in the context of all the AI peddlers, how using AI will free us to do more important and rewarding work!) But to survive, my "rewarding work" ends up being gig work or something of the same effort or more hours.
So in practice, the owner controlling the automation gets more free time to attend interviews and political/social events, while the people automated away fall downward and have to work harder just to survive. Of course, I hope our over-enthusiastic brethren who are paying LLM providers for the privilege of training their own replacements figure out the equation soon, and don't get sold on "free time to do more meaningful work" the same way Bezos' warehouse gave some of us some leisure while the automation was coming online and needed a failsafe for a while. :)
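To make that arithmetic concrete, a quick sketch (all numbers are the made-up ones from the example above):

    # 10 warehouse workers; Bezos buys 4 machines, each run at 2x a worker's output.
    workers = 10
    machines, capacity = 4, 2

    displaced = machines * capacity          # 8 of the 10 workers displaced
    hours_per_month = 8 * 5 * 4              # 8h/day x 5 days x 4 weeks = 160
    print(f"{displaced} workers displaced, {hours_per_month}h/month of 'leisure' each")

The open question is who captures those 160 hours: the owner of the machines, or the people who used to work them.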
In summary, the Luddites had a point. It doesn't mean they were ultimately correct, just that their concerns were valid.
Regardless of anyone's thoughts on genAI in particular, it's important for us as a society to consider what our economic model looks like in a future where technology breaks the assumption of near-universal employment. Maybe that's UBI. Maybe it's a system of universally accessible educational stipends and pumping public funds into venture capital. Maybe it's something else entirely.
I think a lot of people would be fine being idle if they had a guaranteed standard of living. When I was unemployed for a while, I was pretty happy in general but stressed about money running out. Without the money issue the last thing I would want to do is to sell my time to a soulless corporation. I have enough interests to keep me busy. Work just sucks up time I would love to spend on better things.
Food production is a classic case where, once productivity is high enough, you simply get fewer farmers.
We are currently a long way from that kind of change, as current AI tools suck by comparison with the literal 1,000x increases in productivity farming saw. So, in well under 100 years, programming could become extremely niche.
We are seeing an interesting limit in the food case though.
We increased production and needed fewer farmers, but we now have so few farmers that most people have very little idea of what food really is, where it comes from, or what it takes to run our food system.
Higher productivity is good to a point, but past that point the system risks becoming too fragile.
> Food production is a classic case where, once productivity is high enough, you simply get fewer farmers.
Yes, but.
There are more jobs in other fields adjacent to food production, particularly in distribution. The middle class didn't exist back then, and retail workers are now a large percentage of workers in most parts of the world.
I don’t think it’s the consequence of most individuals’ preferences. I think it’s just the result of disproportionate political influence held by the wealthy, who are heavily incentivized to maximize working hours. Since employers mostly have that incentive, and since the political system doesn’t explicitly forbid it, there aren’t a ton of good options for workers seeking shorter hours.
> there aren’t a ton of good options for workers seeking shorter hours.
But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
For there to be a "better option" (as in, you're paid money for not working more hours) what are you actually being paid to do?
For all the thoughts that come to mind when I say "work 20 hours a week instead of 40" -- that's where the individual's preference comes in. I work more hours because I want the money. Nobody pays me to not work.
> there aren’t a ton of good options for workers seeking shorter hours.
Is that true? Most trades can work fewer hours, medical workers like nurses can, hairdressers, plenty of writers are freelance, the entire gig economy.
It seems like big companies don't provide the option, for software at least. I always chalked that up to more bureaucratic processes which add some fixed cost for each employed person.
>technology will lead to a utopia where we don't have to work
I'm kind of ok with doing more work in the same time, though if I'm becoming way more effective I'll probably start pushing harder on my existing discussions with management about 4 day work weeks (I'm looking to do 4x10s, but I might start looking to negotiate it to "instead of a pay increase, let's keep it the same but a 4x8 week").
If AI lets me get more done in the same time, I'm ok with that. Though, on the other hand, my work is budgeting $30/mo for the AI tools, so I'm kind of figuring that any time that personally-purchased AI tools are saving me, I deduct from my work week. ;-)
>very few people know what to do with idle hands
"Millions long for immortality that don't know what to do with themselves on a rainy Sunday afternoon." -- Susan Ertz
Thank you! I didn't know this had a name. I remember thinking something along these lines in seventh grade social studies when we learned that Eli Whitney's cotton gin didn't actually end up improving conditions for enslaved people.
I suspected this would be the case with AI too. A lot of people said things like "there won't be enough work anymore" and I thought, "are you kidding? Do you use the same software I use? Do you play the same games I've played? There's never enough time to add all of the features and all of the richness and complexity and all of the unit tests and all of the documentation that we want to add! Most of us are happy if we can ship a half-baked anything!"
The only real question I had was whether the tech sector would go through a prolonged, destructive famine before realizing that.
Econ 101: supply is finite, demand infinite. Increased efficiency of production means that demand will meet the new price point, not that demand will cease to exist.
There are probably plenty of goods that are counter examples, but time utilization isn't one of them, I don't think.
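As a toy illustration of demand meeting the new price point (a made-up linear demand curve, not real data):

    # q = a - b*p: quantity demanded rises as price falls.
    a, b = 100.0, 2.0

    def quantity_demanded(price: float) -> float:
        return max(0.0, a - b * price)

    print(quantity_demanded(30.0))  # 40.0 units at the old price
    print(quantity_demanded(15.0))  # 70.0 units after efficiency halves the price

Production gets more efficient, the price falls, and total consumption goes up rather than the work disappearing.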
I don't think we can so easily pin it on capitalism. Capitalism brings incentives that drive work hours and expectations up for sure, but that's not the only thing in play.
Workers are often looking to make more money, take more responsibility, or build some kind of name or reputation for themselves. There's absolutely nothing wrong with that, but that goal also incentivizes to work harder and longer.
There's no one size fits all description for workers, everyone's different. The same is true for the whole system though, it doesn't roll up to any one cause.
Unions had early wins that mostly either didn't go anywhere, or the companies worked around. The real win that normalized it was for capitalistic reasons, when Henry Ford shortened the workday/week because he wanted his workers to buy (and have reason to buy) his cars. Combined with other changes, he figured he'd retain workers better and reduce mistakes from fatigue, and when he remained competitive others followed suit.
I wish people could handle an idle mind, I expect we'd all be better off. But yeah, realistically most people when idle would do a lot of damage.
It's always possible that risk would be transitional. Anyone alive today, at least in Western-style societies, likely doesn't know a life without high levels of stress and distraction. It makes sense that change would cause people to lash out; maybe people growing up in the new system would handle it better (if they had the chance).
I don’t think people would be idle. They’d just be concerned with different things, like social dynamics, games/competition/sports, raising family etc.
What an absurd straw man. Moving the needle away from “large portions of the population are a few paychecks away from being homeless” does not constitute “the devil’s playground”.
Where’s all of the articles that HN loves about kids these days not being bored anymore? What about google’s famous 20% time?
It’s Solow’s paradox: “You can see the computer age everywhere, except in productivity statistics.”
— Nobel Prize-winning American economist Robert Solow, in 1987
When it comes to programming, I would say AI has about doubled my productivity so far.
Yes, I spend time on writing prompts, like "Never do this. Never do that. Always do this. Make sure to check that.", to tell the AI my coding preferences. But those prompts are forever, and I wrote most of them months ago, so now I just capitalize on them.
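For instance, a minimal sketch of how reusing them can look, assuming the OpenAI Python client (the model name and the preference text are placeholders, not my actual setup):

    from openai import OpenAI

    # Written once, months ago; reused on every request since.
    CODING_PREFERENCES = """
    Never use wildcard imports. Never swallow exceptions.
    Always add type hints. Always check error returns.
    Make sure to keep functions small and pure where possible.
    """

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(task: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you prefer
            messages=[
                {"role": "system", "content": CODING_PREFERENCES},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

    print(ask("Write a function that parses ISO 8601 timestamps."))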
I'm always a little bit skeptical whenever people say that AI has resulted in anything more than a personal 50% increase in productivity.
Like, just stop and think about it for a second. You're saying that AI has doubled your productivity. So, you're actually getting twice as much done as you were before? Can you back this up with metrics?
I can believe AI can make you waaaaaaay more productive in selective tasks, like writing test conditions, making quick disposable prototypes, etc, but as a whole saying you get twice as much done as you did before is a huge claim.
It seems more likely that people feel more productive than they did before, which is why you have this discrepancy between people saying they're 2x-10x more productive vs workplace studies where the productivity gain is around 25% on the high end.
I'm surprised there are developers who seem to not get twice as much done with AI than they did without.
I see it happening right in front of my eyes. I tell the AI to implement a feature that would take me an hour or more to implement and after one or two tries with different prompts, I get a solution that is almost perfect. All I need to do is fine-tune some lines to my liking, as I am very picky when it comes to code. So the implementation time goes down from an hour to 10 minutes. That is something I see happening on a daily basis.
Have you actually tried? Spent some time writing good prompts, used state-of-the-art models (o3 or Gemini 2.5 Pro), and let the AI implement features for you?
Would you be comfortable sharing a bit about the kind of work you do? I’m asking because I mostly write iOS code in Swift, and I feel like AI hasn’t been all that helpful in that area. It tends to confidently spit out incorrect code that, even when it compiles, usually produces bad results and doesn’t really solve the problem I’m trying to fix.
That said, when I had to write a Terraform project for a backend earlier this year, that’s when generative AI really shined for me.
For ios/swift the results reflect the quality of the information available to the LLM.
There is a lack of training data: Apple docs aren't great or really thorough, and much of the documentation is buried in WWDC videos. Following StackOverflow posts requires an understanding of how the APIs evolved over time, which confuses newcomers as well as code generators. StackOverflow is also littered with incorrect or outdated solutions to iOS/Swift coding questions.
I can't comment on Swift, but I presume training data for it is less available online. Whereas with Python, which I use, in my anecdotal experience it can produce quite decent code, with some sparks of brilliance here and there. But I use it for boilerplate code I find boring, not the core stuff. I'd say as time progresses and these models get more data, it may help with Swift too (though this may take a while: I remember a convo with someone online who said the Swift code GPT-3.5 produced was bad, referencing libraries that did not exist).
Which LLMs have you used? Everything from o3-mini has been very useful to me. Currently I use o3 and gemini-2.5 pro.
I do full stack projects, mostly Python, HTML, CSS, Javascript.
I have two decades of experience. Not just my work time during these two decades but also much of my free time. As coding is not just my work but also my passion.
So seeing my productivity double over the course of a few months is quite something.
My feeling is that it will continue to double every few months from now on. In a few years we can probably tell the AI to code full projects from scratch, no matter how complex they are.
GG: you do twice the work, with twice the mental strain, for the same wage. And you spend time writing prompts instead of mastering your skills, becoming less competitive as a professional (since anyone can use AI; that's a given level now).
Sounds like a total win.
> When it comes to programming, I would say AI has about doubled my productivity so far
For me it’s been up to 10-100x for some things, especially starting from scratch
Just yesterday I did a big overhaul of some scrapers that would have taken me at least a week to get done manually (maybe 2-4 hrs/day for 5 days, ~15 hrs). With the help of ChatGPT, I was done in less than 2 hours.
So not only was it less work, it was a much shorter delivery time.
Have you tested them across different models? It seems to me that even if you manage to cajole one particular model into behaving a particular way, a different model would end up in a different state with the same input, so it might need a completely different prompt. So all the prompts would become useless whenever the vendor updates the model.
What is it like to maintain the code? How long have they been in production? How many iterations (enhancements, refactoring, ...) cycles have you seen with this type of code?
The real problem is with lower skilled positions. Either people in easier roles or more junior people. We will end up with a significant percent of the population who are unemployable because we lack positions commensurate with their skills.
Yep, I'm talking about non-office jobs, such as in warehouses and retail. Why do you need sales associates when you can just ask an AI associate that knows everything.
But the study is also about LLMs currently impacting wages and hours. We're still in the process of creating targeted models for many domains. It's entirely possible that customer representatives and clerks will start to be replaced in part by AI tools. It also seems that the current increase in work could mean headcount is kept flat, which is great for a business but bad for employment.
There was a podcast Ezra Klein did a couple months ago, and a point his guest made about education sticks with me: the next generation of students who will be successful are the ones who can use AI as a tool, not as a dependency - outcomes which may largely depend on how the education system changes with the times.
That's the story of all technology, and it's the argument that AI won't take jobs which pmarca etc. have been making for a while now. Our focus will be able to shift into ever narrower areas. Cinema was barely a thing 100 years ago. A hundred years from now we'll get some totally new industry thanks to freed-up labor.
Also the nature of software is that the more software is written the more software needs to be written to manage, integrate, and make use of all the software that has been written.
AI automating software production could hugely increase demand for software.
The same thing happened as higher level languages replaced manual coding in assembly. It allowed vastly more software and more complex and interesting software to be built, which enlarged the industry.
> AI automating software production could hugely increase demand for software
Let's think this through
1: AI automates software production
2: Demand for software goes through the roof
3: AI has lowered the bar for making software, so many more people can do it with a 'good-enough' degree of success
4: People are making software for cheap because the supply of 'good enough' AI prompters still dwarfs the rising demand for software
5: The value of being a skilled software engineer plummets
6: The rich get richer, the middle class shrinks even further, and the poor continue to get poorer
This isn't just some kind of wild speculation. Look at any industry over the history of mankind. Look at Textiles
People used to make a good living crafting clothing, because it was a skill that took time to learn and master. Automation makes it so anyone can do it. Nowadays, automation has made it so people who make clothes are really just operating machines. Throughout my life, clothes have always been made by the cheapest overseas labour that capital could find. Sometimes it has even turned out that companies were using literal slaves or child labour.
Meanwhile the rich who own the factories have gotten insanely wealthy, the middle class has shrunk substantially, and the poor have gotten poorer
Do people really not see that this will probably be the outcome of "AI automates literally everything"?
Yes, there will be "more work" for people. Yes, overall society will produce more software than ever
McDonalds also produces more hamburgers than ever. The company makes tons of money from that. The people making the burgers usually earn the least they can legally be paid
The agricultural revolution did in fact reduce the amount of work in society by a lot, though. That's why we can have weekends, vacations, retirement, and study, instead of working non-stop from age 12 to death like we did 150 years earlier.
Reducing the amount of work done by humans is actually a good thing, though institutional structures must change to spread this reduction across society as a whole, instead of mass unemployment plus no retirement before 70 and 50-hour work weeks for those who do work.
AI isn't a problem, unchecked capitalism can be one.
That's not really why (at least in the U.S.): it was due to strong labor laws. Otherwise, post industrial revolution, you'd still have people working 12 hours a day, 7 days a week - though with minimum wage stagnation one could argue that many people have to do this anyway just to make ends meet.
The agricultural revolution has been very beneficial for feeding more people with less labor inputs, but I'm kind of skeptical of the claim that it led to weekends (and the 40hr workweek). Those seem to have come from the efforts of the labor movement on the manufacturing side of things (late 19th, early 20th century). Business interests would have continued to work people 12hrs a day 7 days a week (plus child labor) to maximize profits regardless of increasing agricultural efficiency.
Agricultural work is seasonal. For most of the year you aren't working in the fields. Yes, planting and harvesting can require longer hours, because you need them done as fast as possible to maximize yield and reduce spoilage, but you aren't harvesting and planting for the entire year non-stop. And even then, most people worked at their own pace; not every farm was as labor-productive as another, or even had to be. Some people valued their time, health, and comfort; some valued being able to brew more beer with their 5% higher yield; some valued leisure time more. It was a personal choice that people made. The industrial revolution is the outlier in making people work long, non-stop hours all the time. Living a subsistence-farming lifestyle doesn't mean you're hanging on by a bare thread of survival the whole time, like a lot of pop media likes to portray.
Is there any evidence that AGI is a meaningful concept? I don't want to call it "obviously" a fantasy, but it's difficult to paint the path towards AGI without also employing "fantasize".
> The example they gave was that search engines plus digital documents cut junior lawyer headcount by a lot.
Seems like a pretty general pattern.
> FB has long wanted to have a call center for its ~3.5B users.
There is zero chance he wants to pay even a single person to sit and take calls from users.
He would eliminate every employee at Facebook if it were technically possible to automate what they do.
Fiefdoms and empires will be maintained.
https://en.m.wikipedia.org/wiki/Productivity_paradox
The sad part is, do you think we'll see this productivity gain as an opportunity to stop the culture of over working? I don't think so. I think people will expect more from others because of AI.
If AI makes employees twice as efficient, do you think companies will decrease working hours or cut their employment in half? I don't think so. It's human nature to want more. If 2 is good, 4 is surely better.
So instead of reducing employment, companies will keep the same number of employees because that's already factored into their budget. Now they get more output to better compete with their competitors. To reduce staff would be to be at a disadvantage.
So why do we hear stories about people being let go? AI is currently a scapegoat for companies that were operating inefficiently and over-hired. It was already going to happen. AI just gave some of these larger tech companies a really good excuse. They weren't exactly going to admit they made a mistake and over-hired, now were they? Nope. AI was the perfect excuse.
Like all things, it's cyclical. Hiring will go up again. The AI boom will bust. On to the next thing. One thing is for certain though: we all now have a fancy new calculator.
https://impact.economist.com/projects/responsible-innovation...
Automation is one way to do that.
I skipped over junior positions for the most part
I don’t see that not working now
https://libcom.org/article/phenomenon-bullshit-jobs-david-gr...
I am amenable to the idea that there is a lot of wasted, pointless work, but not to the idea that there's some kayfabe arrangement where everyone involved thinks it's pointless but pretends otherwise; I think most people around such work have genuinely convinced themselves it's important.
This seems observationally true in the tech industry, where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones, and meanwhile the quality of the consumer software that people actually use is in a nosedive.
The AI was surprisingly good at filling in some holes in my specification. It generated a ton of valid C++ code that actually compiled (except it omitted the necessary #includes). I built and ran it and... the output was completely wrong.
OK, great. Now I have a few hundred lines of C++ I need to read through and completely understand to see why it's incorrect.
I don't think it will be a complete waste of time because the exercise spurred my thinking and showed me some interesting ways to solve the problem, but as far as saving me a bunch of time, no. In fact it may actually cost me more time trying to figure out what it's doing.
With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
As one of those folks, no it's pretty bad in that world as well. For menial crap it's a great time saver, but I'd never in a million years do the "vibe coding" thing, especially not with user-facing things or especially not for tests. I don't mind it as a rubber duck though.
I think the problem is that there are 2 groups of users: the technical ones like us, and then the managers and C-levels etc. They see it spit out a hundred lines of code in a second, and as far as they know (and care) it looks good, not realizing that someone now has to spend their time reviewing those 100 lines, plus carrying the burden of maintaining them into the future. All they see is a way to get the pesky, expensive devs replaced, or at least a chance to squeeze more out of them. The system is so flashy and impressive looking that you can't even blame them for falling for the marketing and hype; after all, that's what all the AIs are being sold as: omnipotent and omniscient worker replacers.
Watching my non-technical CEO "build" things with AI was enlightening. He prompts it for something fairly simple, like a TODO List application. What it spits out works for the most part, but the only real "testing" he does is clicking on things once or twice and he's done and satisfied, now convinced that AI can solve literally everything you throw at it.
However if he were testing the solution as a proper dev would, he'd see that the state updates break after a certain amount of clicks, and that the list was glitching out sometimes, and that adding things breaks on scroll and overflows the viewport, and so on. These are all real examples of an "app" he made by vibe coding, and after playing around with it myself for all of 3 minutes I noticed all these issues and more in his app.
As someone working on routine problems in mainstream languages where training data is abundant, LLMs are not even great for that. Sure, they can output a bunch of code really quickly that on the surface appears correct, but on closer inspection it often uses nonexistent APIs, the logic is subtly wrong or convoluted for no reason, it does things you didn't tell it to do or ignores things you did, it has security issues and other difficult to spot bugs, and so on.
The experience is pretty much what you summed up. I've also used Claude 3.5 the most, though all other SOTA models have the same issues.
From there, you can go into the loop of copy/pasting errors to the LLM or describing the issues you did see in the hopes that subsequent iterations will fix them, but this often results in more and different issues, and it's usually a complete waste of time.
You can also go in and fix the issues yourself, but if you're working with an unfamiliar API in an unfamiliar domain, then you still have to do the traditional task of reading the documentation and web searching, which defeats the purpose of using an LLM to begin with.
To be clear: I don't think LLMs are a useless technology. I've found them helpful at debugging specific issues, and implementing small and specific functionality (i.e. as a glorified autocomplete). But any attempts of implementing large chunks of functionality, having them follow specifications, etc., have resulted in much more time and effort spent on my part than if I had done the work the traditional way.
The idea of "vibe coding" seems completely unrealistic to me. I suspect that all developers doing this are not even checking whether the code does what they want to, let alone reviewing the code for any issues. As long as it compiles they consider it a success. Which is an insane way of working that will lead to a flood of buggy and incomplete applications, increasing the dissatisfaction of end users in our industry, and possibly causing larger effects not unlike the video game crash of 1983 or the dot-com bubble.
I agree. AI is great for stuff that's hard to figure out but easy to verify.
For example, I wanted to know how to lay out something a certain way in SwiftUI and asked Gemini. I copied what it suggested, ran it and the layout was correct. I would have spent a lot more time searching and reading stuff compared to this.
It's a snippet I've written a few times before to debug data streams, but it's always annoying to get the alignment just right.
I feel like that is the sweet spot for AI, to generate actual snippets of routine code that has no bearing on security or functionality, but lets you keep thinking about the problem at hand while it does that 10 minutes of busy work.
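Something like this minimal Python sketch (illustrative only, not my actual snippet):

    # Minimal hex-dump helper: the routine, fiddly-alignment kind of snippet
    # that's tedious to rewrite by hand every time.
    def hexdump(data: bytes, width: int = 16) -> str:
        lines = []
        for offset in range(0, len(data), width):
            chunk = data[offset:offset + width]
            hex_part = " ".join(f"{b:02x}" for b in chunk)
            # Pad the hex column so the ASCII column always lines up.
            hex_part = hex_part.ljust(width * 3 - 1)
            ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
            lines.append(f"{offset:08x}  {hex_part}  {ascii_part}")
        return "\n".join(lines)

    print(hexdump(b"Hello, data stream! \x00\x01\x02\xff"))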
I do know people who seem to be having more success with the "vibecoding" workflow on the front end though.
For a time, we can justify this kind of extra work by imagining that it is an upfront investment. I think that is what a lot of people are doing right now. It remains to be seen when AI-assisted labor is still a net positive after we stop giving it special grace as something that will pay off a lot later if we spend a lot of time on it now.
I think it's often better to just skip this and delete the code. The cool thing about those agents is that the cost of trying this out is extremely cheap, so you don't have to overthink it and if it looks incorrect, just revert it and try something else.
I've been experimenting with Junie for the past few days and have had a very positive experience. It wrote a bunch of tests that I'd been postponing for quite some time, and it was mostly correct from a single-sentence prompt. Sometimes it does something incorrect, but I usually just revert it, move on, and try something else later. There's definitely a sweet spot of tasks it does well, and you have to experiment a bit to find it.
Most software should not exist.
That's not even meant in the tasteful "It's a mess" way. From a purely money-making efficiency standpoint, upwards of 90% of the code I've written in this time has not meaningfully contributed back to the enterprise, and I've tried really hard to get that number lower. Mind you, this is professional software. If you consider the vibe coder guys, I'd estimate that number MUCH higher.
It just feels like the whole way we've fit computing into the world is misaligned. We spend days building UIs that don't help the people we serve and that break at the first change to the process, and because of the support burden of that UI we never get to actually automate anything.
I still think computers are very useful to humanity, but we have forgotten how to use them.
This is Sturgeon's law. (1)
And yes, but it's hard or impossible to identify the useful 10% ahead of time. It emerges after the fact.
1) https://en.wikipedia.org/wiki/Sturgeon%27s_law
> Most software should not exist.
> That's not even meant in the tasteful "It's a mess" way. From a purely money-making efficiency standpoint, upwards of 90% of the code I've written in this time has not meaningfully contributed back to the enterprise, and I've tried really hard to get that number lower. Mind you, this is professional software. If you consider the vibe coder guys, I'd estimate that number MUCH higher.
I've worked on countless projects at this point that seemed to serve no purpose, even at the outset, and had no plan to even project cost savings or profit, except, at best, some hand-waving approximation.
Even worse, many companies are completely uninterested in even conceptualizing operating costs for a given solution. They get sold on some cloud thing cause "OpEx" or whatever, and then spend 100s of hours a month troubleshooting intricate convoluted architectures that accomplish nothing more than a simple relational database and web server would.
Sure, the cloud bill is a lower number, but if your staff is burning hours every week fighting `npm audit` issues, and digging through CloudWatch for errors between 13 Lambda functions, what did you "save"?
I've even worked on more than one project that existed specifically to remove manual processes (think printing and inspecting documents) to "save time." Sure, now shop floor and assembly workers inspect fewer papers manually, but you need a whole new layer of technical staff to troubleshoot crap constantly.
Oh, and the companies don't have in-house staff to maintain the thing, and have no interest in actually hiring, so they write huge checks to a consulting company to "maintain" the stuff at a cost often orders of magnitude higher than it would cost to hire staff who would actually own the projects. And these people have a conflict of interest to maximize profit, so they want to keep "fixing" things, etc.
I think a lot of this is the outgrowth of the 2010s where every company was going to be a "tech company" and cargo-culted processes without understanding the purpose or rationale, and lacking competent people to properly scope and deliver solutions that work, are on time and under budget, and tangibly deliver value.
This statement is incredibly accurate
because those "best programmers" don't want to be making temperature converters nor twitter clones (unless they're paid mega bucks). This enables the low paid "worst" programmers to do those jobs for peanuts.
It's an acceptable outcome imho.
I think it's too early to say whether AI is exacerbating the problem (though I'm sympathetic to the view that it is) or improving it, or just maintaining the status quo.
I mean, isn't that obvious looking at economic output and growth? The Shopify CEO recently published a memo in which he claimed that high achievers saw "100x growth". Odd that this isn't visible in the Shopify market cap. Did they fire 99% of their engineers instead? Maybe the memo was AI written too.
Are there any 5 man software companies that do the work of 50? I haven't seen them. I wonder how long this can go on with the real world macro data so divorced from what people have talked themselves into.
There’s little sign of any AI company managing to build something that doesn’t just turn into a new baseline commodity. Most of these AI products are also horribly unprofitable, which is another reality that will need to be faced sooner rather than later.
To paraphrase Lee Iacocca: We must stop and ask ourselves, how many video games do we really need?
I recently retired from 40 years in software-based R&D and have been wondering the same thing. Wasn't it true that 95% of my life's work was thrown away after a single demo or a disappointingly short period of use?
And I think the answer is yes, but this is just the cost of working in an information economy. Ideas are explored and adopted only until the next idea replaces them or the surrounding business landscape shifts yet again. Unless your job is in building products like houses or hammers (which evolve very slowly or are too expensive to replace), the cost of doing business today is a short lifetime for any product; products are replaced in increasingly fast cycles, useful only until they're no longer competitive. And this evanescent lifetime is especially the case for virtual products like software.
The essence of software is to prototype an idea for info processing that has utility only until the needs of business change. Prototypes famously don't last, and increasingly today, they no longer live long enough even to work out the bugs before they're replaced with yet another idea and its prototype that serves a new or evolved mission.
Will AI help with this? Only if it speeds up the cycle time or reduces development cost, and both of those have a theoretical minimum, given the time needed to design and review any software product has an irreducible minimum cost. If a human must use the software to implement a business idea then humans must be used to validate the app's utility, and that takes time that can't be diminished beyond some point (just as there's an inescapable need to test new drugs on animals since biology is a black box too complex to be simulated even by AI). Until AI can simulate the user, feedback from the user of new/revised software will remain the choke point on the rate at which new business ideas can be prototyped by software.
Yes... basically in life, you have to find the definition of "to matter" that you can strongly believe in. Otherwise everything feels aimless, the very life itself.
The rest of what you ponder in your comment is the same. And I'd like to add that baselines have shifted a lot over the years of civilization. I like to think about one specific example: painkillers. Painkillers were not used during medical procedures in a widespread manner until some 150 years ago, maybe even later. Now it's much less horrible to participate in those procedures, for everyone involved really, and the outcomes are better for this factor alone, because the patient moves around less while anesthetized.
But even this is up for debate. All in all, it really boils down to what the individual feels like it's a worthy life. Philosophy is not done yet.
The work brings modest wealth over time, allows me and my family to live in a long-term safe place (Switzerland), and builds a small reserve for bad times (or inheritance, early retirement, etc.; this is Europe, no need to save up for kids' education or potentially massive healthcare bills). I don't need more from life.
Who in their right mind would work when 95 out of 100 people around them are slacking off all day? Unless you pay them really well. So well that they prefer to work than to slack off. But then the slackers will want nicer things to do in their free time that only the workers can afford. And then you'd end up at the start.
If that were really true, who gets to decide which 5% get to do the work while the rest leech off them?
Because I certainly would not want to be in that 5%.
It mattered enough for someone to pay you money to do it, and that money put food on the table and clothes on your body and a roof over your head and allowed you to contribute to larger society through paying taxes.
Is it the same as discovering that E = mc² or Jonas Salk's contributions? No, but it's not nothing either.
Would we have fewer video games? If all our basic needs were met and we had a lot of free time, more people might come together to create games together for free.
I mean, look at how much free content (games, stories, videos, etc) is created now, when people have to spend more than half their waking hours working for a living. If people had more free time, some of them would want to make video games, and if they weren’t constrained by having to make money, they would be open source, which would make it even easier for someone else to make their own game based on the work.
http://youtube.com/watch?v=9lDTdLQnSQo
Who benefits from the situation? You or I, who don't have to make a U-turn to get gas at this intersection? Perhaps, but that's not much benefit compared to the opportunity cost of three prime corner lots squandered on the same single use. The clerk at the gas station, for having a job available? Perhaps, although maybe their labor in aggregate would have been employed in other, less redundant uses that could benefit our society more than selling smokes and putting $20 on pump 4 at 3am. The real beneficiary of the entire arrangement is the fisherman: the owner or shareholder who ultimately skims from all the pots, thanks to having what is effectively a modern version of a plantation sharecropper, spending all their money in the company store and on company housing, with a fig leaf of being able to choose from any number of minimum-wage jobs, spend their wages in any number of national chain stores, and rent any number of increasingly investor-owned properties. Quite literally all owned by the same shareholders, when you consider how people diversify their investments across these sectors.
Now, instead of misspelled words (which still happen all the time), we have incorrect words substituted in place of the correct ones.
Look at any long form article on any website these days and it will likely be riddled with errors, even on traditional news websites!
The cost, in money or time, for getting certain types of work done decreases. People ramp up demand to fill the gap, "full utilization" of the workers.
It's a very old claim that the next technology will lead to a utopia where we don't have to work, or work drastically less. Time and again we prove that we don't actually want that.
My hypothesis (I'm sure it's not novel or unique) is that very few people know what to do with idle hands. We tend to keep stress levels high as a distraction, and tend to freak out in various ways if we find ourselves with low stress and nothing that "needs" to be done.
[1] https://en.m.wikipedia.org/wiki/Jevons_paradox
It actually does, but due to the wrong distribution of the rewards gained from that tech (automation), it does not work for the common folks.
Let's take a simple example: you, me, and 8 other HN users work in Bezos' warehouse. We each work 8h/day. Suddenly a new tech comes in, and each unit of that machine can do the work of 2-4 of us alone. If Bezos buys 4 of the units and sets each one to work at x2 capacity, then 8 of us now have 8h/day x 5 days x 4 weeks = 160h of leisure a month.
Problem is, the 8 of us still need money to survive (food, rent, utilities, healthcare, etc.). So, according to the tech utopians, the 8 of us can now use those 160h of free time to focus on more important and rewarding work. (See, in the context of all the AI peddlers, how using AI will free us to do more important and rewarding work!) But to survive, that "rewarding work" turns out to be gig work, or something of the same effort or more hours.
So in practice, the owner controlling the automation gets more free time to attend interviews and political/social events, while the people automated away fall downward and have to work harder just to survive. Of course, I hope our over-enthusiastic brethren who are paying LLM providers for the privilege of training their own replacements figure out the equation soon, and don't get sold on the "free time to do more meaningful work" line the same way the Bezos warehouse gave some of us some leisure while the automation was coming online and still needed a failsafe for a while. :)
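To make the arithmetic explicit, a toy Python sketch using only the numbers above (all values illustrative):

    # Toy model of the warehouse example (numbers from the comment above).
    workers = 10               # you, me, and 8 other HN users
    hours_per_day = 8
    machines = 4               # units Bezos buys
    capacity_per_machine = 2   # each unit runs at x2 worker capacity

    displaced = machines * capacity_per_machine    # 8 workers replaced
    freed_hours = hours_per_day * 5 * 4            # per displaced worker, per month
    print(f"{displaced} of {workers} workers displaced;")
    print(f"each gains {freed_hours}h/month of 'leisure' -- and loses the wages for it")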
Regardless of anyone's thoughts on genAI in particular, it's important for us as a society to consider what our economic model looks like in a future where technology breaks the assumption of near-universal employment. Maybe that's UBI. Maybe it's a system of universally accessible educational stipends and pumping public funds into venture capital. Maybe it's something else entirely.
Just a lot of words for "lazy"; it's built into living organisms.
The whole economic system today is constructed to ensure that one would suffer from being "lazy". And this would be the case until post-scarcity.
We are currently a long way from that kind of change as current AI tools suck by comparison to literally 1,000x increases in productivity. So, in well under 100 years programming could become extremely niche.
We increased production and needed fewer farmers, but we now have so few farmers that most people have very little idea of what food really is, where it comes from, or what it takes to run our food system.
Higher productivity is good to a point, but eventually it risks becoming too fragile.
Yes, but.
There are more jobs in other fields adjacent to food production, particularly in distribution. The middle class didn't exist back then, and retail workers are now a large percentage of workers in most parts of the world.
But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
For there to be a "better option" (as in, you're paid money for not working more hours) what are you actually being paid to do?
For all the thoughts that come to mind when I say "work 20 hours a week instead of 40" -- that's where the individual's preference comes in. I work more hours because I want the money. Nobody pays me to not work.
Is that true? Most trades can work fewer hours, medical workers like nurses can, hairdressers, plenty of writers are freelance, the entire gig economy.
It seems like big companies don't provide the option, for software at least. I always chalked that up to more bureaucratic processes, which add some fixed cost for each employed person.
I'm kind of ok with doing more work in the same time, though if I'm becoming way more effective I'll probably start pushing harder on my existing discussions with management about 4 day work weeks (I'm looking to do 4x10s, but I might start looking to negotiate it to "instead of a pay increase, let's keep it the same but a 4x8 week").
If AI lets me get more done in the same time, I'm ok with that. Though, on the other hand, my work is budgeting $30/mo for the AI tools, so I'm kind of figuring that any time that personally-purchased AI tools are saving me, I deduct from my work week. ;-)
>very few people know what to do with idle hands
"Millions long for immortality that don't know what to do with themselves on a rainy Sunday afternoon." -- Susan Ertz
I suspected this would be the case with AI too. A lot of people said things like "there won't be enough work anymore" and I thought, "are you kidding? Do you use the same software I use? Do you play the same games I've played? There's never enough time to add all of the features and all of the richness and complexity and all of the unit tests and all of the documentation that we want to add! Most of us are happy if we can ship a half-baked anything!"
The only real question I had was whether the tech sector would go through a prolonged, destructive famine before realizing that.
There are probably plenty of goods that are counterexamples, but time utilization isn't one of them, I don't think.
That's the capitalist system. Unions successfully fought to decrease the working day to 8 hrs.
Workers are often looking to make more money, take on more responsibility, or build some kind of name or reputation for themselves. There's absolutely nothing wrong with that, but that goal also incentivizes them to work harder and longer.
There's no one size fits all description for workers, everyone's different. The same is true for the whole system though, it doesn't roll up to any one cause.
I worry more that an idle humanity will cause a lot more conflict. “An idle mind’s the devil’s playground” and all.
It's always possible that the risk would be transitional. Anyone alive today, at least in western-style societies, likely doesn't know a life without high levels of stress and distraction. It makes sense that change would cause people to lash out; maybe people growing up in that new system would handle it better (if they had the chance).
Where’s all of the articles that HN loves about kids these days not being bored anymore? What about google’s famous 20% time?
Idle time isn’t just important, it’s the point.
"In the 1970s when office computers started to come out we were told:
'Computers will save you SO much effort you won't know what to do with all of your free time'.
We just ended up doing more things per day thanks to computers."
"In the early 1900s, 25% of the US population worked in agriculture.
Today it's 2%.
I would imagine that economists back then would be astounded by that change.
I should point out: there were also no pediatric oncologists back then."
Yes, I spend time on writing prompts, like "Never do this. Never do that. Always do this. Make sure to check that.", to tell the AI my coding preferences. But those prompts are forever. I wrote most of them months ago, so now I just capitalize on them.
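A minimal Python sketch of how such standing prompts get reused (the rules text and helper below are made-up illustrations, not my actual prompts):

    # Hypothetical standing "coding preferences" prompt: written once,
    # prepended to every request, so the effort keeps paying off.
    CODING_RULES = """
    Never use global variables. Never swallow exceptions silently.
    Always add type hints. Make sure to check edge cases (empty input, None).
    """.strip()

    def build_prompt(task: str) -> str:
        # Combine the long-lived rules with the one-off task description.
        return f"{CODING_RULES}\n\nTask: {task}"

    print(build_prompt("Write a function that parses ISO-8601 dates."))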
Like, just stop and think about it for a second. You're saying that AI has doubled your productivity. So, you're actually getting twice as much done as you were before? Can you back this up with metrics?
I can believe AI can make you waaaaaaay more productive in selective tasks, like writing test conditions, making quick disposable prototypes, etc, but as a whole saying you get twice as much done as you did before is a huge claim.
It seems more likely that people feel more productive than they did before, which is why you have this discrepancy between people saying they're 2x-10x more productive vs workplace studies where the productivity gain is around 25% on the high end.
I see it happening right in front of my eyes. I tell the AI to implement a feature that would take me an hour or more to implement and after one or two tries with different prompts, I get a solution that is almost perfect. All I need to do is fine-tune some lines to my liking, as I am very picky when it comes to code. So the implementation time goes down from an hour to 10 minutes. That is something I see happening on a daily basis.
Have you actually tried? Spent some time writing good prompts, used state-of-the-art models (o3 or gemini-2.5 pro), and let AI implement features for you?
That said, when I had to write a Terraform project for a backend earlier this year, that’s when generative AI really shined for me.
There is a lack of training data; Apple docs aren't great or really thorough, much documentation is buried in WWDC videos, and it requires an understanding of how the APIs evolved over time to avoid confusion when following Stack Overflow posts, which confuses newcomers as well as code generators. Stack Overflow is also littered with incorrect or outdated solutions to iOS/Swift coding questions.
I do full stack projects, mostly Python, HTML, CSS, Javascript.
I have two decades of experience. Not just my work time during these two decades but also much of my free time. As coding is not just my work but also my passion.
So seeing my productivity double over the course of a few months is quite something.
My feeling is that it will continue to double every few months from now on. In a few years we can probably tell the AI to code full projects from scratch, no matter how complex they are.
With Swift it was somewhat helpful, but not nearly as much. I eventually stopped using it for Swift.
For me it’s been up to 10-100x for some things, especially starting from scratch
Just yesterday, I did a big overhaul of some scrapers, that would have taken me at least a week to get done manually (maybe doing 2-4 hrs/day for 5 days ~ 15hrs). With the help of ChatGPT, I was done in less than 2 hours
So not only it was less work, it was a way shorter delivery time
And a lot less stress
Have you tested them across different models? It seems to me that even if you manage to cajole one particular model into behaving a particular way, a different model would end up in a different state with the same input, so it might need a completely different prompt. So all the prompts would become useless whenever the vendor updates the model.
I read each line of the commit diff and change it, if it is not how I would have done it myself.
But, the study is also about LLMs currently impacting wages and hours. We're still in the process of creating targeted models for many domains. It's entirely possible the customer representatives and clerks will start to be replaced in part by AI tools. It also seems that the current increase in work could mean that headcount is kept flat, which is great for a business, but bad for employment.
I think skills in using AI to augment work will become just a new form of literacy.
AI automating software production could hugely increase demand for software.
The same thing happened as higher level languages replaced manual coding in assembly. It allowed vastly more software and more complex and interesting software to be built, which enlarged the industry.
Let's think this through
1: AI automates software production
2: Demand for software goes through the roof
3: AI has lowered the skill ceiling required to make software, so many more can do it with a 'good-enough' degree of success
4: People are making software for cheap because the supply of 'good enough' AI prompters still dwarfs the rising demand for software
5: The value of being a skilled software engineer plummets
6: The rich get richer, the middle class shrinks even further, and the poor continue to get poorer
This isn't just some kind of wild speculation. Look at any industry over the history of mankind. Look at textiles.
People used to make a good living crafting clothing, because it was a skill that took time to learn and master. Automation makes it so anyone can do it. Nowadays, automation has made it so people who make clothes are really just operating machines. Throughout my life, clothes have always been made by the cheapest overseas labour that capital could find. Sometimes it has even turned out that companies were using literal slaves or child labour.
Meanwhile the rich who own the factories have gotten insanely wealthy, the middle class has shrunk substantially, and the poor have gotten poorer
Do people really not see that this will probably be the outcome of "AI automates literally everything"?
Yes, there will be "more work" for people. Yes, overall society will produce more software than ever
McDonalds also produces more hamburgers than ever. The company makes tons of money from that. The people making the burgers usually earn the least they can legally be paid
Is it that straightforward? What about theater jobs? Vaudeville?
Reducing the amount of work done by humans is actually a good thing, though institutional structures must change to spread this reduction across society as a whole, instead of having mass unemployment plus no retirement before 70 and 50-hour work weeks for those who do work.
AI isn't a problem, unchecked capitalism can be one.
https://firmspace.com/theproworker/from-strikes-to-labor-law...
Obesity, mineral depletion, pesticides, etc.
So in a way automation did make more work.
We won’t need jobs so we would be just fine.
- Assuming god comes to earth tomorrow, earth will be heaven
- Assuming an asteroid strikes earth in the future, we need settlements on Mars
Etc. Pointless discussion, gossip, and BS required for human bonding, like on this forum or in a beer hall.