As is often the case, the title is a bit misleading and implies a universality that I don’t think the author intended.
He is specifically talking about AI, and saying (in my understanding) that you shouldn’t worry too much about whether your specific thing will be overwritten by AI, as long as you focus on actually creating true, real value with your work.
I agree with that completely and can see it happening in my own field (marketing/tech writing).
Yes, theoretically AI can replace every writer and marketer. The functionality is there.
No, this isn’t actually happening, because what’s mattered all along isn’t a generic marketing skill set, but the mental effort to actually provide value. No one wants to read a blog post by an AI because it’s boring, and the writers who actually have something of value in their writing are doing just fine.
That still seems super naive, at least if it's extended to software engineering. Companies already treat software engineers like disposable cogs to be burnt out and discarded. I don't see how AI will improve that.
>Companies already treat software engineers like disposable cogs to be burnt out and discarded.
I'm sorry but this is simply false. Nothing has really changed on the ground for SWEs. Nothing at all. Compensation is still sky high. Interviews are still the same. Career progression is the same. Everything takes the same amount of time. It's all business as usual for software engineering as a profession. AI is just one more tool in the toolkit; some devs use it, some don't. No engineering org has fallen for any hype in a meaningful way. Ultimately SWE teams on the frontline are busy doing what they always do, mostly in the way they always have.
The only thing that's changed in a remarkable way is how disconnected the rhetoric on social media is from ground reality. Anyone telling you AI is even replacing 5% of SWEs in the industry is a jobless hack.
There used to be many site aggregators curated by people for different categories - kind of like sub-reddits (early Yahoo was one). At the same time, there were purely algorithmic search engines (AltaVista, Google, etc.).
The algorithmic approach won, but aggregators still exist.
The demand for authorised translators (approved in country X to correctly translate into country X’s native language from a particular language) has been steadily growing with globalisation and international trade. Contracts, court documents, financial records, medical records, birth and death certificates, technical reports: all of these must be translated and stamped by someone with the authority to do so, someone the government has confirmed to be competent.
The professional guilds are sounding the alarms, because there aren’t enough translators qualifying.
That's just factually untrue though. And there are high profile examples I would expect any well read person to know.
And we're at the point where you have to essentially choose the factually wrong point of view so there's no worth in even listing the counter examples.
I think that the friction lies somewhere in between what you’re observing and what ~safety1st addresses below.
The author has a specific issue in mind. Today the author chooses joy and refuses to evoke the woe and worries of the audience thus omitting their concerns; the audience fails to inherit the author’s optimism, likely due to some kind of asymmetry in sociopolitical outlook and status between the two parties.
HN is succumbing to the discordant trends in common discourse found elsewhere online. Demographic changes may have something to do with this.
I don’t think it’s demographic changes as much as it is fear of displacement.
Until 2 years ago, software engineering appeared to be an ideal career: strong demand for talent combined with high salaries. But with the productivity gains promised (and often achieved) with coding agents, people are understandably afraid. And people who are afraid take defensive measures: denial, anger, excessive criticism, etc. AI becomes, in some sense, “the enemy.”
I think that better explains the shift in overall tone.
I mean it's a title. Titles are under no obligation to condense the entire content of the article into one sentence. People who want to comment on the article should read it first, and then write in good faith.
The problem lies in the HN comments which have taken that title and interpreted it through the lens of unrelated political arguments: class warfare, anti-offshoring, etc. etc. I don't think any title would be immune from these people. They're just angry because the Internet has its hooks in their brain, and they're going to post about it.
His points are good and people would be wise to read the article and take them to heart. His key points are:
1) If you're a rent seeker, current trends will probably see you lose out to a bigger and more powerful rent seeker. He's probably right about that.
2) Creating more value than you consume is a great form of self-preservation, when you do this no one wants to get rid of you.
None of it's political. It's just good advice for life. I hereby forbid the masses from responding to these points with political rage bait.
HN has better moderation than a lot of places but from my vantage point the entire Internet is sinking into this garbage - we're more aware of the problem these days, at least, but everything and everywhere is more consumed by political hot takes than ever before.
If there was tech that forced commenters to read the article before they could comment on it - now THAT would be a valuable innovation!
> 1) If you're a rent seeker, current trends will probably see you lose out to a bigger and more powerful rent seeker. He's probably right about that.
> 2) Creating more value than you consume is a great form of self-preservation, when you do this no one wants to get rid of you.
> None of it's political. It's just good advice for life. I hereby forbid the masses from responding to these points with political rage bait.
They’re both tautologies. No new or useful info to glean. I didn’t need some highly intelligent security researcher to explain things that are grasped intuitively by anyone with an above-room-temp IQ.
There must surely be more to this, and given how many of his other recent blogs are a mix of political rant and screed against da haterz, I suspect it’s a lot more political on his side than you think.
> If there was tech that forced commenters to read the article before they could comment on it - now THAT would be a valuable innovation!
lol, gotta love people who whine about HN quality and then just write pointless crybaby paragraphs like this. If you can’t beat em, join em I guess.
> No one wants to read a blog post by an AI because it’s boring
Nah. Humans can be boring too. No one wants to consume AI art in any form because art isn't just about what it is, but also how it came to be. We care about art and history because those things involved humans. And we like understanding the takes of our fellow humans. We don't care about the take of a statistical model on the topics of art and creativity.
The problem is that with generative AI, I have no means of protecting my work from being stolen.
It does not matter what license I put up. It doesn't even matter if I make it publicly available or not. LLMs have been trained on pirated material; they don't even have the decency to buy a copy. Even if I show my project to no one and just have a private repo on GitHub, the code might still be used to train LLMs.
Your GPL licensed library? Yeah, we used claude to rewrite it and released it under MIT.
Now that wouldn't be so bad. One could argue copyright has long held back progress in certain areas. The problem is, the rules only apply one way. The rent seeking oligarchs of the tech industry can steal everything but I can't.
They can just eat the cost of a lawsuit, I can't. They can just decide to make a special deal with Disney to use their copyrighted material, I can't.
Sure, the days of free-market capitalism are long gone. A few monopolists controlling the market has long been the norm. But AI makes it even worse. So much worse.
AI will be the straw that breaks the camel's back on copyright, IMO. We've known since '90s Napster music piracy that copyright is broken in the information age, and it's just a flimsy set of unprincipled edicts meant to protect those with power and money.
Nowadays AI companies have more money and lawyers than most movie studios, so
I predict that there will be billion-dollar companies (some probably exist even now in stealth mode) whose business model will be to slopfork existing software - after all, AI has proven to be very capable at that.
With trillions of dollars both supporting and opposing this business model, something will probably change in some way wrt copyright, and hopefully in a way that's an improvement to the average person.
> Your GPL licensed library? Yeah, we used claude to rewrite it and released it under MIT.
Good. "You" made the world a better place at the expense of "me", the rent-seeking GPL author. I can go suck it if nobody wants my equivalent product with a worse license anymore. How do people not get this? If AI enables somebody to reproduce somebody else's hard labor much more cheaply, then it should do so instead of holding everyone else to ransom just because some self-entitled programmer wants to prevent competition when he's falling behind the market.
If you're a programmer, you have no moral right to complain about that because it's also the whole point of computer software. To do things more cheaply - ie to make somebody else's work worthless.
You will get laid off but you won't if you create value and you will create value if you don't care about money and it will go recursive but it won't go recursive because there are limits. It won't change anything but it will change everything.
No one knows what he is actually saying (see comments) but at least he managed to compress the entire discourse on AI impact into a blog post.
I liked how it read. Not as a perfectly thought out post but more an ongoing conversation.
These are confusing times for engineers as the automators can now automate themselves away at even greater speed. Reminding ourselves to play positive sum games seems relevant.
The cake is too small to divide with humans and AI. We all feel that. Time to make more cakes :)
>You will get laid off but you won't if you create value
From my experience this isn't the slightest bit true.
The professional managerial class not only sucks at identifying who creates value, they often feel threatened by those who do and try to knock them down a peg, disenfranchise them or commoditize them somehow.
You might assume that the profit incentive would override this tendency towards shredding economic value, but it doesn't, because of the principal-agent problem. The PMC always prioritizes their power within the organization over the organization's maximization of profit.
At least half of the hype about AI is about trying to gaslight developers into believing that they're now worthless so that they can be more easily exploited by the PMC.
It was the same thing back when outsourcing to Actual Indians was in fashion in the 2000s.
He deletes posts after they are no longer relevant. Given how people dig dirt up on people and take them out of context long after that context is forgotten, more people should do that (or delete social media altogether).
Some bigco jobs have felt that way to me: I don't know if I'm actually creating anything valuable, but I'm getting paid. I think the people who are most anxious right now are the ones who suspect they're not really creating anything of real-world value, and they're terrified that they're about to stop getting paid as well.
It's often way easier to capture value than to get compensated for creating it.
It's definitely indicative of an unhealthy organization or society when this happens but generally I've still found this to be the norm.
Indeed, maybe one of the reasons why free market capitalism functions is because it has a built in check (bankruptcy) against this natural human organizational tendency.
I think a large part of why software devs were so well compensated in the last decade was because we were helping build the systems which made the capture of value more efficient (whether from taxi drivers, smbs, property rentals or whatever), not because we were facilitating its creation.
Maybe in the first 10 years of your career; after that you totally have the skills needed to create value from nothing - something no value-extracting actor will ever be able to learn.
It might take a while, but the milk surely becomes butter. His point is valid; maybe your POV is a bit clouded because his baseline is quite high (fame, money), but it's not that different at a lower baseline. You bring 1.x to the world that fights over a deemed finite set with 0.x tools.
Who creates value in the art market? Is it the artist who creates the work? Or the dealer who persuades the buyers that the work has value? As a builder I’m attracted to the fantasy that I can create value with my bare hands just by writing code (or telling the AI to write the code), without needing any of those horrible slimy people in suits to build a business around it. Rock n roll man. If you build it, they will come. Is that the reality though? Or just survival bias based on the fact that a few geeks got lucky during the original dotcom boom when they had no competition from actual businessmen?
So you think that the engineers who maintain and write the FOSS that runs most of the world's IT infrastructure (Linux, curl, Git, etc.) do it for the returns?
They don't, and as a result most don't get much if any.
For them to survive, they have to have got returns from somewhere - maybe welfare, inheritance, a day job. Someone has to have worried about the returns so they can be free from thinking about it.
And if you don't worry about returns, you will let someone extract them ruthlessly from you - you'll contribute millions in value to a company that gives you nothing back. That may be fine with you at some level, but many of the people you allow to exploit you use the resources they gain as leverage to further their selfish ends, like a certain richest man in the world who helped a certain politician buy an election in the most powerful country in the world.
No, that's exactly the parent's point. The premise of the title can be read as "just create value, don't worry about monetizing, things will work out (financially)". Which is invalidated by FOSS.
> If you don't worry about the returns, you won't get any.
He was focusing on value, not returns.
That being said, his take is still a dumb take - if you focus on creating value you may not capture any of that value for yourself. If you don't capture that value, someone else certainly will.
The age of creating value for the public good is well and truly over - any value you create for the public good in the form of intellectual output is immediately captured by profit-maximising companies for training your replacement.
It's not just a case of having your value captured by someone else, the AI corps are actually taking your captured value and then using it against you.
Well yeah, business has literally always extracted value from open source software, that’s one of the main benefits of it… (although license violations have been unprecedented with AI)
“Creating value” in open source has never been about capturing value at all, it’s always been about volunteering and giving back, and recognising the unfathomable amount of open-source software that runs the modern world we live in
“Capturing value” is the opposite of this - walled gardens, proprietary APIs, vendor lock-in, closed-source code… it’s almost antithetical to the idea of open source
> The age of creating value for the public good is well and truly over
It's not a zero sum game. Someone putting my open-source contributions (for example) in their dataset isn't subtracting value from me, or the rest of society.
I just do my job to the best of my ability. If I can help a colleague I do. I don't expect to get explicitly credited for everything I do.
If my employer can't see or don't care about the value I bring, I simply go to one who values me higher. I refuse to participate in office politics and that kind of BS.
Haha that's not what the post (or the post it links to) says. Every CS student should know there's no free lunch in search and optimization. There's tradeoffs between random search, evolutionary algorithms, and convex optimization. https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_op...
There's an AI "smell" to things that are generated. Why is that? Mode collapse is impossible to see from a small number of samples. Are we mode collapsing society? How would we know if we were?
Also, "will computers surpass humans" has such an implicit bias in it. Have humans surpassed ants? Have ants surpassed rocks? Have jet planes surpassed Teletubbies?
> Every CS student should know there's no free lunch in search and optimization
The no free lunch theorem is so absurdly limited by its constraints that it's IMO a tautology, fundamentally irrelevant outside of exceptionally tiny areas. You can't have one search algorithm that's better than others on average when searching entirely random things with no structure? 1. Yes, obviously - nice to have a formulation, but it's not exactly a surprise. And 2. that's not what we deal with in the real world.
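To make that concrete, here's a toy exhaustive check (my own sketch, not from either article): on a tiny domain, a fixed probing strategy and an "adaptive" one that branches on what it observes achieve exactly the same average best-found value once you average over every possible objective function - which is all the theorem really claims.

```python
import itertools

DOMAIN_SIZE = 4   # search space: candidate points 0..3
VALUES = range(3) # each point scores 0, 1, or 2

def fixed_order(f):
    """Non-adaptive strategy: always probe points 0 and 1."""
    return max(f[0], f[1])

def adaptive(f):
    """Adaptive strategy: probe point 0, then branch on what we saw.
    Never revisits a point, as the theorem requires."""
    nxt = 3 if f[0] >= 1 else 1
    return max(f[0], f[nxt])

# Average the best value found over EVERY possible function on this
# domain -- the uniform average the NFL theorem talks about.
all_functions = list(itertools.product(VALUES, repeat=DOMAIN_SIZE))
avg_fixed = sum(fixed_order(f) for f in all_functions) / len(all_functions)
avg_adaptive = sum(adaptive(f) for f in all_functions) / len(all_functions)

# The two averages are identical: with no structure to exploit,
# no probing strategy beats any other on average.
assert avg_fixed == avg_adaptive
```

Real-world objectives have structure (smoothness, locality, convexity), which is exactly why the equality above stops mattering the moment you leave the "all functions weighted equally" setting.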
It's easy to create value for others and not worry about returns when you have enough money to not worry.
Unfortunately for most people, there's plenty of companies willing to take the returns and leave you paycheck to paycheck. That's literally what they are optimized to do.
I don't even disagree with the ideal, but I think a prerequisite step to this philosophy is UBI.
Geohot is a smart dude. But here I think he misses the forest for the trees.
He has a point, certainly. But while he is harping on the U part of UBI, he's completely ignoring the B part. UBI is meant to provide some basic income so people don't starve. It's just an optimization of welfare programs, where today you have a ton of bureaucracy, make people jump through endless hoops, and cause them endless amounts of stress (which is known to make people work less, not more) - replacing all that by just giving every citizen the same amount.
Yes, that's a bit stupid for the people first paying taxes and then getting them right back again minus overhead costs, but if you think about it: that's what happens now too, only less efficiently (in the Netherlands, that is), so you pay even more overhead.
On top of that comes the other realization: if the current trend of automating everything continues, we'll ultimately end up with (hyperbole) one person owning all the machines doing all the work. That one person earns all the money and (in an ideal case) pays his taxes to give everybody else welfare. Which is just the same as UBI.
In a certain way this already happens now. Most not-too-smart people who used to be gainfully employed as laborers somewhere are now on welfare, and the threshold for "not too smart" could go up rather steeply with the current AI trends.
> What do you plan to buy with your free government dollars? Want to buy eggs? Sorry, the egg people stopped making eggs, they are living free on UBI. Want to buy a house? Who built it? Nobody, because they all were getting UBI and didn’t want to build houses anymore. They write poems now. There’s still old houses available, but the price for them has 20xed, well outside of what you can afford.
In my country the people who produce and sell eggs do it almost for free; they could do something else and earn much more money, but they choose to do eggs. My theory is that people choose to do stuff not just because of the money. Narrowing all interests down to just money doesn't capture the complex reality. When you take money out of the equation, you let people choose what to do based on their real ambitions and aspirations, removing the alien interest (money) that skews the world so much that even geohot got confused.
Apparently if we, the poorer ones, win the war of attrition, the problematic ones that own everything will resign to golf. Or something. Getting financial planning from a lottery winner.
I think it’s a bad idea for about the same reasons, but that’s assuming we’re implementing it right now in the current economy. If automation means that in the future there’s not much for all these people to do that creates value then it makes sense.
Which of course ignores the obvious point that UBI is all about taking existing resource redistribution and making it less costly and more efficient. Practically all Western countries redistribute income on a massive scale (compared to the default outcomes of a completely free market capitalism) in order to ensure everyone can provide for their basic needs, and that could all be gradually replaced by UBI.
This is broadly in line with OP's suggested ethic "create value for others, don't play zero sum games" since capitalism is based on rewarding those who create the most value, whereas zero-sum games are largely political in nature.
Not necessarily UBI; one just needs an adequate day job. Then the hobby could be creating value with no expectation of any direct return: writing a blog, writing and giving away music, writing open-source software, doing any volunteer work, etc.
There's something more than just an adequate day job (which is perhaps necessary in more ways than just "get the money get the cheddar") - because we can find pages and pages of examples of "well paid" (doctor, lawyer, tech) people who are drowning in debt, living paycheck to paycheck, and perpetually unhappy.
I think there’s a strong bias towards hacking and cool side projects from the hackernews crowd. But I’m not so sure much of the general population would use their free time afforded by UBI for productive and useful endeavors. At least from my observations there’s a significant portion of the population that uses their free time to be idle and veg in front of the TV and/or get wasted. My concern with UBI, even if it were financially tenable, is that it would underwrite a whole lot of that - including for the more criminal, antisocial sub-population.
Wouldn't convincing the criminal part of the population to just stay home be a net win? Policing and prisons are both notably more expensive than welfare.
> when you have enough money to not worry. Unfortunately for most people ... paycheck to paycheck
There is some truth to this argument, but the frequency with which it's brought out as an excuse to just dismiss any argument one doesn't like is too high in North America.
Simply bashing every argument with, "but some people are in a bad situation" doesn't really further discussion all that much.
Did you RTA? The author is predicting that those employees (at least in software dev) will get laid off; so they should get out and find some way to create real value (or make some other change) for their own sake, because they’re about to lose even “paycheck to paycheck”. You should debate this instead, because if true, it makes your point irrelevant.
As long as the global population is still rising, there will be carnage from competition. The author and many others might foresee the (near) future where the global population starts declining; maybe then we can do things just because we can.
No matter how much resources a society has, natural selection pushes everyone to keep trying hard to get more, as those that don't end up without resources.
In a society, the fastest way to get resources is to provide something in exchange to other members of the society. The most common thing we have to exchange for resources is work.
From those two things we can see that no matter what society you have or how wealthy it is, people will work as much as they can, or else they get behind in the rat race.
Not just if you already have enough money, but it's easy to say if you're as smart as Geohot. For those who aren't, (I'm not), creating that kind of value isn't just hard, it's impossible!
Indeed, what's worse is the expectation created by rich people that whatever little value you did create should be given away for free! I see it frequently on HN with product launches, where people demand the product be open source with a liberal license - which effectively means it should be free.
money is a judgement of value to society and a motivator to allocate work only in useful ways... wouldn't UBI, even if coupled to actually producing _something_, lead to a lot of useless stuff being made?
> wouldn't UBI, even if coupled to actually producing _something_, lead to a lot of useless stuff being made?
The general premise of a UBI is that it's unconditional.
If you tried to say someone is required to produce something without specifying what it is, they'll produce whatever is the easiest thing to produce, which will naturally be useless if they otherwise wouldn't have produced anything because the only reason they're doing it is to satisfy the demand of someone not imposing any specific requirements on the output.
But if it's actually unconditional then the things produced would only be the things someone wants to produce, i.e. the things worth their time to produce when they're not actually required to spend their time producing it. Those things would tend to be useful because at least the author found them to be and there's a decent chance they're not unique in the world. If you e.g. make an app just because you want to use it yourself, maybe someone else wants to use it too.
That would be great if true, but it doesn't really hold in reality, especially for intellectual products. Compare even Linus Torvalds' fortune with, e.g., Snapchat's founder's. Not even talking about thousands of zero-profit open source projects with millions of installations versus some SaaS hustler - usually the former provide much more value to society than some guy who is just good at selling stuff.
UBI might fuel some useless work, but it also might provide a way to people to be more into creative side of things rather than selling and marketing rat race.
Also, in less developed countries money corresponds to value even less. There is almost always some kind of mafia and corruption that extracts huge portions of value from the economy - basically net negative, though profitable.
I'd like to live in a world where money is always allocated fairly, but we see that in IT, for example, predatory behavior, stealing data, and spying on people bring in more money than honest work does, due to misaligned incentives - bad actors pay more than the actual consumer.
We as a society would profit from not categorizing everything in terms of its usefulness. Things can and should be allowed to just be.
That being said, UBI would probably result in more useful things, not fewer. There are so many cases of jobs and things that seem to be just busywork or outright scams. There are also a lot of things that only appear useful if you never take the time to think about them. A plastic straw that will pollute the environment for thousands of years just so I can have a drink for two minutes? That is useless.
Every street in every city being lined by cars that don't move for 95% of the time? That is useless and insane. Imagine what marvelous machines we could have built instead.
Also, I find the online discussion around UBI to be quite weird. I don't think anyone serious is advocating for it to be particularly high. In my opinion, UBI should cover your necessities plus some so you can participate in society.
This gives everyone the opportunity to take it slow or focus on personal projects without fear. Luxuries cannot, and should not, be affordable on UBI alone. That leaves ample opportunity for people to still care about and want to work.
Humans always will. It is in our nature. But not letting people become homeless or starve to death might enable those of us who don't want to do what our overlords deem useful to do the things our society so desperately needs.
I don't need some poor fool to cook my burger for me. I'd rather take turns with my friends that now have free time.
"Don't worry about money" is something a lot of companies do. They can just try to create value first, then look for profits later (albeit often through "enshittification").
This bias towards creating value makes them more moral than mere mortals, creating huge amounts of innovation and surplus value.
> You have a right to perform your prescribed duty, but you are not entitled to the fruits of action. Never consider yourself the cause of the results of your activities, and never be attached to not doing your duty.
I have a hard time interpreting that as what geohot is saying. If anything it seems to promote rent seekers by telling you: stick to your lane and don't complain. I.e., the caste system.
> If anything it seems to promote rent seekers by telling you - stick to your lane and don't complain. I.e. the caste system
I was wondering if that would come up and HN delivers without fail. Anyway, you are free to interpret it as you see fit.
The guidance was for someone who was struggling with a moral dilemma about facing relatives in war and was undecided over action. It is not a diktat to work or provide unquestioning labor.
For anyone who understood the whole story and backdrop of the situation, a reasonable interpretation is
- you are responsible for your actions but you cannot control the consequences of your actions due to many factors.
- When you detach yourselves from results, you can do your job without anxiety.
- do not let the fear over results be an excuse for inaction.
If you are not convinced, give it a read and decide for yourselves. Even without the teachings part, the whole story of the Gita is actually an epic story/novel with some strong and conflicted characters with elaborate backstories.
> You have a right to perform your prescribed duty, but you are not entitled to the fruits of action.
> stick to your lane and don't complain. I.e. the caste system
That verse is quite famous and the general interpretation as I understand is this.
You have control on your actions but not on its results. The results depend not only on your actions but on many other factors outside of your control.
Now, one can interpret that it is instruction to "stay in your lane", but I have not seen that interpretation so far in my life in India.
You need to understand the context. The quote in Gita was to motivate the best warrior of the time at the battlefront facing opponents who were mainly his cousins and uncles.
In that context the quote is about performing the duties you were born to do without overthinking the consequences.
It’s a tricky philosophy to put into practice. I have oscillated between this approach (“owning” the effort) and “owning” the outcome. I have found that taking ownership of the outcome leads to better results because I have a personal stake in the outcome and I tend to think through the problem more deeply, but I am almost always left feeling more stressed and “empty” when the work is finished. When I focus on doing the best I can and let go of the outcome, the end result is almost always subpar which leaves me feeling frustrated, because I know it could have been better had I taken on more responsibility.
Even if your goal is to go out and create value for others, your contribution is proportional to what everyone else can offer. If others with AI will deliver that value cheaper, or if what I am good at can be easily automated, it's getting harder and harder to deliver more value than I consume.
Only if you're stuck in the comparison trap. The point isn't to compete about who can offer more value - the point is simply to offer more value (or create more value) than you consume. That's it.
What others do is actually irrelevant to the argument.
If what you are good at can be easily automated... be curious, grow, and get good at other things you can provide more value in. These are usually adjacent to what you're already good at.
Also, the timeline isn't 'the next few years' or 'the past', but 'your entire life.'
> What others do is actually irrelevant to the argument.
If I used to provide some value X in a day, and that was enough to cover my consumption for the day, but now others are providing the same value X in 5 minutes, it will not be enough to cover my consumption for the day anymore
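A back-of-the-envelope sketch of that arithmetic (every number here is a hypothetical placeholder of mine, not the commenter's): if competitors deliver the same value X in 5 minutes, competition pushes X's price toward the cheapest producer's cost, and a full day of work may no longer cover a day of consumption.

```python
# All figures below are invented for illustration only.
my_hours = 8.0             # my time to deliver value X
their_hours = 5 / 60       # a competitor's time with AI
old_price = 400.0          # what X used to fetch for a day's work
daily_consumption = 200.0  # what I need to earn per day

# assume the price falls roughly in proportion to the
# cheapest producer's time cost
new_price = old_price * (their_hours / my_hours)

print(new_price)                       # ~4.17
print(new_price >= daily_consumption)  # False: a day's work no longer covers a day
```

This is only a sketch of the commenter's reasoning, not a market model; in practice prices don't track the fastest producer's time so directly.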
Is it? If "others with AI" deliver what you consume, it should also make it easier to deliver more than you consume because what you consume becomes cheaper.
Maybe a part of the anxiety is the realization that much of what was delivered by well-paid people before AI is actually not something the very same people want to consume?
>> If others with AI will deliver that value cheaper...
That's the most interesting thing: in 99.9% of cases, they don't.
All their value is negated by lowering code-base quality and pushing slop to prod. ("But code reviews..." don't help, sorry: unless you spent a long time getting to understand a problem, simply reading a solution gives only false confidence that you understood it; you didn't, not fully.) E.g., see all the outages at Amazon, Cloudflare, etc.
Quick short-term wins lead to big longer-term losses, and this is already happening.
The issue is that it's basically impossible to make decision makers see this, as it requires many years of expertise in tech, it is far from obvious, and it sounds like you just don't want AI to replace you, etc.
Selling AI, meanwhile, is easy: "Look! It did this feature in 5 minutes! So much productivity."
The world very much is a Red Queen Race if your country has a program to import Indian tech workers. The only way to leave the Red Queen race in such countries is to abandon your career field and work in food service or retail instead.
Why not become a skilled craftsman and make a much better wage than in food service or retail? Every western country I've ever been in seems to have everlasting shortages of skilled plumbers, electricians and welders and prices have risen to match.
The skilled trades are better than unskilled work. Better than retail or food service, but they certainly aren’t a replacement for white collar jobs.
The median salaries for skilled trades aren’t great. You can make good money if you are willing to work a ton of overtime, or if you can manage to get one of the very limited union spots in the right city. Or if you become a business owner (and accept the corresponding risk) and mostly manage other skilled employees.
It's also not a viable solution for more than a small percentage of the population. Let's say AI comes along and forces 25% of white-collar workers out of a job; there is only enough room in the skilled trades to handle a tiny fraction of those displaced workers.
That’s ignoring what massive unemployment does to salaries in the trades. And the fact that to make decent money in the trades you need years of working for peanuts first. And if you think age discrimination is a problem in tech, try breaking into the trades as a gray beard. The entry level jobs are built on the assumption that you are 20 years old and can do 12 hours of hard physical labor without needing a week off to recover.
Again it’s not impossible, it’s just not a solution at any kind of scale.
Putting aside the obvious distinction of labor vs office work, are you sure about this? Like, sure as in you have tried to find a stable, well paying job in any of those trades?
The un-politically correct answer is that many college educated people perceive those working in the trades to be socially beneath them. And they often have different opinions on social issues than the typical tradesperson, which is apparently really important if you’re a plumber but not if you work at BigCo.
If you are creating value but still get canned, then you're just dealing with irrational management, and that predates AI.
But employers think it is, and are falling for the hype and are affecting engineers left and right regardless.
Whether I'm wrong or they're wrong is immaterial.
They're still replacing SWEs with AI.
There used to be many site aggregators curated by people for different categories - kind of like sub-reddits. At the same time, there were purely algorithmic search engines (yahoo, google, etc.).
The algorithmic approach won, but aggregators still exist.
The professional guilds are sounding the alarm, because not enough translators are qualifying.
And we're at the point where you essentially have to choose the factually wrong point of view, so there's no point in even listing the counterexamples.
That’s also close to the YC motto: “make something people want”.
Or as Paul Graham puts it: be good. https://paulgraham.com/good.html
The author has a specific issue in mind. Today the author chooses joy and refuses to evoke the woe and worries of the audience thus omitting their concerns; the audience fails to inherit the author’s optimism, likely due to some kind of asymmetry in sociopolitical outlook and status between the two parties.
HN is succumbing to the discordant trends in common discourse found elsewhere online. Demographic changes may have something to do with this.
Until 2 years ago, software engineering appeared to be an ideal career: strong demand for talent combined with high salaries. But with the productivity gains promised (and often achieved) with coding agents, people are understandably afraid. And people who are afraid take defensive measures: denial, anger, excessive criticism, etc. AI becomes, in some sense, “the enemy.”
I think that better explains the shift in overall tone.
The problem lies in the HN comments which have taken that title and interpreted it through the lens of unrelated political arguments: class warfare, anti-offshoring, etc. etc. I don't think any title would be immune from these people. They're just angry because the Internet has its hooks in their brain, and they're going to post about it.
His points are good and people would be wise to read the article and take them to heart. His key points are:
1) If you're a rent seeker, current trends will probably see you lose out to a bigger and more powerful rent seeker. He's probably right about that.
2) Creating more value than you consume is a great form of self-preservation, when you do this no one wants to get rid of you.
None of it's political. It's just good advice for life. I hereby forbid the masses from responding to these points with political rage bait.
HN has better moderation than a lot of places, but from my vantage point the entire Internet is sinking into this garbage. We're more aware of the problem these days, at least, but everything, everywhere, is more consumed by political hot takes than ever before.
If there was tech that forced commenters to read the article before they could comment on it - now THAT would be a valuable innovation!
Deleted Comment
Deleted Comment
> 2) Creating more value than you consume is a great form of self-preservation, when you do this no one wants to get rid of you.
> None of it's political. It's just good advice for life. I hereby forbid the masses from responding to these points with political rage bait.
They're both tautologies, with no new or useful info to glean. I didn't need some highly intelligent security researcher to explain things that can be intuited by anyone with an above-room-temp IQ.
There must surely be more to this, given that many of his other recent blog posts are a mix of political rants and screeds against da haterz. I suspect it's a lot more political on his side than you think.
> If there was tech that forced commenters to read the article before they could comment on it - now THAT would be a valuable innovation!
lol, gotta love people who whine about HN quality and then just write pointless crybaby paragraphs like this. If you can’t beat em, join em I guess.
Nah. Humans can be boring too. No one wants to consume AI art in any form because art isn't just about what it is, but also how it came to be. We care about art and history because those things involved humans. And we like understanding the takes of our fellow humans. We don't care about the take of a statistical model on the topics of art and creativity.
It does not matter what license I put up. It doesn't even matter if I make it publicly available or not. LLMs have been trained on pirated material, they don't even have the decency to buy a copy. Even if I show my project to no one and just have a private repo on Github the code might still be used to train LLMs.
Your GPL licensed library? Yeah, we used claude to rewrite it and released it under MIT.
Now that wouldn't be so bad. One could argue copyright has long held back progress in certain areas. The problem is, the rules only apply one way. The rent seeking oligarchs of the tech industry can steal everything but I can't.
They can just eat the cost of a lawsuit, I can't. They can just decide to make a special deal with Disney to use their copyrighted material, I can't.
Sure, the days of free-market capitalism are long gone; a few monopolists controlling the market has long been the norm. But AI makes it even worse. So much worse.
Nowadays AI companies have more money and lawyers than most movie studios, so
I predict that there will be billion-dollar companies (some probably exist even now in stealth mode) whose business model will be to slopfork existing software; after all, AI has proven to be very capable at that.
With trillions of dollars both supporting and opposing this business model, something will probably change in some way with respect to copyright, and hopefully in a way that's an improvement for the average person.
Delete your github repos and operate your own gitolite instance. Feed vibecode to GitHub so the LLMs coprophagically train on their own slop.
Good. "You" made the world a better place at the expense of "me", the rent-seeking GPL author. I can go suck it if nobody wants my equivalent product with a worse license anymore. How do people not get this? If AI enables somebody to reproduce somebody else's hard labor much more cheaply, then it should do so instead of holding everyone else to ransom just because some self-entitled programmer wants to prevent competition when he's falling behind the market.
If you're a programmer, you have no moral right to complain about that because it's also the whole point of computer software. To do things more cheaply - ie to make somebody else's work worthless.
No one knows what he is actually saying (see comments) but at least he managed to compress the entire discourse on AI impact into a blog post.
These are confusing times for engineers as the automators can now automate themselves away at even greater speed. Reminding ourselves to play positive sum games seems relevant.
The cake is too small to divide with humans and AI. We all feel that. Time to make more cakes :)
From my experience this isn't in the slightest bit true.
The professional managerial class not only sucks at identifying who creates value, they often feel threatened by those who do and try to knock them down a peg, disenfranchise them, or commoditize them somehow.
You might assume that the profit incentive would override this tendency towards shredding economic value, but it doesn't, because of the principal-agent problem. The PMC always prioritizes their power within the organization over the organization's maximization of profit.
At least half of the hype about AI is about trying to gaslight developers into believing that they're now worthless so that they can be more easily exploited by the PMC.
It was the same thing back when outsourcing to Actual Indians was in fashion in the 2000s.
Deleted Comment
Dead Comment
Do we really think these billions in debt generation are anything else?
Is this guy just paying bots to upvote and promote his stuff?
https://en.wikipedia.org/wiki/George_Hotz
If you don't worry about the returns, you won't get any.
There are circumstances where that is fine. Be sure you're in one of them first.
1. create value, then
2. capture some of that created value.
Some people want to skip step 1.
Some bigco jobs have felt that way to me: I don't know if I'm actually creating anything valuable, but I'm getting paid. I think the people who are most anxious right now are the ones who suspect they're not really creating anything of real-world value, and they're terrified that they're about to stop getting paid as well.
It's definitely indicative of an unhealthy organization or society when this happens but generally I've still found this to be the norm.
Indeed, maybe one of the reasons why free market capitalism functions is because it has a built in check (bankruptcy) against this natural human organizational tendency.
I think a large part of why software devs were so well compensated in the last decade was because we were helping build the systems which made the capture of value more efficient (whether from taxi drivers, smbs, property rentals or whatever), not because we were facilitating its creation.
Geohot seems to be telling people to do the opposite. Maximise value and don't consider returns.
Is it hyperbolic? Yes. Is it a perfectly acceptable opinion to have and post on your own blog? Also yes.
I think sometimes we all get caught up online in the "I don't agree with them entirely, get him!!" mentality.
Might take a while, but the milk surely becomes butter. His point is valid; maybe your POV is a bit clouded because his baseline is quite high (fame, money), but it's not that different at a lower baseline. You bring 1.x to a world that fights over a supposedly finite set with 0.x tools.
For them to survive, they have to have got returns from somewhere - maybe welfare, inheritance, a day job. Someone has to have worried about the returns so they can be free from thinking about it.
And if you don't worry about returns, you will let someone extract them ruthlessly from you, such that you contribute millions in value to a company that gives you nothing back. This may be fine to you at some level, but many of the people you allow to exploit you use the resources they gain as leverage to further their selfish ends, like a certain richest man in the world who helped a certain politician buy an election in the most powerful country in the world.
He was focusing on value, not returns.
That being said, his take is still a dumb take - if you focus on creating value you may not capture any of that value for yourself. If you don't capture that value, someone else certainly will.
The age of creating value for the public good is well and truly over - any value you create for the public good in the form of intellectual output is immediately captured by profit-maximising companies for training your replacement.
It's not just a case of having your value captured by someone else, the AI corps are actually taking your captured value and then using it against you.
“Creating value” in open source has never been about capturing value at all, it’s always been about volunteering and giving back, and recognising the unfathomable amount of open-source software that runs the modern world we live in
“Capturing value” is the opposite of this: walled gardens, proprietary APIs, vendor lock-in, closed-source code… it’s almost antithetical to the idea of open source.
It's not a zero sum game. Someone putting my open-source contributions (for example) in their dataset isn't subtracting value from me, or the rest of society.
If my employer can't see or doesn't care about the value I bring, I simply go to one who values me more. I refuse to participate in office politics and that kind of BS.
I don't remember ever learning a theorem stating that computers cannot surpass humans.
There's an AI "smell" to things that are generated. Why is that? Mode collapse is impossible to see from a small number of samples. Are we mode collapsing society? How would we know if we were?
Also, "will computers surpass humans" has such an implicit bias in it. Have humans surpassed ants? Have ants surpassed rocks? Have jet planes surpassed teletubbies?
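The mode-collapse worry raised above can be made concrete with a toy simulation (my own construction, not from the thread): repeatedly refit a distribution to only its own most-typical samples, mimicking a generator that prefers high-probability outputs over diverse ones, and watch the spread collapse generation after generation.

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0
sigmas = [sigma]
for generation in range(5):
    samples = [random.gauss(mu, sigma) for _ in range(10_000)]
    # keep only the most "typical" half of the samples,
    # i.e. those closest to the current mean
    samples.sort(key=lambda x: abs(x - mu))
    kept = samples[: len(samples) // 2]
    mu = statistics.mean(kept)
    sigma = statistics.stdev(kept)
    sigmas.append(sigma)

print([round(s, 3) for s in sigmas])  # spread shrinks every generation
```

This is a caricature of training on one's own filtered output, not a claim about how any real model is trained, but it illustrates why collapse is hard to see from a handful of samples: each individual sample still looks perfectly plausible.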
The no free lunch theorem is so absurdly limited by its constraints that it's IMO a tautology, fundamentally irrelevant outside of exceptionally tiny areas. You can't have one search algorithm that's better than the others on average when searching entirely random things with no structure? 1. Yes, obviously. It's nice to have a formulation, but it's not exactly a surprise. 2. That's not what we deal with in the real world.
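For reference, the result being dismissed here is Wolpert and Macready's no-free-lunch theorem for search; in their notation (sketched from memory, so treat the details with care), for any two algorithms $a_1$ and $a_2$ and any number of evaluations $m$, performance summed over all possible objective functions $f$ is identical:

```latex
\sum_{f} P(d^{y}_{m} \mid f, m, a_1) \;=\; \sum_{f} P(d^{y}_{m} \mid f, m, a_2)
```

where $d^{y}_{m}$ is the sequence of $m$ cost values the algorithm has observed. The sum over all $f$ is exactly the "entirely random things with no structure" caveat the comment objects to: real-world problems occupy a tiny, highly structured subset of that function space.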
Have ____ surpassed teletubbies?
Can always be answered in the affirmative.
Unfortunately for most people, there's plenty of companies willing to take the returns and leave you paycheck to paycheck. That's literally what they are optimized to do.
I don't even disagree with the ideal, but I think a prerequisite step to this philosophy is UBI.
from the same author
He has a point, certainly. But while he is harping on the U part of UBI, he's completely ignoring the B part. UBI is meant to provide some basic income so people don't starve. It's just an optimization of welfare programs, which have a ton of bureaucracy, make people jump through endless hoops, and cause them endless stress (which is known to make people work less, not more), replacing all that by just giving every citizen the same amount.
Yes, that's a bit stupid for the people first paying taxes and then getting them right back again minus overhead costs, but if you think about it, that's what happens now too, only less efficiently (in the Netherlands, that is), so you pay even more overhead.
On top of that comes the other realization: if the current trend of automating everything continues, we'll ultimately end up with (hyperbole) one person owning all the machines doing all the work. That one person earns all the money and (in an ideal case) pays his taxes to give everybody else welfare, which is just the same as UBI.
In a certain way this already happens now. Most not-too-smart people that used to be gainfully employed as laborers somewhere are now on welfare, and the threshold for "not too smart" could go up rather steeply with the current AI trends.
In my country the people producing and selling eggs do it almost for free; they could do something else and earn much more money, but they choose to keep at it. My theory is that people choose what to do not just because of the money. Narrowing all interests down to money doesn't capture the complex reality. When you take the money out of the equation, you let people choose what to do based on their real ambitions and aspirations, removing the alien interest (money) that skews the world so much that even geohotz got confused.
Apparently if we, the poorer ones, win the war of attrition, the problematic ones that own everything will resign themselves to golf. Or something. It's like getting financial planning advice from a lottery winner.
This is broadly in line with OP's suggested ethic "create value for others, don't play zero sum games" since capitalism is based on rewarding those who create the most value, whereas zero-sum games are largely political in nature.
Deleted Comment
There is some truth to this argument, but the frequency with which it's brought out as an excuse to dismiss any argument one doesn't like is too high in North America.
Simply bashing every argument with, "but some people are in a bad situation" doesn't really further discussion all that much.
No matter how much resources a society has, natural selection pushes everyone to keep trying hard to get more, as those that don't end up without resources.
In a society, the fastest way to get resources is to provide something in exchange to other members of the society. The most common thing we have to exchange for resources is work.
From those two things we can see that no matter what society you have or how wealthy it is, people will work as much as they can, or else they get behind in the rat race.
Except for those who can afford not to worry about money, of course.
The general premise of a UBI is that it's unconditional.
If you told someone they are required to produce something without specifying what it is, they'll produce whatever is easiest, which will naturally be useless: if they otherwise wouldn't have produced anything, the only reason they're doing it is to satisfy the demand of someone who isn't imposing any specific requirements on the output.
But if it's actually unconditional then the things produced would only be the things someone wants to produce, i.e. the things worth their time to produce when they're not actually required to spend their time producing it. Those things would tend to be useful because at least the author found them to be and there's a decent chance they're not unique in the world. If you e.g. make an app just because you want to use it yourself, maybe someone else wants to use it too.
UBI might fuel some useless work, but it also might give people a way to be more on the creative side of things rather than in the selling and marketing rat race.
Also, in less developed countries money corresponds even less to value. There is almost always some kind of mafia and corruption that extracts huge portions of value from the economy: basically net negative, though profitable.
I'd like to live in a world where money is always allocated fairly, but we see that in IT, for example, predatory behavior, stealing data, and spying on people bring in more money than honest work, due to misaligned incentives where bad actors pay more than the actual consumer.
It is easy to find examples of money not being a judgment of value in practice: think about theft or extortion, for example, or pushing drugs.
Also, I find the online discussion around UBI to be quite weird. I don't think anyone serious is advocating for it to be particularly high. In my opinion, UBI should cover your necessities plus some, so you can participate in society. This gives everyone the opportunity to take it slow or focus on personal projects without fear. Everything luxurious cannot, and should not, be affordable with UBI. That will leave ample opportunity for people to still care about and want to work.
Humans will always work; it is in our nature. But not letting people become homeless or starve to death might enable those of us who don't want to do what our overlords deem useful to do the things our society so desperately needs. I don't need some poor fool to cook my burger for me. I'd rather take turns with my friends who now have free time.
This bias towards creating value makes them more moral than mere mortals, creating huge amounts of innovation and surplus value.
https://vedabase.io/en/library/bg/2/47/
I have a hard time interpreting that as what geohotz is saying. If anything, it seems to promote rent seeking by telling you to stick to your lane and not complain, i.e. the caste system.
I was wondering if that would come up and HN delivers without fail. Anyway, you are free to interpret it as you see fit.
The guidance was for someone struggling with a moral dilemma about facing relatives in war, undecided over action. It is not a diktat to work or provide unquestioning labor.
For anyone who understands the whole story and the backdrop of the situation, a reasonable interpretation is:
- You are responsible for your actions, but you cannot control the consequences of those actions, due to many factors.
- When you detach yourself from results, you can do your job without anxiety.
- Do not let fear over the results be an excuse for inaction.
I completely agree with you and the post you are replying to. Both are correct.
> Maybe a part of the anxiety is the realization that much of what was delivered by well-paid people before AI is actually not something the very same people want to consume?
They're just producing what I produce, i.e. software.
Deleted Comment