TechSquidTV · 2 months ago
Personal belief, but robots coming for your jobs is not a valid argument against robots. If robots can do a job better and/or faster, they should be the ones doing the jobs. Specialization is how we got to the future.

So the problem isn't robots, it's the structure of how we humans rely on jobs for income. I don't necessarily feel like it's the AI company's problem to fix either.

This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

didibus · 2 months ago
> Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

-- Frank Herbert, Dune

The "government" is just the set of people who hold power over others.

Those who will own machines that can match humans will have unprecedented power over others. In effect they'll grow more and more to be the government.

Even now, companies hold more and more of the power over others and are more part of the government than ever before.

So it confuses me when you say it's what the government is for? Who would that be? If we pretend it would still be a democracy then I guess you're saying it's everyone's problem?

So here we are, let's discuss the solution and vote for it?

jimnotgym · 2 months ago
Voting for it has become really difficult in countries with First Past The Post voting systems, where the only parties that could win are comprehensively captured by the elite
bittercynic · 2 months ago
>The "government" is just the set of people who hold power over others.

Often, yes, but in a more functional society it would be the mechanism we collectively use to prevent a few people from amassing excessive wealth and power.

kldg · 2 months ago
a democracy would be neat; we have a representative democracy here, so I can only vote for one of two candidates with a plausible chance of being elected, neither of who(m, if you must) have a coherent policy on AI or general disbursement of product, and even if they did, would be unable to convince the existing power structures in legislature to do something bold. probably better for mental health to accept a lot of these things (progression of AI or regulation of it, healthcare, etc) as variables in the environment we just live with, and focus on local/personal things.

we do actually have real democracy in this state, where we have binding referendums, but the legislature is able to act faster than we're allowed to, to work around and nullify the policy we vote for. -so voting is fine; nothing wrong with it; but I guess I just worry, oftentimes, that people get too involved in it and attached to movements which can accomplish something one day only for it to be reversed by the end of the decade. feels like the two sides are getting angrier and angrier, spinning their wheels in dysfunctional politics, and we can't have a functional government in this environment; one side guts government to replace it with loyalists, then the other guts it again in a few years to replace those partisans with their own. meanwhile, national debt just keeps climbing as people swarm into gold.

my compost piles, though -- not directly, but I can eat that; I can feed people with that. you know, you want to solve hunger -- you can contribute directly to food pantries. it's more work than voting, but something actually happens. -and almost all the regulation government cares about relates to capitalism; they don't care about my carrots because my carrots don't engage in capitalism. -and for some people in some circumstances, it doesn't take too much engagement with capitalism to be able to get $100k or whatever you need for a plot of land with electricity in a rural area if you plan out for it.

Herbert, as an aside, expressed a kind of appreciation for Nixon. His son mentioned this in a foreword to some edition of a Dune book I read. He was glad the corruption was exposed and so blatant, because now, surely, voters would see how bad things became and would not let it happen again. Optimistic guy.

rpcope1 · 2 months ago
Butlerian Jihad fucking when? I'm ready.
JimDabell · 2 months ago
> robots coming for your jobs is not a valid argument against robots.

Taking work away from people is practically the definition of technology. We invent things to reduce the effort needed to do things. Eliminating work is a good thing, that’s why inventing things is so popular!

What ends up happening is the amount of work remains relatively constant, meaning we get more done for the same amount of effort performed by the same amount of people doing the same amount of jobs. That’s why standards of living have been rising for the past few millennia instead of everybody being out of work. We took work away from humans with technology, we then used that effort saved to get more done.

lamename · 2 months ago
I agree with most everything you said. The problem has always been the short-term job loss, particularly today, when society as a whole has the resources for safety nets but hasn't implemented them.

Anger at companies that hold power in multiple places, and use it to prevent remedies and worsen this situation for people, is valid anger.

_DeadFred_ · 2 months ago
Past performance is not indicative of future results.

There is zero indication that there will be new jobs, new work. Just because there was lots of low-hanging fruit historically does not mean we will stumble into some new job creators now. Waving away concerns with "jobs have always magically appeared when needed" is nonsense and a non-response to their concerns.

ori_b · 2 months ago
> What ends up happening is the amount of work remains relatively constant

That's a pretty hard bet against AGI becoming general. If the promises of many technologists come to pass, humans remaining in charge of any work (including which work should be done) would be a waste of resources.

Hopefully the AGI will remember to leave food in the cat bowls.

insane_dreamer · 2 months ago
> What ends up happening is the amount of work remains relatively constant,

The entire promise of AI is to negate that statement. So if AI is truly successful, then that will no longer be true.

tuatoru · 2 months ago
This time actually is different, though.

If everything that a human can do, a robot can do better and cheaper, then humans are completely shut out of the production function. Humans have a minimum level of consumption that they need to stay alive whether or not they earn a wage; robots do not.

Since most humans live off wages which they get from work, they are then shut out of life. The only humans left alive are those who fund their consumption from capital rents.

account42 · 2 months ago
Hi, let me be the first to welcome you into our millennium. Things have changed significantly since the 70s you seem to be used to: https://anticap.wordpress.com/wp-content/uploads/2010/11/fig...

Dead Comment

Loughla · 2 months ago
The problem is that AI and advanced robotics (and matter synthesis and all that future stuff) must come with a post scarcity mindset. Maybe that mindset needs to happen before.

Either way, without that social pattern, I'm afraid all this does is enshrine a type of futuristic serfdom that is completely insurmountable.

citizenpaul · 2 months ago
> post scarcity mindset

A total shift of human mentality. Humans have shown time and again there is only one way we ever get there. A long winding road paved with bodies.

sambull · 2 months ago
I think it's going to come with eradicating the 'wastrel' mindset.
insane_dreamer · 2 months ago
We're already in a post-scarcity world. The only reason there is still scarcity is entirely because it is created by humans to acquire more wealth.

You think that's going to change just because many more people find themselves without?

Dead Comment

reactordev · 2 months ago
What you end up with is a dozen people owning all the wealth and everyone else owning nothing, resulting in the robots not doing anything because no one is buying anything, resulting in a complete collapse of the economic system the world uses to operate. Mass riots, hunger wars, political upheaval, world war 3. Nuke the opposition before they nuke you.
skybrian · 2 months ago
That’s one scenario, but there are others. There are lots of open-weight models. Why wouldn’t ownership of AI end up being widely distributed? Maybe it's more like solar panels than nuclear power plants?
rstuart4133 · 2 months ago
> What you end up with is a dozen people owning all the wealth and everyone else owning nothing,

That may be where the USA ends up. We Australians (and probably a few others, like the Swiss) have gone to some effort to ensure we don't end up there: https://www.abc.net.au/listen/programs/boyerlectures/1058675...

throwaway-0001 · 2 months ago
Robots will do stuff for the rich people's ecosystem.

As for the rest, you know what's going to happen.

cloverich · 2 months ago
That is the system we have today, directionally. AI is an opportunity to accelerate it, but it is also an opportunity to do the opposite.

Deleted Comment

ipaddr · 2 months ago
Then government comes in and takes over. In the end we will end up with communism. Communism couldn't compete with the free market but in a world of six companies it can.
Ray20 · 2 months ago
> What you end up with is a dozen people owning all the wealth and everyone else owning nothing

Only if the socialists win. Capitalism operates on a completely different principle: people CREATE wealth and own everything they have created. Therefore, AI cannot reduce their wealth in any way, because AI does not impair people's ability to create wealth.

Dead Comment

strogonoff · 2 months ago
“Robots coming for your jobs” is a valid argument against robots even if they can do those jobs better and faster, under two assumptions: 1) humans benefit from having jobs and 2) human benefit is the end goal.

Both are fairly uncontroversial: many humans not only benefit from jobs but in fact often depend on jobs for their livelihoods, and (2) should be self-evident.

This can change if the socioeconomic system is restructured quickly and substantially enough to make humans not depend on being compensated for work that is now being done by robots (not only financially but also psychologically, in feeling fulfilled, socially, etc.), but I don’t see that happening.

jhbadger · 2 months ago
This is valid only so far as "human benefit" is localized to the human doing the job. I'm a cancer researcher. Obviously, my job is of value to me because it pays my bills (and yes, I do get satisfaction from it in other ways). But if an AI can do cancer research better than me, then the human benefit (to every human except perhaps me) favors the AI over me.

But a lot of jobs aren't like that. I doubt many people who work in, say, public relations, really think their job has value other than paying their bills. They can't take solace in the fact that the AI can write press releases deflecting the blame for the massive oil spill that their former employer caused.

thunky · 2 months ago
> 1) humans benefit from having jobs and 2) human benefit is the end goal.

There's no law of nature saying that a human must work 40 hours per week or starve.

The current dependence on work is a consequence, not a goal.

harryf · 2 months ago
It’s worth (re)watching the 1985 movie Brazil, in particular the character of Harry Tuttle, heating engineer: https://youtu.be/VRfoIyx8KfU

Neither governments nor corporations are going to “save us”, simply because of sheer short-termism and incompetence. But the same incompetence will make the coming dystopia ridiculous.

exe34 · 2 months ago
I do wonder if somewhere like China might be better off - they might not have muh freedumb, but their government seems keen to look after the majority and fund things that corporations wouldn't.
leobg · 2 months ago
Is it just income that’s the issue? I’d rather say it’s purpose. Even more: What will happen to democracy in a world where 100% of the population are 27/7 consumers?
IHLayman · 2 months ago
“What will happen to democracy in a world where 100% of the population are 27/7 consumers?”

…we’ll add three hours to our day?

But seriously, I support what you are saying. This is why the entire consumer system needs to change, because in a world with no jobs it is by definition unsustainable.

Gepsens · 2 months ago
Smaller cities, human size, humans closer to nature, robots bring stuff from factories by driving. Done
creer · 2 months ago
> What will happen to democracy in a world where 100% of the population are 27/7 consumers?

What does the one have to do with the other?

But even then, plenty of people currently find their fun in creating, when it's not their job. And they struggle with finding the time for that, and sometimes the materials, training, and machines as well. Meanwhile a majority of current jobs involve zero personal creativity or making. Driving, staffing a retail outlet, or even most cooking jobs can't really be what you are looking for in your argument?

Is the road to post-scarcity more likely with or without robots?

wizardforhire · 2 months ago
Gonna just play a little mad libs here with your argument…

Personal belief, but AI coming for your children is not a valid argument against AI. If AI can do a job better and/or faster, they should be the ones doing the parenting. Specialization is how we got to the future. So the problem isn't AI, it's the structure of how we humans rely on parenting for their children. I don't necessarily feel like it's the AI company's problem to fix either. This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

You're right about one thing, within reason… this is what a rational government should be for… if the government were by the people and for the people.

Addendum for emphasis: …and if that government followed the very laws it purports to protect and enforce…

marnett · 2 months ago
The artist behind replacement.ai chose a very relevant first use case — everyone thinks of AI replacement in terms of labor, but the example in terms of parenting and child rearing, which is arguably the only true reason for humans to exist, is genius.

Procreation and progeny are our only true purpose, and one could argue that AI would make better parents and teachers. Should we all surrender our sole purpose in the name of efficiency?

yoyohello13 · 2 months ago
I also agree with this, but I think there is a need to slow the replacement, by a bit, to reduce the short term societal harm and allow society to catch up. Robots can’t do the jobs if society collapses due to unrest.

Progress is great obviously, but progress as fast as possible with no care about the consequences is more motivated by money, not the common good.

Deleted Comment

iwontberude · 2 months ago
What you mean you don’t want to take a Great Leap Forward?
notthemessiah · 2 months ago
The problem is that the AI companies are most interested in displacing the existing labor force, more so than in developing new uses of AI in areas that humans are inherently bad at. They are more interested in replacing good jobs with AI than bad jobs. It's not that machines are doing the jobs better; they are just doing them cheaper and by cutting more corners.

Best summarized in this comic: https://x.com/9mmballpoint/status/1658163045502267428

philjackson · 2 months ago
> This is what government is for

They're busy selling watches whilst people can still afford them thanks to having jobs.

arrosenberg · 2 months ago
Kind of hard for the government to “prepare society to move forward” when the AI companies and their financiers lobby for conditions that worsen the ability of society to do so.
vlovich123 · 2 months ago
My reading of history is that human society is able to adjust to small changes like this over long periods of time, as young people choose alternate paths based on what changes are likely on the horizon. Rapid changes frequently lead to destabilization of adults who are unwilling or unable to retrain, which then also screws up their next generation, who start off on an economic back foot and with a worldview of despair and decrepitude.

Not the AI company’s fault per se, but the US government generally does a very poor job of creating a safety net, whether through intention, ineptitude, or indifference.

By the way, attacks were also leveled against Chinese and Japanese workers in California, who were viewed as stealing the jobs of other “Americans”. So this viewpoint and tradition of behavior and capitalism has a very long history in the US.

zer00eyz · 2 months ago
> Personal belief, but robots coming for your jobs is not a valid argument against robots.

Replace the word robot with "automation" or "industrialization" and you have the last 200 years of human history covered.

The Luddites could have won, and we would all have $1,500 shirts.

Do you know any lamp lighters? How about a town crier?

We could still all be farming.

Where are all the switch board operators? Where are all the draftsmen?

How many people had programming jobs in 1900? 1950?

We have an amazing ability to "make work for ourselves", and history indicates that we're going to keep doing that regardless of how automated we make society. We also keep traditional "arts" alive... Recording didn't replace live performances, TV/film didn't replace Broadway... Photography didn't replace painting...

Deleted Comment

jaccola · 2 months ago
The way I think about this is either the job is done in the most efficient way possible or I am asking everyone else to pay more for that product/service (sometimes a worse product/service) just so I can have a job.

E.g. if I was a truck driver and autonomous trucks came along that were 2/3rds the price and reduced truck related deaths by 99% obviously I couldn't, in good faith, argue that the rest of the population should pay more and have higher risk of death even to save my job and thousands of others. Though somehow this is a serious argument in many quarters (and accounts for lots of government spending).

code4life · 2 months ago
This is what minimum wage deals with to some extent. Governments decide what jobs may exist based on how much a company can pay to do the job.
ipaddr · 2 months ago
What about replacing a screenplay writer or actor? Are only dirty jobs acceptable to replace?
grafmax · 2 months ago
Problem is, billionaires have co-opted our government. Their interest is in channeling money from the working class into their hands through rentier capitalism. That is contrary to broadly distributing income.

Rent extraction hurts them in the long run. Because working-class income gets absorbed by various forms of rent, workers are more expensive to employ. Thus we fail to compete with, say, China, which socializes many costs and invests in productive industry. We are left with a top-heavy society that, as we can see, is already starting to crumble.

rafaelmn · 2 months ago
The government and all social structures developed because your labour has value, and division of labour/specialisation is so effective that it outperforms the alternatives. Cooperation beats violence, iterated prisoner's dilemma, etc.

None of this holds if you don't have anything of value to offer, and automation is concentrating power and value; AI is the extreme end of this. At some point the charade of democracy becomes too annoying to the ones at the top, and you get there faster by trying to rein them in.

wahnfrieden · 2 months ago
It's also what organized labor is for. Workers can't wait on government to offer aid without leverage. We would not have weekends off or other protections we now take for granted if we waited on government to govern for us as if it was a caring parent.

So that would mean it is in fact the responsibility of the people at robot/AI companies (and across industries). It's not something we can just delegate to role-based authorities to sort out on our behalf.

Dead Comment

zzgo · 2 months ago
> This is what government is for

In my home country, the people building the robots and job destroying AI have captured all three branches of government, and have been saying for over 40 years that they'd like to shrink government down to a size that they could drown it in a bathtub. The government can't be relied upon to do more than move its military into our cities to violently stifle dissent.

dotancohen · 2 months ago

> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

Many will argue that the purpose of government is not to steer or prepare society, but rather to reflect the values of society. Traditionally, the body that steered (prepared or failed to prepare) society for impending changes was religion.

nwatson · 2 months ago
Enter the Dominionists, gaining steam now. Not a regime I want to live under. Here's a forty year old article describing the inception of those religious figures close to the current USA administration ... https://banner.org.uk/res/kt1.html
beeflet · 2 months ago
Firstly, they are not coming for my job, they're coming for all jobs.

Secondly, you assume in the first place that we can somehow build a stable post-scarcity society in which people with no leverage can control the super-intelligent agents with all of the power. The idea that "government will just fix it" is totally ignorant of what the government is or how it emerges. In the long run, you cannot have a ruling class that is removed from the keys to power.

Lastly, who says we should all support this future? What if I disagree with the AI revolution and its consequences?

It is kind of amazing how your path of reasoning is so dangerously misoriented and wrong. This is what happens when people grow up watching Star Trek: they just assume that once we live in a post-scarcity future everything will be perfect, and that this is the natural endpoint for humanity.

derektank · 2 months ago
>Firstly, they are not coming for my job, they're coming for all jobs.

They're not coming for all jobs. There are many jobs that exist today that could be replaced by automation but haven't been, because people will pay a premium for the work to be done by a human. There are a lot of artisan products out there which are technically inferior to manufactured goods, but people still buy them. Separately, there are many jobs which are entirely about physical and social engagement with a flesh-and-blood human being: sex work is the most obvious, but live performances (how else has Broadway survived in an era of mass adoption of film and television?), and personal care work like home health aides, nannies, and doulas are all at least partially about providing an emotional connection on top of the actual physical labor.

And there's also a question of things that can literally only be done by human beings, because by definition they can only be done by human beings. I imagine in the future, many people will be paid full time to be part of scientific studies that can't easily be done today, such as extended, large cohort diet and exercise studies of people in metabolic chambers.

birktj · 2 months ago
I strongly disagree and I am having trouble understanding what kind of world you envision, what will it look like?

The problem as I see it is not robots coming for my job and taking away my ability to earn a salary. That can be solved by societal structures like you are saying, even though I am somewhat pessimistic of our ability to do so in our current political climate.

The problem I see is robots coming for my mind and taking away any stakes and my ability to do anything that matters. If the robot is an expert in all fields why would you bother to learn anything? The fact that it takes time and energy to learn new skills and knowledge is what makes the world interesting. And this is exactly what happened before when machines took over a lot of human labour, luckily there were still plenty of things they couldn't do and thus ways to keep the world interesting. But if the machines start to think for us, what then is left for us to do?

jmpeax · 2 months ago
> If the robot is an expert in all fields why would you bother to learn anything?

Robots have been better at chess than humans for a quarter of a century. Yet chess is still a delightful intellectual and social pursuit.

overfeed · 2 months ago
Where would governments find the money to expand safety nets by 2-3 orders of magnitude, while losing most income tax inflows?
dragonwriter · 2 months ago
> Where would governments find the money to expand safety nets by 2-3 orders of magnitude, while losing most income tax inflows?

Well, it would start by not tax-favoring the (capital) income that remains, which would have to have grown massively relative to the overall economy for that scenario to occur.

(In fact, it could start by doing that now, and the resulting tax burden shift would reduce the artificial tax incentive to shift from labor intensive to capital intensive production methods, which would, among other things, buy more time to deal with the broader transition if it is actually going to happen.)

risyachka · 2 months ago
You need income to:
- buy a house
- get food
- buy clothes
- get medical care
- buy nice things

If robots are so advanced that they can do most jobs, the cost of goods will be close to zero.

Government will produce and distribute most of the things above, and you mostly won't need any money; but if you want extra, to travel etc., there will always be a bunch of work to do, and not 8 hours per day.

Deleted Comment

ipaddr · 2 months ago
Wealth tax.
lwhi · 2 months ago
Government no longer has the power or authority to constrain private enterprise, especially in highly technical sectors.
StevePerkins · 2 months ago
Of course it does. Do you think the elites actually WANT massive tariffs putting a brake on GDP growth? Why are tech companies suddenly reversing course on content moderation and/or DEI, after years of pushing in the opposite directions?

Private enterprise will always have some level of corrupting influence over government. And perhaps it sees current leadership as the lesser of two evils in the grand scheme. But make no mistake, government DOES ultimately have the power, when it chooses to assert itself and use it. It's just a matter of political will, which waxes and wanes.

Going back a century, did the British aristocracy WANT to be virtually taxed out of existence, and confined to the historical dustbin of "Downton Abbey"?

xpe · 2 months ago
Of course government has the authority to represent the people; if not it, then who or what does?
jayd16 · 2 months ago
If wealth inequality were greatly reduced, we wouldn't have to worry about a lot of these topics nearly as much.
1dom · 2 months ago
> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

What if the issue isn't government failing to prepare society to move forward, but instead, AI businesses moving us in a direction that more and more people don't consider to be forward?

ivape · 2 months ago
> it's the structure of how we humans rely on jobs for income

1. We don’t need everyone in society to be involved in trade.

2. We made it so that if you do not take part in trade (trade labor for income), you cannot live.

3. Thus, people will fear losing their ability to trade in society.

The question is, when did we make this shift? It used to just be slavery, and you would be able to survive so long as you slaved.

The fear is coming from something odd, the reality that you won’t have to trade anymore to live. Our society has convinced us you won’t have any value otherwise.

brap · 2 months ago
>We made it so that if you do not take part in trade (trade labor for income), you cannot live.

We did not make it so, this has been the natural state for as long as humans have existed, and in fact, it’s been this way for every other life form on Earth.

Maybe with post-scarcity (if it ever happens) there could be other ways of living. We can dream. But let’s not pretend that “life requires effort” is some sort of temporary unnatural abomination made by capitalists. It’s really just a fact.

xpe · 2 months ago
> This is what government is for, and not to stifle innovation

We should compare how anti-government politicians talk versus how trained, educated neoclassical economists talk. The latter readily recognize that a valid function of government is to steer, shape, and yes, regulate markets to some degree. This is why we don’t have (for example) legal, open markets for murder.

Markets do not define human values; they are a coordination mechanism given a diverse set of values.

Deleted Comment

thayne · 2 months ago
I have very little faith in the government to fix that problem.

And even if the government did institute something like universal basic income, if all jobs were replaced, that would almost certainly mean a lower standard of living for the middle class, and even less socioeconomic mobility than there is now.

exe34 · 2 months ago
> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

The AI will belong to the parasite class, who will capture all the profits. But you can't tax them on this, because they can afford to buy the government. So there isn't really a way to fund food and shelter for the population without taking something from the billionaires. Their plans for the future do not include us [0].

[0] https://www.theguardian.com/news/2022/sep/04/super-rich-prep...

81222222 · 2 months ago
On a few occasions as a teenager, I slipped into an unpleasant k-hole. The one thing that remained constant every time was that I felt like we were living in the apocalypse. Even though each of these experiences was precipitated by the dose taken, I somehow ended up with a 24-hour news channel on in the background, and while I sat there motionless, watching images of air defense missiles launching on some grainy video from Kosovo, I just felt like the world was coming to an end and that all the doom was beginning to take place. It's weird, because of the k I didn't feel panicky about the doom; I was super calm and collected about it. But nevertheless, it was still an hour or two of impending doom that was very unpleasant. I guess my point is that we need to restrict ketamine use based on income. If someone has enough money, they have far more than enough resources to deal with whatever medical issue justifies their k prescription using a different treatment. Allowing these people to have access to ketamine is the root cause of this problem.
lukev · 2 months ago
Agreed 100%, except that this is not a new development. This process has been ongoing, with automation taking over certain classes of jobs and doing them faster and better since the industrial revolution.

And capitalism has flourished during this time. There's no reason to believe even more automation is going to change that, on its own.

Sure, Musk and Altman can make noises and talk about the need for UBI "in the future" all they want, but their political actions clearly show which side they're actually on.

frogperson · 2 months ago
Robots might be the future, but humans are more important and should be prioritized first. Let's make sure humans' needs are met, and then you can play with your robot toys, ok?
insane_dreamer · 2 months ago
> So the problem isn't robots, it's the structure of how we humans rely on jobs for income.

Humans have depended on their own labor for income since we stopped being hunters and gatherers or living in small tribes.

So it's not just a matter of "the gov will find a way", but it's basically destroying the way humanity as a whole has operated for the past 5000 years.

So yes, it's a huge problem. Everything done under the banner of "innovation" isn't necessarily a good thing. Slavery was pretty "innovative" as well, for those who were the slave owners.

kif · 2 months ago
In my opinion there is a problem when said robot relies on piracy to learn how to do stuff.

If you are going to use my work without permission to build such a robot, then said robot shouldn’t exist.

On the other hand a jack of all trades robot is very different from all the advancements we have had so far. If the robot can do anything, in the best case scenario we have billions of people with lots of free time. And that doesn’t seem like a great thing to me. Doubt that’s ever gonna happen, but still.

account42 · 2 months ago
The problem with that is that having all the robots also makes it easy to control the government.
everforward · 2 months ago
> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

This requires faith that the government will actually step in to do something, which many people lack (at least in the US, can't speak for globally).

That's the sticking point for many of the people I've talked to about it. Some are diametrically opposed to AI, but most think there's a realistic chance AI takes jobs away and an unrealistic chance the government opposes the whims of capital causing people displaced from their jobs to dip into poverty.

I can't say I have a good counter-argument either. At least in the US, the government has largely sided with capital for my entire life. I wouldn't take a bet that government does the kind of wealth redistribution required if AI really takes off, and I would eat my hat if it happens in a timely manner that doesn't require an absolute crisis of ruined lives before something happens.

See the accumulation of wealth at the top income brackets while the middle and lower classes get left behind.

TLDR this is more of a crisis of faith in the government than opposition to AI taking over crap jobs that people don't want anyways.

EasyMark · 2 months ago
But the only government that could "fairly" run such a system would be a rational AI with a limited set of rules meant to maximize human wellness. Humans will always fail at such a government, as we have seen in the past. Greed always overwhelms good intentions in the long run.
ssss11 · 2 months ago
But we all know what will happen, it’s the human condition of greed. The wealthy will control everything, governments will be no help, and all the newly unemployed will be in dire situations.

So while you’ve identified the real problem we need to identify a realistic solution.

KoolKat23 · 2 months ago
And if we know we can't fix it fast enough, is a delay acceptable?
pharos92 · 2 months ago
You won't have that opinion when it's your job, life, and freedom.
popalchemist · 2 months ago
Technology should make our lives better. Whether it's social media, AI, or nuclear power, if we introduce technology that even with its incredible benefits ends up causing harm or making life worse for many people (as all of the above have in various ways), we should reconsider. This should be self-evident. That doesn't mean we get rid of the technology, but we refine. Chernobyl didn't mean the world got rid of nuclear power. It meant we became more responsible with it.

Anyway there is a name for your kind of take. It is anti-humanist.

Deleted Comment

malloryerik · 2 months ago
Might want to read some Karl Polanyi.
franga2000 · 2 months ago
"The government" needs time to fix this and until then, we need to not automate everyone out of a job. If that means we don't "get to the future" until then, fine. "Fault" or not, the AI companies are putting people in danger now and unless we can implement a more proper solution extremely quickly, they just have to put up with being slowed down.

But it's not like "the government" (as if there is just one) simply doesn't want to fix things. There are many people who want to fix the way we distribute resources, but there are others who are working to stop them. The various millionaires behind these AI companies are part of the reason why the problem you identified exists in the first place.

brap · 2 months ago
So your government should pump the brakes, while other governments rush towards ASI. And you believe this will benefit you, long term? Or do you believe in “global cooperation”?
6r17 · 2 months ago
The problem is that we do not share the same values at all, and I do not envision this truly benefiting people, nor is it something I envy or feel OK with. You can make the best fakes and do your best to remove activity from people, but ultimately this is going to lead to a deteriorated society, increased mental health issues, and plenty of sad stories, just so a few people can be happy about it.

Ngl, if someone nuked all the USA's servers and wiped out all this bullshit, I'm not convinced the world would be in a worse state right now.

Let AI be used for scientific research, development, and helping people out. But if it's just to sit your smelly ideas down, you may even be right, but ultimately the form, intentions, and result matter more than recklessly endangering everybody.

TBH I feel like the AI discourse around human replacement smells like hard-core psychopathic behavior, or that of a drunken dude who's happy just to be driving a car.

You have zero data concerning the effect it would have on society, and I definitely prefer to live in a less technological world over a world that is full of people with psychosis.

So until we find how we can solve this bottleneck I have 0 sympathy for this kind of discourse.

luxuryballs · 2 months ago
“This is what government is for” was the most terrifying thing I’ve read all month. The only thing more starkly dystopian than relying on robots and AI for survival would be adding “the government” to the list.

The government should keep its charge as the protector and upholder of justice, I don’t want it to be those things and then also become a fiat source for economic survival, that’s a terribly destructive combination because the government doesn’t care for competition or viability, and survival is the last place you want to have all your eggs in one basket, especially when the eggs are guaranteed by force of law and the basket becomes a magnet for corruption.

jwilber · 2 months ago
Ah yes, our government where career politicians from both sides have bent the rules to create 9-10 figure fortunes.
xivzgrev · 2 months ago
It's not so black and white - it depends on the scale

We as a society get to decide what is done in our society. If robots replace a few jobs but make goods cheaper for everyone that's a net positive for society.

If robots replace EVERYONE's job, where everyone has no income anymore that's clearly a huge negative for society and it should be prevented.

completelylegit · 2 months ago
“Hey Association of Retarded Citizens, robots can do it better so just stay home I guess.”
pesfandiar · 2 months ago
It's wrong to assume the owners will share the productivity gains with everyone, especially when reliance on labour will be at its lowest, and the power structure of a data/AI economy is more concentrated than anything we've seen before. IMO, the assumption that some form of basic income or social welfare system will be funded voluntarily is as delusional as thinking communism would work.
jvanderbot · 2 months ago
It's one thing to be fired and completely replaced by a robot, never to work again. It's another to have your industry change and embrace automation, but to remain in it with higher productivity and a new role. You might not initially like the new industry or role, but...

That's noble. The first is dystopian.

romellem · 2 months ago
“Guns don’t kill people, etc…”
sharts · 2 months ago
Replace robots with immigrants and it’s the same fear mongering as usual.
xpe · 2 months ago
There is much more to concerns about AI than fear mongering. Reasonable people can disagree on predictions about probabilities of future events, but they should not discount reasonable arguments.
subjectivationx · 2 months ago
This is not a personal belief this is a regurgitation of the most standard neoliberal orthodoxy.
techblueberry · 2 months ago
We just keep moving forward, no one responsible, no one accountable, victims of our own progress.
gnarlouse · 2 months ago
Respectfully, that’s not a very functional belief. It’s sort of the equivalent to saying “communism is how mankind should operate”, while completely ignoring why communism doesn’t work: greedy, self-preserving genetic human instincts.

The workforce gives regular folks at least some marginal stake in civilization. Governments aren't effective engines against AI. We failed to elect Andrew Yang in 2020, who was literally running on a platform of setting up a UBI tax on AI. Congress is completely corrupt and ineffectual. Trump is gutting the government.

You may be right about ai taking jobs eventually if that’s what you’re saying, but you come off pretty coldly if you’re implying it’s what “should” happen because it’s Darwinian and inevitable, and just sorta “well fuck poor people.”

NoOn3 · 2 months ago
Not only does Communism not work, Capitalism doesn't work without crises and wars either. :)
81222222 · 2 months ago
Communism was never realized, and what didn't work was the transition stage between free market capitalism and communism: state capitalism. In literally every instance of communism "not working", it was state capitalism that didn't work, because the people who benefited from state capitalism worked to keep it in place, as communism would have ended their greedy consolidation of power. This is not a flaw in communism; it is a flaw in capitalism: the inevitable hoarding of capital.
happytoexplain · 2 months ago
"Arguing against robots" is an oversimplification. They're arguing against misery and harm, and against the unprepared nature of our government and businesses to account for the impact of large jumps in automation, which, of course, also means arguing against robots. The distinction is pointless. And yes actually, it is in fact the responsibility of everybody to decline to inflict misery, whether a law has been made to control your behavior yet or not. It would be dishonest to imply there is a binary where your business can't ever use any automation based on this principle.

The over-application of objective phrases like "valid" vs "invalid" when talking about non-formal arguments is a sickness a lot of technical people tend to share. In this case, it's dismissive of harm to humans, which is the worst thing you can be dismissive about. "Please don't make me and my family miserable" is not an "invalid argument" - that's inhuman. That person isn't arguing their thesis.

"The problem". Another common oversimplifying phrase used by us thinkers, who believe there is "the answer", as if either of those two things exist as physical objects. "The problem" is that humans are harmed. Everything else just exists within that problem domain, not as "part of the problem" or "not part of the problem".

But most importantly:

Yes, you're absolutely correct (and I hate to use this word, but I'm angry): Obviously the ideal state is that robots do all the work we don't want to do and we do whatever we want and our society is structured in a way to support that. You've omitted the part where that level of social support is very hard to make physically feasible, very hard to convince people of depending on their politics, and, most importantly: It's usually only enough to spare people from death and homelessness, not from misery and unrest. Of course it would be ridiculous to outright ban for-profit use of automation, but even more ridiculous to write a bill that enforces it, e.g. by banning any form of regulation.

Short and medium term, automating technologies are good for the profit of businesses and bad for the affected humans. Long term, automating technologies are good for everybody, but only if society actually organizes that transition in a way that doesn't make those affected miserable/angry. It isn't, and I don't think it's pessimistic to say that it probably won't.

I'd love to live in Star Trek! We don't. We won't for hundreds of years if ever. Technology isn't the limiting factor, the immutable nature of human society and resources are the limiting factors. Nothing else is interesting to even talk about until we clear the bar of simply giving a shit about what actually, in concrete reality, happens to our countrymen.

gtsop · 2 months ago
> So the problem isn't robots, it's the structure of how we humans rely on jobs for income.

It's called capitalism

sythnet · 2 months ago
I agree with you
lwhi · 2 months ago
The problem is more existential.

Why are people even doing the jobs?

In a huge number of cases people have jobs that largely amount to nothing other than accumulation of wealth for people higher up.

I have a feeling that automation replacement will make this fact all the more apparent.

When people realise big truths, revolutions occur.

portaouflop · 2 months ago
Specialization is for insects.

A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly.

allturtles · 2 months ago
This is a brilliant piece of satire. "A Modest Proposal" for the AI age.

The leader bios are particularly priceless. "While working for 12 years as the Director of HR for a multinational, Faith realized that firing people gave her an almost-spiritual high. Out of the office, Faith coaches a little league softball team and looks after her sick mother - obligations she looks forward to being free of!"

bilekas · 2 months ago
> This is a brilliant piece of satire. "A Modest Proposal" for the AI age.

There's some truth in all satire though. I'm just shocked YC hasn't nuked the link from the front page.

hn_throwaway_99 · 2 months ago
> I'm just shocked YC hasn't nuked the link from the front page.

I'm not. People dump on VCs and YC all the time here and it's frequently on the front page.

overfeed · 2 months ago
Everyone is way above average here on HN, and will be thinking "The article speaks about every other idiot I work with - my genius is singular; I'm too valuable to my employer and irreplaceable. I'll be the one wrangling the AI that replaces my colleagues, while they figure out welfare"
username223 · 2 months ago
"I have been assured by a very knowing American of my acquaintance in London, that a young healthy child well nursed, is, at a year old, a most delicious nourishing and wholesome food, whether stewed, roasted, baked, or boiled; and I make no doubt that it will equally serve in a fricassee, or a ragout."

Would Sam Altman even understand the original, or would he just wander ignorantly into the kitchen and fling some salt at it (https://www.ft.com/content/b1804820-c74b-4d37-b112-1df882629...)? I'm not optimistic about our modern oligarchs.

oxag3n · 2 months ago
Why did I read that FT article start to finish?

Seems like a waste of time, but at the same time the feeling was similar to watching Hannibal Lecter in the kitchen scene.

sivartnotrab · 2 months ago
I know it's good satire when it takes me a minute to realize it's satire.
sincerely · 2 months ago
I kind of get it, but at the same time... isn't "we made a machine to do something that people used to do" basically the entire history of technology? It feels like somehow we should have figured out how to cope with the "but what about the old jobs" problem.
darthoctopus · 2 months ago
that is the point of Luddism! the original Luddite movement was not ipso facto opposed to progress, but rather to the societal harm caused by society-scale economic obsolescence. the entire history of technology is also powerful business interests smearing this movement as being intrinsically anti-progress, rather than directly addressing these concerns…
Kiro · 2 months ago
I think we should be careful attributing too much idealism to it. The Luddites were not a unified movement and people had much more urgent concerns than thinking about technological progress from a sociocentric perspective. Considering the time period with the Napoleonic Wars as backdrop I don't think anyone can blame them for simply being angry and wanting to smash the machines that made them lose their job.
orourke · 2 months ago
I think the concern in this case is that, unlike before where machines were built for other people to use, we’re now building machines that may be able to use themselves.
ebcase · 2 months ago
Glad to see the Luddites getting a shout out here.

This is a new / recent book about the Luddite movement and its similarities to the direction we are headed due to LLMs:

https://www.littlebrown.com/titles/brian-merchant/blood-in-t...

Enjoyed the book and learned a lot from it!

johnwheeler · 2 months ago
There’s a difference between something and everything though
scotty79 · 2 months ago
Somehow modern Luddite messaging doesn't communicate that clearly either. Instead of "where's my fair share of AI benefits?" we hear "AI is evil, pls don't replace us".
CamperBob2 · 2 months ago
Would we be better off today if the Luddites had prevailed?

No?

Well, what's different this time?

Oh, wait, maybe they did prevail after all. I own my means of production, even though I'm by no means a powerful, filthy-rich capitalist or industrialist. So thanks, Ned -- I guess it all worked out for the best!

merth · 2 months ago
We invent machines to free ourselves from labour, yet we’ve built an economy where freedom from labour means losing your livelihood.
fainpul · 2 months ago
> We invent machines to free ourselves from labour

That's a very romantic view.

The development, production and use of machines to replace labour is driven by employers to produce more efficiently, to gain an edge and make more money.

brainwad · 2 months ago
Average hours worked has been more or less monotonically decreasing since the start of the industrial revolution, so in the long run we are slowly freeing ourselves. But in the short run, people keep working because a) machines are usually complementary to labour (there are still coal miners today, they are just way more productive) and b) even if some jobs are completely eliminated by machines (ice making, for example), that only "solves" that narrow field. The ice farmers could (and did) reenter the labour market and find something else to do.
beeflet · 2 months ago
No other such economy has ever existed. "He who does not work, neither shall he eat"
Ray20 · 2 months ago
Because we invent machines not to free ourselves from labor (inventing machines is a huge amount of labor by itself), but to overcome the greed of the workers.
Tepix · 2 months ago
„We“? A few billionaires do. They won‘t free themselves from labour, they will „free“ you from it. Involuntarily.
_heimdall · 2 months ago
If ML is limited to replacing some tasks that humans do, yes it will be much like any past technological innovation.

If we build AGI, we don't have a past comparison for that. Technologies so far have always replaced a subset of what humans currently do, not everything at once.

scotty79 · 2 months ago
I love SF, but somehow I don't find it very good foundation for predicting the future. Especially when people focus of one, very narrow theme of SF and claim with certainty that's what's gonna happen.
yujzgzc · 2 months ago
AGI does not replace "everything". It might replace most of the work that someone can do behind a desk, but there are a lot of jobs that involve going out there and working with reality outside of the computer.
happytoexplain · 2 months ago
>we should have figured out

You would think! But it's not the type of problem Americans seem to care about. If we could address it collectively, then we wouldn't have these talking-past-each-other clashes where the harmed masses get told they're somehow idiots for caring more about keeping the life and relative happiness they worked to earn for their families than achieving the maximum adoption rate of some new thing that's good for society long term, but only really helps the executives short term. There's a line where disruption becomes misery, and most people in the clear don't appreciate how near the line is to the status quo.

AviationAtom · 2 months ago
I always compare it to the age of the industrial revolution. I have no doubt you had stubborn old people saying: "Why would I need a machine to do what I can do just fine by hand??" Those people quickly found themselves at a disadvantage to those who chose not to fight change, but to embrace it and harness technological leaps to improve their productivity and output.
happytoexplain · 2 months ago
Most people are not in a position to choose whether to embrace or reject. An individual is generally in a position to be harmed by or helped by the new thing, based on their role and the time they are alive.

Analogies are almost always an excuse to oversimplify. Just defend the thing on its own properties - not the properties of a conceptually similar thing that happened in the past.

beeflet · 2 months ago
The difference is that in the industrial revolution there was a migration from hard physical labor to cushy information work.

Now that information work is being automated, there will be nothing left!

This "embrace or die" strategy obviously doesn't work on a societal scale, it is an individual strategy.

zb3 · 2 months ago
Yes, and thanks to this we're working more and more, because most of the profit goes to the top as inequality rises. At some point it will not be possible to put up with this.
theptip · 2 months ago
> AI can do anything a human can do - but better, faster and much, much cheaper.

Should be pretty clear that this is a different proposition to the historical trend of 2% GDP growth.

Mass unemployment is pretty hard for society to cope with, and understandably causes a lot of angst.

brandensilva · 2 months ago
And that comes down to the moral and social contract we have and the power we give to digital money and who owns it.

We either let the people's creativity and knowledge be controlled and owned by a select few, OR we ensure all people benefit from humanity's creativity and own it, and the fruits that it bears advance all of humanity, with safety nets in place to ensure we are not enslaved by it but elevated to advance it.

aabhay · 2 months ago
History is full of technology doing things that go beyond human possibility as well. Think of microscopes, guns, space shuttles. There has been technology that explicitly replaces human labor but that is not at all the whole story.
array_key_first · 2 months ago
You eventually run out of jobs.

Every time we progress with new tech and eliminate jobs, the new jobs are more complicated. Eventually people can't do them because they're not smart enough or precise enough or unique enough.

Each little step, we leave people behind. Usually we don't care much. Sure some people are destined to a life of poverty, but at least most people aren't.

Eventually though even the best of the humans can't keep up, and there's just nothing left.

FloorEgg · 2 months ago
Every time it happens it's a bit different, and it was a different generation. We will figure it out. It will be fine in the end, even if things aren't fine along the way.

I'm starting to come around to the idea that electricity was the most fundamental force that drove WW1 and WW2. We point to many other more political, social and economic reasons, but whenever I do a kind of 5-whys on those reasons I keep coming back to electricity.

AI is kind of like electricity.

We're also at the end of a big economic/money cycle (petrodollar, gold standard, off the gold standard, maxing out leverage).

The other side will probably involve a new foundation for money. It might involve blockchain, but maybe not, I have no idea.

We don't need post-scarcity so much as we just need to rebalance everything and an upgraded system that maintains that balance for another cycle. I don't know what that system is or needs, but I suspect it will become more clear over the next 10-20 years. While many things will reach abundance (many already have) some won't, and we will need some way to deal with that. Ignoring it won't help.

newsclues · 2 months ago
Replacing dirty, dangerous jobs, and allowing people to upskill and work better jobs is one thing.

Firing educated workers en masse to replace them with software that isn't as good but is cheaper doesn't have the same benefits to society at large.

What is the goal of replacing humans with robots? More money for the ownership class, or freeing workers from terrible jobs so they can contribute to society in a greater way?

Ray20 · 2 months ago
> doesn’t have the same benefits to society at large.

The benefits to society will be larger. Just think about it: when you replace dirty, dangerous jobs, the workers simply have nowhere to go, and they begin to generate losses for society in one form or another, because initially they took those dirty, dangerous jobs only because they had no choice.

But when you fire educated workers en masse, society not only receives from software all the benefits it received from those workers, but all other fields also start to develop, because these educated workers take on other jobs: jobs that have never been filled by educated workers before, jobs that are understaffed because they are too dirty or too dangerous.

This will be a huge boost even for areas not directly affected by AI.

itsnowandnever · 2 months ago
> isn't "we made a machine to do something that people used to do" basically the entire history of technology?

Kinda, I guess. But what has everyone on edge these days is that humans have always used technology to build things: to build civilization and infrastructure so that life was progressing in some way. At least in the US, people stopped building and advancing civilization decades ago. Most sewage and transportation infrastructure is 70+ years old. Telecom infrastructure boomed for a bit decades ago, then abruptly halted. So the "joke" is that technology these days is in no way "for the benefit of all" the way it typically was throughout human history (with obvious exceptions).

overgard · 2 months ago
"we made a machine to do everything so nobody does anything" is a lot different though
Keyframe · 2 months ago
isn't "we made a machine to do something that people used to do" basically the entire history of technology?

Yes, until we reached the art and thinking part. A big part of the problem might be that AI reached that part before it reached the chores.

Ray20 · 2 months ago
Hasn't every such technological development been accompanied by opponents of its implementation?

At least now things aren't so bad, and today's Luddites aren't trashing the offices of AI companies or hanging their employees and executives from nearby poles and trees.

no_wizard · 2 months ago
The vast majority of the movement was peaceful. There is one verified instance where a mill owner was killed and it was condemned by leaders of the movement. It was not a violent movement at its core.

Second, the movement was certainly attacked first. It was mill owners who petitioned the government to use "all force necessary" against the Luddites, and the government, acting on their behalf, killed and maimed people engaged in peaceful demonstrations before anyone associated with the Luddite movement reacted violently. And again, even in the face of violence, the Luddite movement was at its core nonviolent.

blibble · 2 months ago
they haven't started... yet

billions of unemployed people aren't going to just sit in poverty and watch as Sam Altman and Elon become multi-trillionaires

(why do you think they are building the bunkers?)

only-one1701 · 2 months ago
Can you imagine? Ha ha. Wow that would be crazy. Damn. I’m imagining it right now! Honestly it’s hard to stop imagining.
collinmanderson · 2 months ago
> isn't "we made a machine to do something that people used to do" basically the entire history of technology?

I know, right? Machines have been gradually replacing humans for centuries. Will we actually get to the point where there are not enough jobs left? It doesn't seem like we're currently anywhere close to the point of not having any jobs available.

Has anyone thought about how the Federal Reserve plays a role in this? Automation puts downward pressure on inflation, because it doesn't cost as much to make stuff. The Federal Reserve will heavily incentivize job creation if inflation is low enough and there aren't enough jobs available, right?

beeflet · 2 months ago
We're already here. Most jobs are fake.
hsavit1 · 2 months ago
I feel like technology should exist to enhance the human experience, not eliminate the human experience?
rhetocj23 · 2 months ago
Yes.
poszlem · 2 months ago
Not really, because this time it's not machine to do something that people used to do, but a machine to do anything and everything that people used to do.
bamboozled · 2 months ago
Enjoy eating a bowl of pasta?
classified · 2 months ago
> we should have figured out how to cope with the "but what about the old jobs" problem

We did figure that out. The ingenious cope we came up with is to entirely ignore said problem.

shortrounddev2 · 2 months ago
Manual labor was replaced with factory labor, and factory labor was replaced with knowledge work. If knowledge work is replaced with AI, what do we go to then? Not to mention that the efficiency gains of the modern tech industry are not even remotely distributed fairly. The logical extreme of an AI company is one where the CEO, founder, 100% owner, and sole employee coordinates some underling AIs to run the entire company for him while he collects the entire profit and shares it with no one, because the American government is an oligarchy.
gaul_bladder · 2 months ago
> what do we go to then?

You’ll waste away for a little while in some sort of slum and then eventually you’ll head to the Soylent green factory, but not for a job. After that problem solved!

qgin · 2 months ago
We’re working on all-purpose human replacements.

Imagine if the tractor made most farm workers unnecessary but when they flocked to the cities to do factory work, the tractor was already sitting there on the assembly line doing that job too.

I don’t doubt we can come up with new jobs, but the list of jobs AGI and robotics will never be able to do is really limited to ones where the value intrinsically comes from the person doing it being a human. It’s a short list tbh.

zzzeek · 2 months ago
> I kind of get it, but at the same time...isn't "we made a machine to do something that people used to do" basically the entire history of technology?

this is not about machines. machines are built for a purpose. who is "building" them for what "purpose" ?

if you look at every actual real-world human referenced on this website, they all have something in common, which is that they're billionaires.

this is a website about billionaires and their personal agendas.

intended · 2 months ago
The idea is that there will be newer jobs that come up.

The issue is that there will be no one earning money except the owners of OpenAI.

Take outsourcing - the issue in developed nations was underemployment and the hollowing out of industrial centers. You went from factory foreman to burger flipper. However, it did uplift millions out of poverty in other nations. So net-net, we employed far more and distributed wealth.

With Automation, we simply employ fewer people, and the benefits accrue to smaller groups.

And above all - these tools were built essentially by mass plagiarism. They train, even now, on the random stuff we write on HN and Reddit.

TLDR: it's not the automation, it's the wealth concentration.

brandensilva · 2 months ago
The problem isn't a failure of the mathematicians and engineers who succeeded at the task of automating humanity's mundane tasks.

It's that the people failed to elect and wield a government that ensures all humanity benefits from it and not a select few who control it all.

And I think it will become clear that the governments investing in it so that their people benefit and have ownership, rather than so that just a handful of the rich benefit, are the ones who will keep society stable while this happens.

The other path we are going down is mass unrest, a move into a police state to control the resistance (as America is doing now), and exactly what Peter Thiel, Elon Musk, and Larry Ellison want: AI-driven surveillance and an Orwellian dystopian vision that forces people to comply or be cut out of existence by deactivating their Digital IDs.

andai · 2 months ago
https://replacement.ai/complaints

At the bottom of this page, there is a form you can fill out. This website says they will contact your local representative on your behalf. (And forward you any reply.)

Here's the auto-generated message:

I am a constituent living in [state] with urgent concerns about the lack of guardrails surrounding advanced AI technologies. It is imperative that we act decisively to establish strong protections that safeguard families, communities, and our children from potential harms associated with these rapidly evolving systems.

As companies continue to release increasingly powerful AI systems without meaningful oversight, we cannot rely on them to police themselves, especially when the stakes are so high. While AI has the potential to do remarkable things, it also poses significant risks, including the manipulation of children, the development of bioweapons, the creation of deepfakes, and the threat of widespread unemployment.

I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections that may be necessary for their residents.

Thank you for your time.

[name]

New York

riazrizvi · 2 months ago
Oh, too bad. I initially shared it on LinkedIn but deleted it once I saw this. I'm all for establishing in the mind of the commons that displacing humans from the economy is inane, and for open dialogue on the subject. I'm not up for some little team trying to control things.
Atreiden · 2 months ago
What you call "some little team" others might call grassroots politics. And it's the alternative to top-down partisan politics.

Ideas are nice, and important, but there needs to be an action vector for those ideas to have practical value.

levitate · 2 months ago
To me it seems like contacting your local representative is actually pretty in line with your goal of "establishing in the mind of the commons". I'm not sure what the little team is that you think will be controlling everything.
nharada · 2 months ago
How come I never see any concrete proposals for how to equitably distribute the wealth of AI? It's always either "stop AI immediately for the sake of our labor" or "don't worry sometime in the future everyone will live in utopia probably".

Here's a starter example: any company whose main business is training AI models must give up 10% of its equity to a fund whose long-term charter is establishing basic care (food, water, electricity, whatever) for citizens.

I'm sure people will come at me with "well this will incentivize X instead!" in which case I'd like to hear if there are better thought out proposals.

otterley · 2 months ago
This is what taxation and wealth redistribution schemes are for. The problem is that Americans generally find this idea to be abhorrent, even though it would probably benefit most of the people who are against the principle. They don’t want a dime to go to people they feel are undeserving of it (“lazy” people, which is typically coded language to mean minorities and immigrants).
perlgeek · 2 months ago
In theory, we know how to do wealth redistribution, AI or no AI: tax value creation and wealth transfer, such as inheritance. Then use the money to support the poor, or even everyone.

The problem really is political systems. In most developed countries, wealth inequality has been steadily increasing, even though if you ask people if they want larger or smaller inequality, most prefer smaller. So the political systems aren't achieving what the majority wants.

It also seems to me that most elections are won on current political topics (the latest war, the latest scandal, the current state of the economy), not on long-term values such as decreasing wealth inequality.

BrenBarn · 2 months ago
The question is what is different about equitably distributing the wealth of AI vs. equitably distributing wealth in general. It seems that the main difference is that, with AI wealth specifically, there is a lot of it being generated right now at a breakneck pace (although its long-term stability is in question). Given that, I don't think it's unreasonable to propose "stop AI immediately while we figure out how to distribute wealth".

The problem is that the longer you refrain from equitably distributing wealth, the harder it becomes to do it, because the people who have benefited from their inequitably distributed wealth will use it to oppose any more equitable distribution.

ben_w · 2 months ago
> How come I never see any concrete proposals for how to equitably distribute the wealth of AI?

Probably because most proposals about how to "equitably distribute the wealth" of anything are one or both of "badly thought out" and "too complex to read".

For example of the former, I could easily say "have the government own the AI", which is great if you expect a government that owns AI to continue to care if their policies are supported by anyone living under them, not so much if you consider that a fully automated police force is able to stamp out any dissent etc.

For example of the latter, see all efforts to align any non-trivial AI to anything, literally even one thing, without someone messing up the reward function.

For your example of 10%, well, there's a dichotomy on how broad the AI is, if it's more like (it's not really boolean) a special-purpose system or if it's fully-general over all that any human can do:

• Special-purpose: that works but also you don't need it because it's just an assistant AI and "expands the pie" rather than displacing workers entirely.

• Fully-general: the AI company can relocate offshore, or off planet, do whatever it wants and raise a middle finger at you. It's got all the power and you don't.

idreyn · 2 months ago
This sounds a lot like a sovereign wealth fund. The government obtains fractional ownership of large enterprises (this can happen through market mechanisms or populist strongarming, choose your own adventure) and pours the profits from these investments into the social safety net or even citizens' dividends.

For this to work at scale domestically, the fund would need to be a double-digit percentage of the market cap of the entire US economy. It would be a pretty drastic departure from the way we do things now. There would be downsides: market distortions and fraud and capital flight.

But in my mind it would be a solution to the problem of wealth pooling up in the AI economy, and probably also a balm for the "pyramid scheme" aspect of Social Security which captures economic growth through payroll taxes (more people making more money, year on year) in a century where we expect the national population to peak and decline.

Pick your poison, I guess, but I want to see more discussion of this idea in the Overton window.
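
As a rough back-of-the-envelope sketch of what such a fund could pay out - every figure below (total market cap, the fund's share, the payout yield, the number of recipients) is a hypothetical assumption for illustration, not a sourced number:

    # Hypothetical sizing of a sovereign-wealth-fund citizens' dividend.
    # All inputs are illustrative assumptions, not sourced figures.
    total_market_cap = 60e12   # assumed total US equity market cap, in dollars
    fund_share = 0.20          # assumed fund ownership: a "double-digit percentage"
    payout_yield = 0.04        # assumed sustainable annual payout rate
    recipients = 260e6         # assumed number of adult recipients

    fund_value = total_market_cap * fund_share        # $12 trillion
    annual_payout = fund_value * payout_yield         # $480 billion per year
    dividend_per_person = annual_payout / recipients  # roughly $1,850 per year

    print(f"Fund value:      ${fund_value / 1e12:.1f}T")
    print(f"Annual payout:   ${annual_payout / 1e9:.0f}B")
    print(f"Dividend/person: ${dividend_per_person:,.0f}/yr")

Under these made-up numbers, even a 20% stake produces about $150 a month per person, which gives a sense of why the fund would need to be a very large share of the economy (or be combined with other revenue) to function as a real safety net, and why it would be such a drastic departure from the way we do things now.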

starik36 · 2 months ago
> The government obtains fractional ownership over large enterprises (this can happen through market mechanisms or populist strongarming...)

Isn't that what happened in the Soviet Union? Except it wasn't fractional. It ushered in 50 years of misery.

atleastoptimal · 2 months ago
The problem is there are many people who think AI is a big scam and has no chance of long-term profitability, so a fund would be a non-starter, or people who think AI will be so powerful that any paltry sums would pale in comparison to ASI's full dominance of the lightcone, leaving human habitability a mere afterthought.

There honestly aren't a lot of people in the middle, amazingly, and most of them work at AI companies anyway. Maybe there's something about our algorithmically manipulated psyches in the modern age that draws people towards absolutist, all-or-nothing views, incapable of practical nuance in the face of a potentially grave threat.

ArcHound · 2 months ago
Why would the AI owners want to distribute wealth equitably? They want to get rich.

What government in the foreseeable future would go after them? This would tank the US economy massively, so not US. The EU will try and regulate, but won't have enough teeth. Are we counting on China as the paragon of welfare for citizens?

I propose we let the economy crash, touch some grass and try again. Source: I am not an economist.

mondrian · 2 months ago
Bernie Sanders talks about a "robot tax" that is roughly what you're talking about. https://www.businessinsider.com/bernie-sanders-robot-tax-ai-...
IgorPartola · 2 months ago
Let's take this to its logical limit: imagine that AI gets so good that it can replace a lot of jobs en masse. The most common job in about half of US states is truck driver, and in the states where it's teacher, truck driver is number 2. Let's say we do get not only self-driving trucks but self-driving trucks loaded by robotic warehouse workers. Let's postulate that this becomes the norm, and that along with robotic truck drivers and warehouse workers we also get robotic cashiers, road work crews, and so on. I am not giving a timeline here, but let's explore what happens when we get to 30% unemployment. Then 40%, 50%, 80%.

Sure we will have the robot wrangler engineers, scientists, teachers, nurses, etc. But typically we have social unrest past like 8% unemployment. What happens when double digits of people have no jobs and all the time on their hands? Well “eat the rich” might become very literal and no amount of protection against that can really be bought. Ultimately, the only option is either a Dune-style elimination of all AI (very unlikely) or we will have to decouple “living wage income” from “job”. If you think about it, the idea that you must have a job in order to make money is more of an implementation detail. If robots produce so much value that it is actively not even viable for humans to work, the only real logical solution is to distribute the profits from the labor of the robots in a way that isn’t by hourly rate times hours worked. In fact one possible way to do this is to tax the value produced by AI and then funnel that to a universal basic income program. Everyone by default is an artist. If you want to be a nurse or teacher or scientist or engineer you can. Otherwise just produce art at your leisure while the robots work the fields and cook your meals.
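
To get a feel for the scale of that transfer, here's a rough sketch; the UBI level, the number of recipients, and the amount of automated value added are all made-up assumptions:

    # Back-of-the-envelope: taxing automated output to fund a universal basic income.
    # All inputs are hypothetical assumptions chosen only for illustration.
    ubi_per_month = 1_000          # assumed UBI per adult, in dollars
    adults = 260e6                 # assumed number of adult recipients
    automated_value_added = 10e12  # assumed annual value added by AI/robot production, dollars

    annual_ubi_cost = ubi_per_month * 12 * adults                # about $3.1 trillion per year
    required_tax_rate = annual_ubi_cost / automated_value_added  # share of machine output needed

    print(f"Annual UBI cost:   ${annual_ubi_cost / 1e12:.1f}T")
    print(f"Required tax rate: {required_tax_rate:.0%} of automated value added")

With those invented numbers, roughly a 30% levy on automated value added would cover a $1,000/month basic income. The exact figures don't matter; the point is that the mechanism is a direct transfer from machine output to people, i.e. the decoupling of income from jobs described above.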

dlt713705 · 2 months ago
Unfortunately there is a very affordable alternative solution:

1. Massive population reduction (war is a very efficient way to achieve this)

2. Birth control, to slow down population growth to a stable rate near 0

3. Eugenics, to ensure only people with needed capabilities are born (brave new world)

In this scenario, 500,000 people (fewer?) in charge of millions of robots and a minority of semi-enslaved humans would freely enjoy control over the world. The perfect mix between Asimov and Huxley.

All the agitation about "building a 1984-style world" is, at best, just a step toward this Asimov/Huxley model, and most likely, a deliberate decoy.

NaomiLehman · 2 months ago
what a disturbing comment.

and war is not a great way to reduce population at all

tomp · 2 months ago
> If robots produce so much value that it is actively not even viable for humans to work, the only real logical solution is to distribute the profits

You don't understand. Almost nobody actually thinks about this in the right way, but it's actually basic economics.

Salt.

We used to fight wars for salt, but now it's literally given away for free (in restaurants).

If "robots produce so much value" then all that value will have approximately zero marginal cost. You won't need to distribute profits, you can simply distribute food and stuff and housing because they're cheap enough to be essentially free.

But obviously, not everything will be free. Just look at the famous "cost of education vs TV chart" [1]. Things that are mostly expensive: regulated (education, medicine, law) and positional (housing / land - everyone wants to live in good places!) goods. Things that are mostly cheap: things that are mass produced in factories (economies of scale, automation). Robots might move food & clothing into the "cheap" category but otherwise won't really move the needle, unless we radically rethink regulation.

[1] https://kottke.org/19/02/cheap-tvs-and-exorbitant-education-...

zurfer · 2 months ago
Regulation might be one part of the equation, but I would like to understand more deeply how much of a cost driver it is.

Healthcare is heavily regulated in some countries and less in others. It should be possible to get some comparisons.

I somehow feel that keeping many humans healthy is a fundamentally hard problem that gets harder every year, because the population ages and expectations rise. It feels like too easy a talking point to put it all on regulation.

dsign · 2 months ago
This is very well thought-out. But if regulation has such power, shouldn't we find better ways to use it?

Yeah, I know, it's very hard to craft good legislation. In fact, there's this problem of agency: the will to have things be a certain way does not always reside in the humans, or does not always emanate from the direct needs of the people. Many of the problems of modern capitalism arise because there's emergent agency from non-human things, i.e. corporations. In the case of the US, agency emerging from the corporate world has purportedly sequestered democracy. But there is also agency emerging from frenzied political parties that define themselves in opposition to each other, with a salted no-man's-land in the middle. This emergent agency is not new; it existed before in other institutions, e.g. organized religion. In any case, the more things there are vying for power, the less power people have to govern themselves in a way that is fair.

With AI, there's a big chance we will at least super-charge non-human agency, and that's only if we manage to avoid the AIs themselves developing agency of their own.

wubrr · 2 months ago
I think it's inevitable that most jobs will be eventually automated away (even without AI), and that's going to come with major class struggle and restructuring of society.
elAhmo · 2 months ago
You underestimate how creative people are at finding things to do. Society didn't collapse when we automated stuff that in the past was done almost exclusively by humans, such as agriculture.

It won't if robots start driving trucks.

IgorPartola · 2 months ago
The difference I see is that when we automated car factories, etc. there were still loads of jobs that (a) were better done by a human AND (b) most humans could perform them. The issue is that if you eliminate all jobs that don’t require a PhD or equivalent, what happens to the people who are just not cut out to be nuclear physicists or biochemists, etc?
_DeadFred_ · 2 months ago
Past performance is not indicative of future results.

Everyone keeps saying 'no need to worry, no need for society to plan, because new jobs appeared in the past'. Like we should just put all our hope on these magic future jobs appearing. Plenty of countries exist where there aren't enough jobs. We aren't exempt from that, as if some magic job fairy is looking out for us.

_DeadFred_ · 2 months ago
They are never going to just lift everyone up. We could have done that for world hunger; we didn't. They gutted USAID over 38 billion, but sent 40 billion to Argentina because 'business'. They don't care if our living standards become the same as the third world's. Just like we didn't care all that much that the third world had really rough lives. How do you currently think about the third world? That is about as much thought/concern as we will get. We are cooked if we leave it to 'THEM', be they business or government.
IgorPartola · 2 months ago
I think the difference is scale. When for every person with a 9-5 job and a living wage you have 5-10 able bodied adults who live next door and are starving and have nothing to lose, not much can physically protect you. It becomes economically cheaper to share than to protect your assets.
HPMOR · 2 months ago
I stopped reading past the first sentence, because that is not the logical limit. The logical limit is an unparalleled super intelligence that's akin to a god. What you state as the limit in the extrema is incorrect. Therefore with a faulty premise, all consequential propositions are inherently flawed.
IgorPartola · 2 months ago
Nonsense. If we give ourselves over to a super intelligence that we ourselves cannot fathom, there is no point in trying to argue about what that will look like. An ant cannot understand how a Saturn V rocket works no matter how hard it tries or how much time it has. But you do you :)
layer8 · 2 months ago
At first I thought the Sam Altman quote was a joke, but it's actually real: https://archive.ph/gwdZ0

arisAlexis · 2 months ago
it's real and people were confused and didn't get it either. the authors of the website are some of them.
layer8 · 2 months ago
You could illuminate us and elucidate the correct understanding.
pugio · 2 months ago
Reminds me of the new book which just came out: "IF ANYONE BUT ME BUILDS IT, EVERYONE MAY AS WELL DIE: A CEO's Guide to Superhuman AI" (https://bsky.app/profile/shalevn.bsky.social/post/3m3jhso2rx...)