laweijfmvo · a year ago
The "Owners" will use it to get rich[er], at any cost, and we'll only realize too late the damage it did. Government regulation could stop it, but politicians will only use their position to get rich[er] instead.
bamboozled · a year ago
The issue with "AI" and tech companies is they have incredible ability to influence, going against these companies might mean losing an election. It's a major conflict of interest when it comes to regulation of these "knowledge companies".
Animats · a year ago
Good point, bad rant.

Right now, the two real AI issues are 1) improved surveillance in a broad sense, and 2) job elimination. Both relate to what the owners of the tech want.

The political problem is that few governments are willing to effectively regulate 1) or 2).

Lerc · a year ago
Job elimination is not a new thing. People like to bring up textile workers because of how the term Luddite has come to refer to anti-technological sentiment, but the change in jobs has been fairly consistent.

I presume that in a few years electric cars will result in significant job losses amongst those who produce internal combustion engines. Renewable energy will reduce the number of employed coal miners.

I agree that this should be a role for government to assist people in a changing world.

janice1999 · a year ago
Car factories that make ICE vehicles can be switched to produce electric vehicles. The job losses in car plants you hear about are regional and caused by plants being moved to areas with cheaper workers and weaker labour law (Mexico, for example).

Also, fewer than 44 thousand people work in the coal mining industry in the US. Renewables didn't take those jobs; resource depletion, heavy machinery, and fracking did.

TheGamerUncle · a year ago
There is a clear and obvious difference between job elimination and job transformation. The express purpose of creating textile machines was not job elimination; it was output increase. AI, meanwhile, is not presented as an output increaser, at least not in the business cases put forward.

AI is presented and designed as an employee replacer, plain and simple.

Animats · a year ago
> Job elimination is not a new thing.

No, but this time it's come for the chattering classes from the Ivy League, who are freaking out.

daniel_reetz · a year ago
Not only improved surveillance, but vastly increased surveillance, as everyone rushes to capture as much data as they possibly can for training.
mindslight · a year ago
3) Diffusion of responsibility, continuing the trend of the past few decades. For example creating ever more layers of disempowered reps and rule-executing paperwork pushers, while cloistering away anyone who has any decision authority. Next it'll be LLMs running such interference.
iamleppert · a year ago
This is the same semantic argument as "Guns don't hurt people". Of course guns hurt people. A physical object can be harmful without a human being to give it intent. It's a straw man argument that delves into the meaning of words rather than actually addressing whether something is harmful or not.

I think we can all agree that AI is itself a tool, that can be used for good or bad. However, we can certainly make a judgement call on it, as a whole, by examining the current state of how it's being used, and by looking at what kinds of human behaviors it amplifies.

Out_of_Characte · a year ago
>This is the same semantic argument as "Guns don't hurt people". Of course guns hurt people. A physical object can be harmful without a human being to give it intent. It's a straw man argument that delves into the meaning of words rather than actually addressing whether something is harmful or not.

Given the proposition that 'objects' harm people without human involvement, that does not in any way weaken the case for wanting to prevent harm. If an earthquake or tsunami hits, we do not shrug it off as an act of nature or god; we still intend to prevent the disaster even though no human was involved. The comparison to guns is an ill-fitting one. A gun that no man knows about cannot hurt someone; a person still has to wield it, and with AI, people are definitely the ones wielding the weapon. Weapons, like AI, are tools. But whereas guns have a very clear target and modus operandi, AI has no such thing. AI is by definition innocent of every crime in the book, since you cannot demonstrate it to have morality in the first place. Which is a problem if someone has used it to commit crimes. Obviously AI doesn't kill people; the people using AI want someone dead.

In the context of the article, it means we have to take responsibility for what the AI creates regardless of its intent, precisely because there might or might not be someone behind with intent.

Dig1t · a year ago
Once you invent the gun you can't really stop them proliferating though, people with the means and determination will own them. Allowing one group of people to own them and not others is just asking for the group of owners to dominate and rule over the ones who do not own them.
llamaimperative · a year ago
This is literally empirically false. Guns haven’t proliferated across the world. There are countless examples of locales that decided to and succeeded in not having guns proliferate in their region.
ninininino · a year ago
Do you feel the same way about nuclear weapons? How about engineered viruses and pathogens?

This argument will fall apart when school shooters gain access to deadlier weapons.

There are (at least, partial) solutions, we just don't like them and they are really really hard:

1) totalitarianism and top-down technology control (and knowledge proliferation control)

2) scientific (probably through genetic engineering) or cultural attempts to alter the human species to be more non-violent / more peaceful / abhor conflict, or to resolve the social conflict and scarcity that make people _want_ to kill each other.

peterashford · a year ago
I live in New Zealand. Plenty of people are allowed to have guns (those with gun licenses) and plenty of people are not allowed to have guns (those without gun licenses). However, it does not appear to me that we are ruled by a gun-toting minority.
silverquiet · a year ago
Yeah, in the last few years I realized somewhere along the way that I stopped believing in laws. It's a strange feeling and hard to think about how I used to feel in retrospect; it's like I used to believe there was some force that existed throughout the country that should control our behavior, but at some point I realized that there just isn't anything there. It's a bit like losing religion I suppose.
MBlume · a year ago
The article doesn't seem to make any attempt at all to defend the first half of its thesis.
Lerc · a year ago
I watch a fair few YouTube video essays, and one thing I notice is that there is quite a lot of discontent with AI. Unfortunately there is a lot of preaching to the choir and not enough making a case. Nevertheless, that discontent is itself a signal, and I think this article touches on why.

People are quite disillusioned. There is growing wealth inequality, political division, and dark patterns (which should simply be called abusive software). We now have a generation that does not expect to ever own a home.

You often hear the term "Late Stage Capitalism" now, which, while admittedly a clever turn of phrase when it originated, has become a lazy expression of nihilism. It evokes the idea of capitalism as a cancer at a stage where the symptoms are obvious and debilitating, but also one past the point of no return: it will kill itself and its host, and there is nothing that can be done.

I rarely hear a detailed complaint about the problems of AI without the phrase "Late Stage Capitalism" cropping up. The irony is, of course, that the devices, platforms, and information used to express these ideas exist in large part thanks to innovations enabled by a capitalist economy. I think that without the anger and frustration of their current situation, many of those railing against capitalism would admit that what they would like is a well-regulated socialist/capitalist system: something that works for society but affords people the means to innovate.

The problem for AI is that people fear that the empowering capabilities will go to the already powerful. As with wealth inequality, the expectation is that the power distribution of AI will also be unequal.

There are numerous other complaints about various aspects of AI, but I feel like these are things that should be argued individually but are often too heavily influenced by the issues above.

jauntywundrkind · a year ago
This feels only vaguely less affronting than "guns don't kill people, bullets do".

AI is, for the foreseeable future, the work of extremely vast troves of resources: vast numbers of the most expensive chips on the planet, the best interconnects, the highest-paid expert teams. Whatever the output is, these inputs are the firing mechanism that creates the AI and keeps it firing.

tootie · a year ago
I can't believe OpenAI employees threatened to mutiny so their CEO could exploit them for more personal wealth. Meanwhile Anthropic now has the highest-rated model.
vouaobrasil · a year ago
> We shouldn’t fear AI as a technology. We should instead worry about who owns AI and how its owners wield AI to invade privacy and erode democracy.

The entire problem with this article is that it argues against technological determinism without even knowing it, and so it has no argument against it. Given the seductive power of technology, its ability to provide short-term marginal advantages to people, and the fact that humans are horribly poor at acting to stop long-term detriments and tragedy, it is far more likely that AI and advanced technology will spread even if we can see the death at the end of the tunnel.

The only way we will gain more wisdom against advanced technology is to gain more wisdom without using advanced technology, because the immense power it offers is precisely the thing that makes it even harder to acquire that wisdom, simply because the power is too seductive.

Sorry to say, but we cannot handle AI and that is abundantly clear: the short-term advantages are too great, even though it promises a horrendous future. Sort of like burning fossil fuels.