Readit News
smnrg · 2 years ago
"If."

As if leadership did not already cut copywriters, designers, or accountants during the last rounds of "optimizations."

And as if video productions aren't increasingly using ElevenLabs to replace voice actors for digital and broadcast video, paying $5 for what would have been talent work worth $1,200 for six months of usage rights, plus agency fees.

In the real world, it already happened.

kurthr · 2 years ago
I think it would be really hilarious if it were decided that, since those works were not human-created, they did not receive the benefit of copyright.

Nice movie business you have there... shame if something bad were to happen to it.

Nice code you have there, yes that employee can reuse it!

Honestly, I don't see any way to keep AI-generated content from being distributed (on a worldwide basis), but perhaps it can be used to level the playing field for new players rather than building higher barriers for the entrenched "leaders".

smnrg · 2 years ago
Lead-generation marketing materials that make millions with a Facebook or Google campaign don't need copyright. Most livelihood for media workers and creatives doesn't come from entertainment; it arrives in the form of ephemeral "content."

Sadly, content copyright doesn't matter much to the companies that produce it.

herval · 2 years ago
> I think it would be really hilarious if it were decided that, since those works were not human-created, they did not receive the benefit of copyright

That's already the case for images in the US, and it likely will be for other formats too.

SketchySeaBeast · 2 years ago
Of course people jumped on it, but will it last? They've effectively tried to offshore a bunch of positions to a digital world, but I don't think it's going to work out like they think it will.
KptMarchewa · 2 years ago
"Tried"? There are millions of people in India working for western companies.
CharlesW · 2 years ago
Actual title: "Big Tech usually dismisses fears that AI kills jobs. Now it’s studying them."

Many Big Tech companies are participating, with unions advising. The Cisco announcement is less clickbait-y: https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2024/m04/le...

garbanz0 · 2 years ago
"Benefiting humanity" would be letting profits stay the same but giving an equal UBI salary to people who lose their jobs to this stuff.

In its current state, the main thing AI does is hollow out the middle class. Who has access to the best models, and where does the money go when, e.g., an entire customer-support tier is let go?

shrimp_emoji · 2 years ago
> letting profits stay the same but giving an equal UBI salary to people who lose their jobs to this stuff.

We didn't do that with computers in the '70s (which are also "AI"); why start now? ;D

AuryGlenz · 2 years ago
Computers, printing press, cars, electric tools, chainsaws…
alecsm · 2 years ago
I find it hard to trust studies like these made by the same companies that provide the tech and have a lot at stake.
randomdata · 2 years ago
They have a lot at stake on both sides of the coin. They need AI to succeed, but not so much that they lose their customer base. Google needs to be especially careful – ain't no advertiser buying up ad space to market to legions of unemployed people.

But, no matter who is behind a study, trust doesn't even begin until replication has occurred.

2devnull · 2 years ago
Seems wise to get out in front of the impending backlash.
HPsquared · 2 years ago
Basically all research, on any given topic, is done by people with some kind of stake.
stonks · 2 years ago
While GPT and other AI tools have been incredibly useful in work and outside it, we must consider the negative impact they have on society.

They promote their AI tools in such a way that companies actively try to replace their employees with them, leading to more layoffs. Unemployment Insurance is either not enough or, in some cases, people are ineligible for it (I worked just over one quarter in NY and my application was rejected). There is no UBI (Universal Basic Income), and people who struggle with rent cannot get housing.

OpenAI, Google, Anthropic, Microsoft, and others are causing layoffs, and they are happy to do so out of greed (profits and the prestige of having great products), while people are left struggling to survive. When these people have nothing left to live for, there will be an increase in petty and violent crime, affecting everyone.

We need better care for the people affected - housing, food, health care benefits, and help getting their next job by providing opportunities to advance their careers, upskill, or change careers.

slingnow · 2 years ago
Thank god Microsoft and Google are on the case! I was thinking next, we should have Exxon Mobil study the effects of oil on the environment.
reaperducer · 2 years ago
we should have Exxon Mobil study the effects of oil on the environment

It already did. In 1977, Exxon studied just that and concluded that it would destroy the planet.

But Exxon kept on drilling because of greed, demonstrating that its leaders care more about making money than killing people.

https://www.latimes.com/environment/story/2023-01-12/exxonmo...

randomdata · 2 years ago
> But Exxon kept on drilling because of greed

Drilling oil for no reason other than to give people jobs seems more virtuous than greedy, no?

icapybara · 2 years ago
I don't think it's going to be as bad as everyone is saying in this comment thread. Current AI isn't accurate enough to replace most human work and has only shown incremental improvements lately. A new technological strategy is probably needed to achieve human-level capability.

The jobs at risk / already dead from AI are things like internet ghost writing / SEO spam writing and customer service roles for cheap companies. For the rest of us it's a (very useful) tool, not a competitor.

jmholla · 2 years ago
> Current AI isn't accurate enough to replace most human work and has only shown incremental improvements lately.

But I don't think that's the real risk. The real risk is people in charge of jobs that believe AI can replace their employees, regardless of its true capabilities.

pfdietz · 2 years ago
Like any of the myriad of delusions that have swept through market economies, that's a self-limiting risk.
bearjaws · 2 years ago
I am in the same boat as you.

LLMs cannot even solve math problems, and yet we expect them to do tasks far more complex, e.g. software engineering.

Attempts to make them good at math have shown it's not easy, and it often sacrifices language quality. How do we expect something that cannot solve y = mx + b reliably for any input > 100 to write more than a todo-list app?
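For contrast, the computation in question is trivial in conventional code: a minimal Python sketch (the function name is illustrative), exact for inputs of any size, unlike token-by-token LLM arithmetic.

```python
def solve_for_x(y, m, b):
    """Solve y = m*x + b for x; exact regardless of input magnitude."""
    if m == 0:
        raise ValueError("no unique solution when m == 0")
    return (y - b) / m

# Deterministic and correct for large inputs, e.g. y = 2501, m = 25, b = 1:
print(solve_for_x(y=2501, m=25, b=1))  # 100.0
```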

A lot of companies/investors are just assuming it's a training-data problem, but we can already produce massive amounts of robust math training data, and LLMs still aren't good at math.

At least this is my understanding; I was doing some research into Khanmigo and its math-tutoring capabilities and went down a rabbit hole about LLM math.

randomdata · 2 years ago
> and we expect them to do tasks far more complex, e.g. software engineering.

Who expects that? Even if LLMs produced perfect output every time, they are still just a fancy compiler. You continue to need a software engineer to write the code.

Some expect that they will enable software to go the way of the elevator, which is to say that modern elevators didn't eliminate the elevator operator – they made everyone the elevator operator. This is quite possible, and has arguably already happened at least in some niche cases. But that is an expansion of the pool of human software engineers, not an LLM doing software engineering.

Maybe there is some kind of technology out there that will someday take on the tasks of software engineering, but not LLMs. They cannot do software engineering any more than a C++ compiler can do software engineering.

somenameforme · 2 years ago
I'd concur, and I'd also add one other aspect here. The reason a person hires, e.g., a janitor is not because they themselves are incapable of janitorial work; it's because they have different tasks they would prefer to dedicate their energy to. Even if there were absolutely perfect LLMs capable of, e.g., generating code precisely to specification, you still end up with the issue of actually creating said specification, deploying the result, managing various local constraints/limitations, updating all of this when it turns out there's an unforeseen issue, and so on.

Even if it's work that a wider range of people could do, it's still work that's going to need to be done, and it's not going to be done by a chatbot. So it seems that even for many jobs that LLMs could ostensibly replace, they're more likely to widen the labor pool than to literally replace people. If anything, it might increase the number of businesses and opportunities, because it would expand the breadth of domains that micro and small companies could perform in.

analognoise · 2 years ago
It will widen the labor pool until it suddenly doesn't anymore, right? “Humans Need Not Apply” was all about that - the point being that new devices were introduced until suddenly horses became pets rather than working animals (on a farm), and that our insistence that technological change creates new jobs (“creative destruction”) wasn't well founded.
maurimo · 2 years ago
Need to start enabling a pre-filtering of unbearably silly titles.

This title raises the question of whether an elephant in a porcelain shop could possibly affect the porcelain.