dfedbeef · 9 months ago
The penalty for willful infringement in the US is pretty steep per instance.
thedevilslawyer · 9 months ago
Good point to think about. What specific act of training is infringing? Looks like none in this case, but happy to be corrected.
dfedbeef · 9 months ago
Downloading and reproducing the pirated copy of the work. It's called a COPYright for a reason.
puppycodes · 9 months ago
I really don't get this, and I personally believe the world would be a better place without IP in any form.

But also, no one is selling "your book"; the product is completely different in literally every conceivable way.

You have never owned (and no one ever should own) words arranged in a certain way. You own the right to sell a book, not the words themselves.

Meta does bad things and I'm not a fan, but this really pales in comparison.

bernb · 9 months ago
Put 10,000 hours into writing a book. Watch somebody with more resources or media coverage take full credit for it and/or make money instead of you. Copyright is a good thing. The same principle applies to the core of similar laws.
bigyabai · 9 months ago
Copyright can be a good or bad thing, but it doesn't stop businesses from arguing fair/transformative use and winning in many cases: https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,....
puppycodes · 9 months ago
But literally no one is doing that in this scenario, so... huh? Also, the number of hours you put into a thing does not make it valuable.
zzo38computer · 9 months ago
I agree with you. I think that copyright is bad, and patents are also bad.

It is a different issue if they steal your private data or your power (I mean the electrical power for the computers, in case that isn't already clear).

Making copies of published books, music, etc. (and doing what you want with them) is not the bad thing.

bernb · 9 months ago
How would you replace the incentives for R&D for medication, for example? That often costs billions of dollars.
card_zero · 9 months ago
Well, if you own the copyright to a song, you can claim licensing fees for any public performance of its lyrics.

I wonder if an equivalent to Performance Rights Organizations will emerge as a channel for LLM publishers (so to speak) to pay fees.

puppycodes · 9 months ago
And that is equally atrocious and should be eliminated from a society that wants to share ideas freely.

Idk if you're in the US, but you also massively oversimplify in your example. Copyright law is way more complex than that, and it would take a set of special circumstances far beyond what you describe to siphon money from an infringement claim.

spatialspice · 9 months ago
It feels like there are two equally valid sides to this argument that get muddied because our current models/regulations can't differentiate one from the other.

On the free-information side, I don't think anyone would argue that AI shouldn't be allowed to offer a general synopsis of a given book/series. From an author/creator's POV, it feels like extortion when a model can summarize/recreate any given chapter/subsection to the point that the entire work could be reproduced near-verbatim.

IMO the question is, can we meaningfully draw a line between the two, and if so, how?

rich_sasha · 9 months ago
I don't think anyone is stopping AI from learning on synopses of books, or from learning on books after paying licensing costs. It's the wanting to have the cake, eat it too, and get it for free that is failing.
3eb7988a1663 · 9 months ago
In contrast to typical corporate crime, it seems there is documentation of upper management signing off on the decision.

Are there other juicy examples where the C-suite can be directly implicated? I always assumed that management knew how to leave instructions vague enough to keep their hands clean (a la meddlesome priests). The bad actor was always some middle manager gone rogue.

acomjean · 9 months ago
I think the main issue is that authors published books with the intention of human, not machine, consumption. Nobody thought to put a contract in a book saying "human consumption only, not to be used to train AI". Meta pirated the books in question, but what if they had bought a copy? Oddly, cracking the encryption, a violation of the DMCA, might be the infraction.

The courts have some tough questions to answer here.

CamperBob2 · 9 months ago
(Shrug) We'll see what the courts say, Gary.

If training AI doesn't constitute fair use, you will lose more than you could ever possibly hope to gain. As will the rest of us.

Meanwhile, sublimate your dudgeon towards advocating for free access to the resulting models. That's what's important. Meta is not the company you want to go after here, since they released the resulting model weights.

ebiederm · 9 months ago
To point out the obvious.

Unauthorized copying (aka pirating) is definitely a copyright violation.

That appears to be a huge problem with the large models and training. They don't secure legal access to the materials they train on, and thus fail to compensate authors for their work.

By analogy, students are required to buy or otherwise obtain legal access to their textbooks (like checking the book out of the library).

Training AI should play by the same rules human students have to follow.

MacsHeadroom · 9 months ago
Obtaining copies of pirated works is not infringement. Unauthorized sharing is infringement but being on the receiving end of sharing is not (even if one is an active participant).
thedevilslawyer · 9 months ago
And to point out the obvious: it seems training is not unauthorized copying. (At least that is the current legal status quo.)
anoncow · 9 months ago
This. I am not asking for a special 1000x fee for AI. Just pay the normal fee a human would have paid, but at least pay that.
verzali · 9 months ago
Are you also willing to work for OpenAI for free then? Have you contacted them with such an offer?
CamperBob2 · 9 months ago
As long as I have access to the resulting model, sure. I thought I made that clear. Copyright is not as important as reaching the next stage of our intellectual evolution. Current-gen AI may not be sufficient to reach that stage, but I believe it is a necessary step.

Like the author of this screed, I've had my work go into training every major model. I get paid back every time one of those models helps me learn or do something. The injustice, if it happens, will occur when a few well-heeled players like OpenAI succeed in locking the technology up with regulatory capture or (worse) if a few greedy, myopic assholes render it illegal or uneconomical to continue development by advocating copyright maximalism.

thedevilslawyer · 9 months ago
It's like saying that because a student reads a textbook, they now have to work for the author for free?
alanfranz · 9 months ago
Does fair use imply that pirating copyrighted material is ok?

I mean, it’s a serious question; I don’t see this as really connected.

As long as an AI can “understand” the content of a book and spit out a summary of it, or even leverage what it learned to perform further inference, I’d be inclined to say that this is fair use; a human would do the same.

But this has nothing to do with using pirated material for training, especially for some kind of commercial purpose (even if Llama is free, they're building on top of it) - I don't see why it should be legal.

thedevilslawyer · 9 months ago
Fair use is literally that:

"Fair use" in copyright law allows limited, specific uses of copyrighted material without permission.

Hence, by definition, not "pirating".

ricardobeat · 9 months ago
I get the commercial/legal angle, but from the viewpoint of AI being something we as a society have an interest in developing, how should this work?

Do you want to severely limit the evolution of models by having them pick (and buy) a tiny subset of all books?

Should every training run put money into a pool that gets paid out to every rights holder of every book that has ever been published?

Should Meta buy a physical or electronic copy of every book they want to use for training? That has zero impact on revenue for individual authors.

Would they be paid by word, by token, by book? This makes little sense. We don't charge people for the knowledge they acquired while going to the library over 50 years; AI just squeezes this into weeks. Our legal framework simply doesn't fit.

OneDeuxTriSeiGo · 9 months ago
Why should it be fair use? Why would being a derivative work not be OK? There is a massive corpus of public domain and FOSS works, and likewise plenty of permissively licensed government-created datasets. There is no reason a corpus built from those sources would be insufficient.
throwaway150 · 9 months ago
> Why would being a derivative work not be OK?

That's not even the real problem. It's a problem, yes, but not the real problem. The problem is that before they could train the model on the book, they had to copy the book from somewhere. Is it ok to make illegal pirated copies of a copyrighted book to train your model? I think that's the issue we are dealing with here.

Whether it is ok to create a derivative work or not is beside the point.

card_zero · 9 months ago
The illustration shows a page from Matter by Iain M. Banks. I don't suppose that's an IP violation, but it implies a human artist with attention to detail.

Mind you, it's page 1 and the book is not on page 1.
