ndiddy · 4 months ago
The title for this submission is somewhat misleading. The judge didn't make any sort of ruling, this is just reporting on a pretrial hearing. He also doesn't seem convinced as to how relevant downloading books from LibGen is to the case:

> At times, it sounded like the case was the authors’ to lose, with [Judge] Chhabria noting that Meta was “destined to fail” if the plaintiffs could prove that Meta’s tools created similar works that cratered how much money they could make from their work. But Chhabria also stressed that he was unconvinced the authors would be able to show the necessary evidence. When he turned to the authors’ legal team, led by high-profile attorney David Boies, Chhabria repeatedly asked whether the plaintiffs could actually substantiate accusations that Meta’s AI tools were likely to hurt their commercial prospects. “It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected,” he told Boies. “It’s not obvious to me that is the case.”

> When defendants invoke the fair use doctrine, the burden of proof shifts to them to demonstrate that their use of copyrighted works is legal. Boies stressed this point during the hearing, but Chhabria remained skeptical that the authors’ legal team would be able to successfully argue that Meta could plausibly crater their sales. He also appeared lukewarm about whether Meta’s decision to download books from places like LibGen was as central to the fair use issue as the plaintiffs argued it was. “It seems kind of messed up,” he said. “The question, as the courts tell us over and over again, is not whether something is messed up but whether it’s copyright infringement.”

bgwalter · 4 months ago
The RIAA lawyers never had to demonstrate that copying a DVD cratered the sales of their clients. They just got high penalties for infringers almost by default.

Now that big capital wants to steal from individuals, big capital wins again.

(Unrelatedly, has Boies ever won a high profile lawsuit? I remember him from the Bush/Gore recount issue, where he represented the Democrats.)

Majromax · 4 months ago
> The RIAA lawyers never had to demonstrate that copying a DVD cratered the sales of their clients. They just got high penalties for infringers almost by default.

The argument for 'fair use' in DVD copying/sharing is much weaker since the thing being shared in that case is a verbatim, digital copy of the work. 'Format shifting' is a tenuous argument, and it's pretty easily limited to making (and not distributing) personal copies of media.

For AI training, a central argument is that training is transformative. An LLM isn't intended to produce verbatim copies of trained-upon works, and the problem of hallucination means an LLM would be unreliable at doing so even if instructed to. That transformation could support the idea of fair use, even though copies of the data are made (internally) during the training process and the model's weights are in some sense a work 'derived' from the training data.

If you analogize to human learning, then there's clearly no copyright infringement in a human learning from someone's work and creating their own output, even if it "copies" an artist's style or draws inspiration from someone's plot-line. However, it feels unseemly for a computer program to do this kind of thing at scale, and the commercial impact can be significantly greater.

999900000999 · 4 months ago
Meta needs these books.

They seek to convert them into more products. The needs of the copyright holders, who are relatively small businesses and individuals, are outweighed by the needs of Meta.

Sarah wanting to watch a movie or listen to music... Too bad she doesn't have an elite team of lawyers to justify whatever she wants.

In practice Meta has the money to stretch this out forever and at most pay inconsequential settlements.

YouTube largely did the same thing: knowingly violate copyright law, stack the deck with lawyers, and fix it later.

anshumankmr · 4 months ago
Interesting figure, that guy.

Here's this:

> Boies also was on the Theranos board of directors,[2][74] raising questions about conflicts of interest.[75] Boies agreed to be paid for his firm's work in Theranos stock, which he expected to grow dramatically in value.[75][3]

https://en.wikipedia.org/wiki/David_Boies

That was one of the decisions of all time.

ImPostingOnHN · 4 months ago
If I remember correctly the legal precedent from that era, and if I'm summarizing correctly: Those who served or uploaded were considered to be infringing, since they were "making copies" by serving or uploading, whereas those who downloaded infringing copies were not themselves infringers. Meta in this case is at least described by the latter, and the question is whether LLM generation constitutes the former.
kranke155 · 4 months ago
Copyright was invented (in its modern form) by corporations. It will be uninvented if need be for corporations.

cma · 4 months ago
That's because that was statutory infringement; marketplace-impact considerations come up more in fair use analysis ("drummer reacts to hearing the most famous drummer for the first time"). Courts look at whether the use acts as a substitute for the original, but there are different rules depending on the type of fair use, how transformative it is, and more.
dragonwriter · 4 months ago
> The RIAA lawyers never had to demonstrate that copying a DVD cratered the sales of their clients.

Did any of the defendants raise a fair use defense based on a transformative use that they were making of the downloaded copies? If not, you are in the domain of "unlike legal situations lead to unlike decisions" which is not exactly surprising.

favorited · 4 months ago
> Unrelatedly, has Boies ever won a high profile lawsuit? I remember him from the Bush/Gore recount issue, where he represented the Democrats.

He teamed up with opposing counsel from Bush v. Gore, Ted Olson, and the pair of them represented plaintiffs in Hollingsworth v. Perry, the SCOTUS case which overturned Prop 8, California's gay marriage ban.

nadermx · 4 months ago
Copyright infringement is not stealing[0]

[0] https://en.m.wikipedia.org/wiki/Dowling_v._United_States_(19...

fmblwntr · 4 months ago
ironically, he was the head lawyer on the legal team for napster (obviously a huge loss) but it accords well with your theory
fazeirony · 4 months ago
when kids during the napster era were downloading music, the mega-corporations yelled that their bottom-lines to shareholders were being destroyed.

now, when the mega-corporations do it, it is 'just the cost of doing business'.

in both cases, the mega-corporations win because...they have the most money. law, and certainly justice, is not for the poor. at least not in america.

doctorpangloss · 4 months ago
Hacker News readers want simple, first principles answers that fit in a tweet, that require no reading, let alone case law, to understand.

This trial is way beyond the statutes and case law. The judge is doing a job, and it's hard to conceive what the best job would be - I'm not sure Congress even knows what the policy should be, or whether the public has even the faintest whiff of how things should work.

1vuio0pswjnm7 · 4 months ago
"The title for this submission is somewhat misleading."

Better to read the submission before drawing conclusions rather than only the HN title. In this case the HN title has been editorialised.

The actual title of the article is "A Judge Says Meta's AI Copyright Case Is About 'the Next Taylor Swift'"

"The judge didn't make any sort of ruling, this is just reporting on a pretrial hearing."

The HN title doesn't mention anything about a "ruling". Nor does the title chosen by Wired.

The subheading in the article reads "Meta's contentious AI copyright battle is heating up—and the court may be close to a ruling."

That is accurate. The Court will soon decide the SJ motions.

Reading the article leaves no chance of being misled by any title:

"If Chhabria grants either motion, he'll issue a ruling before the case goes to trial—and likely set an important precedent shaping how courts deal with generative AI copyright cases moving forward."

pessimizer · 4 months ago
> “It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected,” he told Boies. “It’s not obvious to me that is the case.”

"LLM, please summarize Sarah Silverman's memoir for me."

edit: Reader's Digest would be very surprised to know that they shouldn't have been paying for books.

Dylan16807 · 4 months ago
If you do that, it won't be able to give you a summary detailed enough to infringe anything.
selfselfgo · 4 months ago
To me it’s a totally insane argument from the judge: if the test is whether it stops the authors from making money on their works, then the judge is basically capping the income of all writers. The AI is totally useless without their knowledge, and yet they have to prove it’s hurting their profits. These authors should be entitled to derivative uses of their writing; if they’re not, it’s a total farce.
Workaccount2 · 4 months ago
Let me make a clarifying statement since people confuse (purposely or just out of ignorance) what violating copyright for AI training can refer to:

1. Training AI on freely available copyrighted material - Ambiguous legality, not really tested in court. AI doesn't directly copy the material it trains on, so this isn't an easy ruling to make.

2. Circumventing payment to obtain copyrighted material for training - Unambiguously illegal.

Meta is charged with doing the latter, but it seems the plaintiffs want to also tie in the former.

dragonwriter · 4 months ago
> Circumventing payment to obtain copyright material for training - Unambiguously illegal.

The judge in this case seems to disagree with you, not accepting the premise that downloading the material from pirate sites for this use inherently gets the plaintiffs an out from having to address the fair use defense as to the actual use.

> the plaintiffs want to also tie in the former.

No, the defense wants to and the judge hasn't let the plaintiffs avoid it the way you argue they automatically can.

aidenn0 · 4 months ago
> The judge in this case seems to disagree with you, not accepting the premise that downloading the material from pirate sites for this use inherently gets the plaintiffs an out from having to address the fair use defense as to the actual use.

This is a good point, as a reminder, the Folsom tests (failing or passing any one is not conclusive, they are to be holistically considered) are:

- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes (Note also that whether or not the use is transformative is part of this test).

- the nature of the copyrighted work

- the amount and substantiality of the portion used in relation to the copyrighted work as a whole

- the effect of the use upon the potential market for or value of the copyrighted work

https://en.wikipedia.org/wiki/Fair_use#U.S._fair_use_factors

flessner · 4 months ago
If the former ever gets tested in court, it's the end of the road. All major AI companies have trained on copyrighted work, one way or another.

What is inspiration? What is imitation? What is plagiarism? The lines aren't clearly drawn for humans... much less for LLMs.

ekidd · 4 months ago
> If the former ever gets tested in court, it's the end of the road. All major AI companies have trained on copyrighted work, one way or another.

I can absolutely guarantee you that neither DeepSeek nor Alibaba's highly talented Qwen group will care even a little bit, in the long run. Not if there's value to be had in AI. (And I can tell you down to the dollar what LLMs can save in certain business use cases.)

If the US decides to unilaterally shut down LLMs, that just means that the rest of the world will route around us. Whether this is good or bad is another question.

dragonwriter · 4 months ago
> If the former ever gets tested in court, it's the end of the road. All major AI companies have trained on copyrighted work, one way or another.

You assume that getting tested means the AI trainers lose, and also that the model architectures that have been developed can’t be retrained from scratch with public domain, owned, and purpose-licensed material. (Several AI companies have been actively pursuing deals to license content for AI training for a while now.)

diggan · 4 months ago
> If the former ever gets tested in court, it's the end of the road. All major AI companies have trained on copyrighted work, one way or another.

End of the road for major AI companies, and hopefully something better can be created once it's declared illegal without any murky waters.

There are LLMs trained on data that isn't illegally obtained; OLMo by Ai2 is one such model, actually open source and trained on open data. The fact that this is "very difficult" for OpenAI et al. is no argument against forcing them to behave ethically anyway. If they cannot survive acting legally, then so be it; sucks for them.

aprilthird2021 · 4 months ago
It's not the end: all these companies have "clean" datasets which they train their models on now, along with training on the previous "dirty" models. After so many generations, they don't need to worry about this copyright issue anymore.
const_cast · 4 months ago
The lines for humans aren't clearly drawn, but they are drawn. The main difference is that humans are humans and LLMs are computer programs.

I see no reason why we should even entertain the idea of extending human rights to computer programs, and so far, nobody has been able to give me any good reasons why.

Furthermore, why are we only entertaining the human rights that can be used for profit-driven purposes? Why do LLMs, for example, not have the right to free speech? Or an attorney? It seems highly unethical to grant these computer programs some protections as if they're humans but not grant them personhood. This is akin to slavery, which is something we actually have to consider. Anthropomorphization is a double-edged sword. We cannot simultaneously consider them human when convenient and then consider them programs when it's not. Or, if we want to do that, we need to form coherent argument to why, how, and when.

imtringued · 4 months ago
Or maybe they just need a license for their particular use case...
Yizahi · 4 months ago
The whole point of the fair use clauses is to protect humans. Clearly we can easily say that programs are altogether exempt in favor of humans, and it would be a proper thing to do, until the first real AI is built.
vkou · 4 months ago
If corporations owned human slaves and fed them copyrighted materials so that they were inspired to produce original creative output, I don't think that creative output should enjoy legal protections either. Even if slavery were not illegal.

Because the obvious question would be - how can free people compete with that?

nickpsecurity · 4 months ago
The FairTrained models claim to train with only public domain and legal works. Companies are also licensing works. This company has a lawful, foundation model:

https://273ventures.com/kl3m-the-first-legal-large-language-...

So, it's really the majority of companies, those breaking the law, who will be affected. Companies using permissible and licensed works will be fine. The other companies would finally have to buy large collections of content, too. Their billions will have to go to something other than GPUs.

triceratops · 4 months ago
> AI doesn't actually directly copy the material it trains on

Of course it does. Large models are trained on gigantic clusters. How can you train without copying the material to machines in the cluster?

thethimble · 4 months ago
“Copy” is ambiguous here. Of course data is copied during training. That said, OP is referring to whether the resulting model is able to produce verbatim copies of the data.
nashashmi · 4 months ago
Copyright law does not restrict storing copyrighted material. It restricts distributing copyrighted material without permission. So a computer can store and analyze data but cannot spit it out verbatim. If it spits it out under the fair use clause, then it becomes debatable whether the new work is fair use.
ksynwa · 4 months ago
Don't they mean that LLMs cannot perfectly reproduce the source material?
giancarlostoro · 4 months ago
I have a weird, controversial view on how to do this legally: for each model, you should only be required to buy one digital copy of the work. Maybe publishers should sell digital copies tailored for LLMs to churn through, priced at a reasonable rate, in a format that's basically perfect for LLMs.
romanzubenko · 4 months ago
This is actually clever: let the market decide the price and the worth of each book for training. Pricing per model might be tricky; annual licensing for training might be a better pricing structure. Very quickly, all the big publishers and big labs might find quite precisely what the fair price is to pay per book/catalogue.
nashashmi · 4 months ago
All AI is “trained” on existing works. But it also works by outputting altered copies of that data, and that output is where the copyright violation lies.
startupsfail · 4 months ago
It’s weird that you are saying it’s unambiguously illegal. AFAIK, in some cases the datasets used for training were initially created by non-profits and transformed sufficiently to strip the copyrights.
Lerc · 4 months ago
I'm not sure if Meta did anything illegal in 2. either.

I thought the copyright infringement was by the people who provided the copyrighted material when they did not have the rights to do so.

I may be wrong on this, but it would seem a reasonable protection for consumers in general. Meta is hardly an average consumer, but I doubt that matters in the case of the law. Having grounds to suspect that the provider did not have the rights might though.

singron · 4 months ago
The original complaint alleges that the training process requires copying the material into the model and thus requires the consent of the copyright holder. (Copyright restricts copying but notably not use, so the complaint has to say they copied it in order to have standing.) Then it says they didn't have consent.

They also mention Books3, but they don't appear to actually allege anything against Meta in regards to it and are just providing context.

I don't think it actually changes anything material about this complaint if Meta bought all the books at a bookstore since that also doesn't give you the right to copy the works.

The original complaint is 2 years old though, so I don't really know the current state of argumentation.

https://www.courtlistener.com/docket/67569326/1/kadrey-v-met...

Note that incidental copying (i.e. temporary copies made by computers in order to perform otherwise legal actions) is generally legal, so "copying" in the complaint can't refer merely to this and must refer more broadly to the model itself being a copy in order to have standing.

rixthefox · 4 months ago
> but it would seem a reasonable protection for consumers in general.

The final say may ultimately come from the Cox vs. Record Labels case from 2019, which is still working its way through the appeals courts.

If the record labels win their appeal, anyone who helped facilitate the infringement can be brought into a lawsuit. The record labels sued Cox for infringement by its users. It's not out of the question that any ISP that provides Internet connectivity to Facebook could be pulled in for damages.

For Meta these two cases could result in an existential threat to the company, and rightly so because the record labels do not play games. The blood is already in the water.

CyberMacGyver · 4 months ago
The source being illegal doesn’t make your use legal. In fact, one could argue that it’s equally illegal or worse, since a corporation knowingly engaged in illegal activity.
blibble · 4 months ago
Blizzard managed to get a copyright infringement win against a defendant company that merely accessed their game client (IP) in memory: a cheat reading values of player position

IP that had been previously loaded by Blizzard itself

https://en.wikipedia.org/wiki/MDY_Industries,_LLC_v._Blizzar....

lsaferite · 4 months ago
Even if you believe that only the person *providing* the content is liable, do you honestly think a single person found all the content, downloaded it, directly trained the model themselves, and then deleted the content? If at any step the content was given or shared with anyone else for any reason, have they not become a provider themselves?
knowitnone · 4 months ago
So you're saying I can legally download movies as long as I don't provide them to others? Sweet!
alangibson · 4 months ago
> AI doesn't actually directly copy the material it trains on, so it's not easy to make this ruling.

IANAL, but it doesn't look that hard. On first glance this is a fair use issue.

What an LLM spits out is pretty clearly transformative use. But the fact that it pulls not only the entirety of the work, but the entirety of MOST works means that the amount is way beyond what could be fair use. Plus it's commercial use. Put it together and all LLMs are way illegal.

Dylan16807 · 4 months ago
> the fact that it pulls not only the entirety of the work

What do you mean by "pulls"?

What matters in traditional fair use is how substantially your output copies the work (among other factors). Your input is generally assumed to be reading/watching/listening to the entire work, and there is no problem with that.

TimPC · 4 months ago
I think the headline is a bit misleading. Meta did pirate the works, but may be entitled to use them under fair use. It seems like the authors are setting themselves up for failure by making the case about whether the AI generation hinders the market for books. AI book writing is such a tiny segment of what these models do that, if needed, Meta would simply introduce guard rails to prevent copying the style of an author and continue to ingest the books. I also don’t think AI-generated fiction is anywhere near high enough quality to substantially reduce the market for the original author.
stego-tech · 4 months ago
The problem is that "harm" as defined by copyright law is strictly limited to loss of sales due to breach of that copyright; it makes no allowance (that I know of) for livelihoods lost to the theft of the work indefinitely, as AI boosters suggest their tools can do (replace people). The way this court case is going, it's an uphill battle for the plaintiffs to prove concrete harm in that very narrow context, when the real harm is the potential elimination of their future livelihoods through theft, rather than immediately tangible harms.

As a (creative) friend of mine flatly said, they refuse to use an LLM until it can prove where it learned something from/cite its original source. Artists and creatives can cite their inspirational sources, while LLMs cannot (because their developers don't care about credit, only output) by design. To them, that's the line in the sand, and I think that's a reasonable one, given that not a single creative in my circles has been cut a payment by these multi-billion-dollar AI companies for the unauthorized use of their works in training these models.

ijk · 4 months ago
A difficult, but not intractable problem: OLMoTrace claims to be able to trace from output to training data in seconds [1]. Notably, it can do this because OLMo itself was intentionally designed to be open and transparent [2]; it was trained on 4.6 trillion tokens of entirely open data (which you can download yourself) [3]. There's nothing stopping Meta or OpenAI from creating a similar tool, other than the obvious detail of that showing their exact training data.

[1] https://arxiv.org/abs/2504.07096

[2] https://allenai.org/blog/olmotrace

[3] https://huggingface.co/datasets/allenai/olmo-mix-1124

lopis · 4 months ago
> Artists and creatives can cite their inspirational sources

Even humans have a lot of internalized unconscious inspirational sources, but I get your point.

immibis · 4 months ago
Well, no, because all those file-sharing users used to get fined $250,000 or whatever, which is obviously much greater than the amount they would have paid for whatever they downloaded.
tedivm · 4 months ago
GitHub does give free Copilot access to open source developers it considers important enough (which is a pretty low bar). While not the same as actually paying, it's the only example I can think of where a company that used people's copyrighted material actually gave something back to those people.
msabalau · 4 months ago
AI boosters can suggest whatever nonsense strikes their fancy, and creatives can give in to fear for no reason, but the best estimates we have from the BLS are that there are careers and ongoing demand for artists, writers, and photographers.

Regardless, deep learning models are valuable because they generalize within the training data to uncover patterns, features, and relationships that are implicit rather than (simply) present within the data. While they can return things that happen to be within the training set, there is no reason to believe that any particular output is literally found there, or is something that could be attributed, or that a human would ever attribute. Human artists also make meaning from the broad texture of their life experiences and a general, diffuse, unattributable experience of culture.

Sure, this is something a random artist is unlikely to know, but if they simply refuse to pick up useful tools that can't give credit--say, avoiding LLMs for brainstorming, or generative selection tools for visual editing, or whatever--their particular careers will be harmed by their incurious sentimentality, and other human artists will thrive, because they know that tools are just tools, and it is the humans using the tools who make the meaning that people care about.

subscribed · 4 months ago
Your friend might want to check Perplexity.
apercu · 4 months ago
> but may be entitled to use them under fair use.

Why? Was it legal for me to download copyrighted songs from Limewire as "fair use"? Because a few people were made examples of.

I'm a musician, so 80% of the music I listen to is for learning so it's fair use, right? ;)

Filligree · 4 months ago
> I'm a musician, so 80% of the music I listen to is for learning so it's fair use, right? ;)

I would be happy with that outcome. I’m a fanfiction writer, and a lot of the stories I read are very much for learning. ;-)

lukeschlather · 4 months ago
I don't believe anyone was ever penalized for downloading, only for uploading, which seems like a pretty similar principle to what the judge is saying here.
immibis · 4 months ago
If you used it for fair-use purposes, it could well have been legal. The only way to find out for sure would be to have them sue you, and then successfully or unsuccessfully defend yourself with a fair use argument. Please keep in mind that the law is a kind of stochastic process; "how illegal" something is depends on how many times someone is found liable for it, which takes a bunch of lawsuits to actually know, and each lawsuit is unique. It's not a computer program where if(X && Y && !Z) then punishment(); (well, it sort of is, but X, Y, and Z aren't definite boolean values; they're things that have to be estimated based on evidence). (I am not a lawyer and this is not legal advice)
BrawnyBadger53 · 4 months ago
If the result of this becomes that substantial remixes and fanfiction can be commercialized without permission from authors then I am happy. This stuff should have been fair use to begin with. Granted it probably already is fair use but because of the way copyright is enforced online it is effectively banned regardless.
SideburnsOfDoom · 4 months ago
Firstly, no kidding, of course it's "illegal" and "Piracy".

Secondly, there's an argument that the infringement happens only when the LLM produces output based in part or whole on the source material.

In other words, training a model is not infringing in itself. You could "research" with it. But selling the output as "from your model" is highly suspect. Your business is then based on selling something built on other people's work that you do not have rights to.

aurizon · 4 months ago
Yes, current AI video/text product is inferior at this time. YouTube is full of inferior products of all genres - at this time! The ramp of improvement is quite steeply pointing upwards. This is reminiscent of the days of spinning jennies and knitting/weaving machines that soon made manual products un-economic - that said, excellent craft/art product endured on a smaller scale. AI is also taking a toll on the movie arts, starting at the low end and climbing the same incremental improvement rungs. All the special effects (SFX) are in a similar boat. Prop rentals are hit hard: 100 high-res photos of an old studio TV camera - all angles/sizes/lighting - can be added to an AI prop library, and with a green screen insert the prop can manifest as a true object in any aspect. There can be many. It still takes people to cull the hallucinations - a declining problem. Same with actors: they can be patterned after a famous actor - with likeness fees - or created de novo. All the classic aspects of a studio production suffer the same incremental marginalisation - in 5 years, what will remain? What new tech will emerge? I feel that many forks will emerge, all fighting for a place in the sun; some will be weeded out, some will flower - but at a very high pace. The old producers/directors/writers - the whole panoply of what makes a major studio - will be scattered like dried bread crumbs.
internetter · 4 months ago
> Meta did pirate the works but may be entitled to use them under fair use

What fair use? Were the books promised to them by god or something?

TimPC · 4 months ago
Fair use allows for certain uses of copyrighted works without a specific license for those works. One of the major criteria is how transformative the use is, and an LLM is very different from the original work, so it seems likely that criterion, at least, is met.
matkoniecz · 4 months ago
"fair use" is a specific legal term
gabriel666smith · 4 months ago
I think there's a really fundamental misunderstanding of the playing field in this case. (Disclaimer that my day job is 'author', and I'm pro-piracy.)

We need to frame this case - and ongoing artist-vs-AI-stuff -using a pseudoscience headline I saw recently: 'average person reads 60k words/day'.

I won't bother sourcing this, because I don't think it's true, but it illustrates the key point: consumers spend X amount of time/day reading words.

> It seems like the authors are setting up for failure by making the case about whether the AI generation hinders the market for books. AI book writing is such a tiny segment what these models do that if needed Meta would simply introduce guard rails to prevent copying the style of an author and continue to ingest the books.

and from the article:

> When he turned to the authors’ legal team, led by high-profile attorney David Boies, Chhabria repeatedly asked whether the plaintiffs could actually substantiate accusations that Meta’s AI tools were likely to hurt their commercial prospects. “It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected,” he told Boies. “It’s not obvious to me that is the case.”

The market share an author (or any other artist type) is competing with Meta for is not 'what if an AI wrote celebrity memoirs?'. Meta isn't about to start a print publishing division.

Authors are competing with Meta for 'whose words did you read today?' Were they exclusively Meta's - Instagram comments, Whatsapp group chat messages, Llama-generated slop, whatever - or did an author capture any of that share?

The current framing is obviously ludicrous; it also does the developers of LLMs (the most interesting literary invention since....how long ago?) a huge disservice.

Unfortunately the other way of framing it (the one I'm saying is correct) is (probably) impossible to measure (unless you work for Meta, maybe?) and, also, almost equally ridiculous.

onlyrealcuzzo · 4 months ago
> I also don’t think AI generated fiction is anywhere near high quality enough to substantially reduce the market for the original author.

Legal cases are often based on BS, really an open form of extortion.

The plaintiffs might've been hoping for a settlement.

Meta could pay $xM+ to defend itself.

Maybe they thought Meta would be happy to pay them $yM to go away.

The reality is, there's very little Meta couldn't just find a freely available substitute for if it had to, it might just take a little more digging on their end.

The idea that any one individual or small group is so valuable that they can hold back LLMs by themselves is ridiculous.

But you'll find no end to people vain enough to believe themselves that important.

kazinator · 4 months ago
Do you not understand that "fair use" is not some copyright free-for-all which lets you use works wholesale without attribution as if they were suddenly public domain?

To make fair use of a book's passage, you have to cite it. The excerpt has to be reasonably small.

Without fair use, it would not be possible to write essays and book reviews that give quotes from books. That's what it's for. Not for having a machine read the whole book so it can regurgitate mashups of any part of it without attribution.

Making a parody is a kind of fair use, but parodies are original expression based on a certain structure of the work.

danaris · 4 months ago
> To make fair use of a book's passage, you have to cite it.

That's not true. That's what's required for something not to be plagiarism, not for something not to be copyright infringement.

Fair use is not at all the same as academic integrity, and while academic use is one of the fair use exceptions, it's only one. The most you would have to do with any of the other fair use exceptions is credit where you got the material (not cite individual passages), because you're not necessarily even using those passages verbatim.

ryandrake · 4 months ago
AI hucksters vs. the Copyright Cartel. When two evil villains fight, who do you root for? Here's hoping they somehow destroy each other.
pessimizer · 4 months ago
Last time they fought, it was google vs. the publishers, and it resulted in the scanning and archiving of all of those books in the first place.

Neither of them died, though, both parties just kept all the books from the public and used them for their own purposes, while normal people had to squirrel them away and trade them illegally. It's the Tech Cartels vs. the Copyright Trolls. It'll end up as a romance.

thomastjeffery · 4 months ago
I can only root for them both to lose.

Letting Meta launder copyrighted works to make billions, while threatening the rest of us over the most trivial derivative work, sounds like the worst outcome to me.

Copyright is a mistake. It demands that we compete instead of collaborate. LLMs don't provide enough utility to deserve special treatment in these circumstances. If anyone can infringe copyright, then everyone should be able to.

akomtu · 4 months ago
The copyright cartel is going to lose because it represents the dying old world order of many competing enclaves. AI isn't just a sloppy text generator, it's the new ideology of forced uniformity that permits no boundaries. So no copyright cartels.
probably_wrong · 4 months ago
You can always root for the lawyers.
labrador · 4 months ago
"Chhabria is cutting through the moral noise and zeroing in on economics. He doesn't seem all that interested in how Meta got the data or how “messed up” it feels—he’s asking a brutally simple question: Can you prove harm?"

https://archive.is/Hg4Xr

trinsic2 · 4 months ago
> Can you prove harm?

Where was this argument when Napster was being sued?

dragonwriter · 4 months ago
This is the source headline, but it is pure clickbait; the judge absolutely did not say that in any of the quotes in the article. In the hearing on both parties' motions for partial summary judgment, he said that would be the case if the plaintiffs proved certain facts, but raised doubts that they have the evidence to prove them.
ebfe1 · 4 months ago
And this is how Chinese models will win in the long term, perhaps... They will be trained on everything and anything without consequences, and we will all use them because these models are smarter (except in areas like Chinese history and geography). I don't have the right answer on what can be done here to protect copyright, or rather to compensate the authors of a work, without all these millions of dollars wasted in lawsuits.
aprilthird2021 · 4 months ago
There's no winning, though. Remember, there's no real moat when it comes to AI. There will be tons of models with similar, squishy kinds of unique attributes (squishy meaning a model works great sometimes and not other times, and that's just normal), and which one to use will mostly be decided by cost and compliance.
granzymes · 4 months ago
Title seems misleading after reading the article.
gtowey · 4 months ago
It's mind blowing to me that the court might deny the right of the authors to control licensing of this kind of usage of their work.

Deleted Comment

jwatte · 4 months ago
If I put something up for anyone to read on the internet, and someone reads it on the internet, I can't really control that, right?

Now, if someone makes an infringing use of the thing I put up on the internet, then I have some kind of recourse, at least through the courts, if I have a lot of money to pay lawyers.

But if someone makes a fair use of the thing I put up on the internet, then I don't have any recourse, because that's the way the law works.

As far as I understand it, using data as input data to a machine learning model that substantially transforms and does not duplicate the input data is currently believed to be fair use.

So, the training use of freely available data seems pretty straightforward that authors can't control when they make it freely available.

It seems like Facebook made use of data that wasn't freely available, though -- ebook-rip-library-type stuff. That's the bit I think they could be in trouble for. But that's just a plain old "Napster"-style copyright question, as far as I understand it.

The lawyer's argument that Llama "obliterates the market" for written works seems weak. I, and everyone I know, put down AI-slop fiction before the first paragraph is done, because it's not the same thing as real fiction.