In the world of tattooing, it's frowned upon for a tattoo artist to take another tattoo artist's original work and replicate it without permission, yet it's common practice to take well-known IP (Pokémon, Studio Ghibli, etc.) and tattoo it on a client. The ethical boundary seems to hinge on whether the source artwork was created by an individual or a corporation.
In the tattoo case, tattooing Pikachu on a person does not harm Nintendo's business, but copying another tattoo artist's work or style directly takes business away from them. Tattoo art is an industry where your art style largely defines your career.
I can see the argument that LLMs are transformative, but when you set up a specific evaluation of how well you copy a company or artist, and then advertise that you can clone that specific studio's work, that harms them and, in my opinion, crosses a line.
This isn't an individual-vs-corporation thing (though people are very selfish).
There's so much more here than just corporate vs. individual: the sheer scale of it, the enforcement double standards, questions of consent, the exploitation of the commons (artists' public work), etc. To characterize it as people simply not liking business is plain wrong.
Very well put. It boggles my mind that some people cannot separate individuals from corporations, as if their minds are unable to comprehend how corporations can cause harm at scale.
I'd argue that if an art piece is famous enough, it ought to become "public" in a sense: yes, the artist made it and it became famous, but once it reaches a certain scale of fame, the public itself has helped make the work famous. It's a kind of co-creation (for instance, people writing Harry Potter fan fiction contributed to the fame of the books). So once it has reached a sufficient level of fame, it should become public work.
Technically the individual tattoo artist needs a license to reproduce the IP (unless the studio released the likeness into the public domain), but it's small potatoes and these IP holders know that the free promotion helps them infinitely more than whatever they'd get from trying to enforce IP licenses.
But the AI companies are making billions off of this (and a lot of other) IP, so it totally makes sense for the IP holders to care about copyright protection.
I am still torn on this issue. On the one hand, it feels like a copyright violation when other people's works are used to train an ML model. On the other hand, it is not a copyright infringement if I paint a picture in the Studio Ghibli style myself. The question is whether removing a ‘skill requirement’ for replication is sufficient grounds to determine a violation.
I believe the case is that you're welcome to paint a picture perfectly copying Studio Ghibli, but you cannot sell it. You're welcome to even take the style and add enough personal creativity that it becomes a different work and sell that, but only if a random on the street doesn't look at it and say "wow, what Studio Ghibli film is that from?".
That's the problem here, there's no creative input apart from the prompt, so obviously the source is blatant (and often in the prompt).
> I believe the case is that you're welcome to paint a picture perfectly copying Studio Ghibli, but you cannot sell it.
Technically, you can't, but there's no way to enforce copyright infringement on private work.
You can paint a Studio Ghibli-style painting -- the style isn't protected.
These rules assume that copying a style is labor-intensive, and they rightly reward that labor.
When an LLM can reproduce thousands and thousands of Ghibli-style paintings effortlessly, not protecting the style seems less fair, because the work of establishing the Ghibli style was harder than copying it at scale.
I'm in the "don't fight a roaring ocean, go with the flow" boat:
If your entire livelihood depends on having the right to distribute something anyone can copy, get a stronger business.
The language used to describe LLM behaviour, such as "training" and "reasoning", has led people to treat them the same as humans, instead of as a new and different kind of entity that requires us to update our set of rules.
If I was the first person to invent a car, for example, and I named its method of locomotion "walking", would you treat it the same as a human and let it "walk" in all the same places humans walk? After all, it's simply using kinetic energy and friction to propel itself along the ground, as we do.
Because a car is so obviously different to a human, we intuitively understand it requires an alteration to our rules in order for us to coexist peacefully. Since LLMs are so abstract, we don't intuitively understand this distinction, and so continue to treat them as if they should be bound by the same rules and laws as us.
It's less about painting a picture yourself; arguably there is little to no value there. OpenAI et al. sell the product of creating pictures in the style of Ghibli's material. I see this as direct competition with Studio Ghibli's right to produce their own material with their own IP.
I agree with this. I don't know how to create artistic styles by hand or using any creative software for that matter. All the LLM tools out there gave me the "ability" and "talent" to create something "good enough" and, in some cases, pretty close to the original art.
I rarely use these tools (I'm not in marketing, game design, or any related field), but I can see the problem these tools are causing to artists, etc.
Any LLM company offering these services needs to pay the piper.
Is it not a copyright infringement if you paint it yourself? Why is that the case? I thought it was just that studios wouldn't care for the most part. Hasn't Disney gone after this type of personal project in the past?
I am mostly OK with these copyright crackdowns on AI, in the spirit of: if a human were to do it commercially, it would be illegal.
We can argue if that should be the case or not, which is a different issue.
However, it should not be legal to automate something at scale that is illegal when done by an individual human. Allowing it just tips the scale against labor even more.
You are not a for-profit software product. You are a human. If you make a drawing, that drawing MIGHT be a for-profit product.
If I make a for-profit AI, that AI is a product. And if that product required others' copyrighted works to derive its deliverable, it is by definition creating derivative works. Again: creating a small human is not creating a product; creating a for-profit AI is creating a product.
If my product couldn't produce the same output without having at some point consumed the other works, I've triggered copyright concerns.
If I make and train a small human, I am not creating a for-profit product, so copyright doesn't come into play at all. The two are not similar in any way. The human is not the product. If THEY create a product later (a drawing in this case), then THAT is where copyright comes in.
It is a tough issue, but the legal fight will probably start from where, on what, and how this thing was trained in order to be able to produce such results. It's similar to (guitar) amp modeling: eventually they will either have to remove it or license it. I don't see another way. What makes it challenging is that the vast majority of the material it was trained on could make a similar claim, which opens the floodgates. The outcome is either a YouTube-like arrangement where things exist in some capacity, but not in full, or a Napster-like fate where it gets banned altogether. The stakes are now a bit too high for the Napster scenario. OpenAI might have a YouTube-like angle for licensing with its Sora thing, which seems to be turning into its own social network of sorts.
If you paint a Studio Ghibli Totoro on your cup, pillow, PC, or T-shirt, nobody is going to care. If you do this a thousand times, it obviously is an issue. And if you charge people to have you do this for them, it is also obviously an issue.
The difference is that you’re not making money off your pictures (or else you'd probably need a licensing agreement), whereas the AI companies are making money off of the IP holders.
It's a form of slavery when someone profits off the long, hard, concentrated work of others without adequately reimbursing them.
That's why Robin Hood-style piracy is fine, but corporate piracy is not. I downloaded a Ghibli movie because I couldn't afford the DVD. I didn't copy it onto VHS and then sell it via e-commerce to 1000 people.
AI companies grabbed IP and pulled in hundreds of thousands of customers with it, then collect those users' interactions on top of that and profit exponentially, while Ghibli, Square Enix et al. don't profit from people using more and more AI...
And most people are not "training" ML models... people are using copy machines that have already learned, to compensate for their own unwillingness to put effort into things.
A lot of us have been there, and enough of us decided to move beyond it and get better at being human, i.e. evolve, rather than get cozy in some sub-singularity. Some didn't and won't, and they are the easiest to profit from.
It is absolutely infringement if you paint a picture in Ghibli style. You may just have a fair use defense if the infringement is for a personal, noncommercial, educational, etc. purpose.
Fair use is a defense to infringement, like self defense is a defense to homicide. If you infringe but are noncommercial, it is more likely to be ruled fair use. If Disney did a Ghibli style ripoff for their next movie, that is clearly not fair use.
OpenAI is clearly gaining significant material benefits from their models being able to infringe Ghibli style.
> It is absolutely infringement if you paint a picture in Ghibli style.
Of course it isn't, because by this twisted logic every piece of art is inspired by what came before; you could claim Ghibli is just a derivative of its predecessors, and then nobody has any copyright at all...
Human learning is transformative. It takes time. Time, not money, is the criterion for value to human life. Humans can re-draw in styles they see, and the law sees it as transformative work. Humans draw conclusions from learning and apply them to other fields - that's natural intelligence. If AI helps humans make more new, transformative work, not the same characters as the studios, not the same plots, then the studios should not try to rent-extract. Otherwise someone will try to do it to stick figures and every other style, and you'd have to teach toddlers the rules for each style before you give them a crayon. The only real impact will be that hype and styles "in fashion" will just play out faster, and in the meantime the studios get their name sealed in human minds for anything done in their style - which is more immortality than abuse.
There's certainly a difference with AI. Not only can it reproduce exactly the same characters - in fact I doubt you can train the style without this ability - it puts those characters within reach of crayon-age children at a professional level. This crosses the line between copying and enabling IP theft, in my mind. While I don't agree with our current IP laws, the ease and rapidity of this seems like a real problem. How long before some kid in a third-world country can produce a feature-length Ghibli-esque movie and distribute it online? And how much will this dilute their brand, etc.? Is it really so much to ask the AI companies to filter their training sets?
Exploring the scenarios and corner cases is how rules should be written, just like any code.
In this case, anything produced commercially, and anything produced with AI period, should always be disclosed.
Since at this stage we can often tell when something is AI (though not everyone can, and not always), especially food images at a restaurant, for me that immediately downgrades the quality or value of a product. That's going to be the natural human response. And users of the tech will likely be lumped in with the very poor attempts, downgrading the value of anyone who uses it. That is natural payback for trying to go commercial.
However, in the hobbyist space - the space where humans learn - what AI will also do is expand the creative space massively: people will get to iterate much faster with their own styles, and new styles will emerge. Just like the invention of writing and publishing - the original writers were people with tremendous time and resource privilege on their hands, but the art of writing would never have bloomed if it didn't become available to anyone over time. Humans then draw higher-order conclusions and insights from the abundance, even if it takes energy for filtering.
That said, abuse in the form of pretending something generated is real, or taking credit for generated work as one's own, should be illegal. If you teach the moral compass along with the book, or build identification into the work itself, you will get a lot more authentic novelty even with AI tools.
Is it possible that the primary liability for OpenAI is trade dress? If you can produce things in (for example) the style of a Studio Ghibli film, such that an ordinary consumer can’t tell if the source is Studio Ghibli or AI, is that actionable? I feel like I see copyright concerns all the time with AI but rarely is trademark discussed.
(not a lawyer) This is the exact opposite of how it works, at least in the US.
Copyright: covers works on publication. Registering it allows for seeking of statutory damages.
Trademark: covers defining characteristics. This can be muddy since defining characteristics are not necessarily the same as style.
It can be especially confusing when certain things become characteristic of a genre. This is mostly what transformer and diffusion models capture: the strongest weights reflect what's most common in the training data. You get a lot of em dashes and heroes in colorful outfits, but these don't constitute a violation on their own in any modern model, unless the operator of the model goes out of their way to produce one.
You wouldn't steal a handbag.
You wouldn't steal a television.
You wouldn't steal a DVD.
Downloading someone's content for AI training is stealing.
Stealing is a crime.
really, really happy that someone is calling out data-harvesting for what it really is.
Hah, hilarious that this is being used unironically. It was a lame anti-piracy ad that older people put out, and it was widely mocked by Millennials in the early 2000s.
We've always known we should harvest the internet for absolutely everything. If we don't, it's fine: we can squabble about our IP, and China will just ingest the entire Internet, make a model out of it, then release the Ghiblifier, and we'll all download it or run it on Openrouter. You can already download Hunyuan Image 3.0, and it's just as good as OpenAI's image generation, if not better. What's Japan going to do about that?
Also, for the record, I would absolutely download a handbag, or a television, or a DVD, or a car, if I could. I'd be pumping out Louis Vuittons and iPhones for my kids all day long, and driving a Lambo because why not?
Reefer Madness. Satan in vinyl. DnD panic. Downloading as piracy.
I was once accused of being a pirate back then because I was talking about downloading in a chatroom.
What was I downloading? Linux. Probably Debian. It's the same kind of nonsense as people going around accusing everyone whose process they don't understand of using AI. I'm surprised no one has come after me for making flame fractals.
Probably not as much a generational thing ("old people", versus "Millennials" or really, Gen X at that time), as just a tone-deaf shaming attempt by our corporate overlords.
Why not go straight for the goal and just download the Lambo?
Copyright/patent/intellectual property protection for more than 25 years is illogical, in my personal weird opinion. I think it grinds innovation to a halt and only serves to generate money. In other news, I would gladly receive a lot of money forever just because I invented something ages ago. I'm only a humble capitalist human. :)
OpenAI will just fork out a bunch of money and settle everything, because that is how money works.
The difference is that you're making copies of something:
Scenario 1: I take your baguette. Your hand is empty. You starve.
Scenario 2: I take your baguette recipe. You still have a baguette recipe. You continue to live.
Scenario 3: I take your baguette recipe and publish it. Your customers leave you. You starve.
Copying someone's IP can also impact you economically if your financial model depends on you being the only distributor of copies of something.
Should we enforce the protection of people's right to have monopoly of distribution of intellectual property?
Or should we accept that in reality, copies are free and distribution monopolies only exist in inefficient markets?
It seems totally right to protect people's intellectual property.
But information wants to be free.
It's a dilemma. Do we side with what feels right, or with what's real?
I don't get it. Did you mean that "It seems totally right to protect people's intellectual property" follows from "your financial model depends on you being the only distributor of copies of something"?
It is accepted, within limits, for humans to do transformative work, but it has not yet been established what the limits for AIs are, primarily (IMO): 1. whether the work is transformative or not, and 2. whether the scale of transformation/distribution changes the nature of the work.
Embedding other people's work in a vector space, then sampling from the distribution at a different point in the vector space, is not a central member of the "transformative" category. The justifications for allowing transformative uses do not apply to it.
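(A toy sketch of the process the parent comment describes, just for concreteness; the encode/decode functions here are hypothetical stand-ins, not how any real model actually works:)

  # Toy sketch: "embed works into a vector space, then sample the distribution
  # at a different point". encode/decode are hypothetical placeholders.
  import numpy as np

  rng = np.random.default_rng(0)

  def encode(work):
      # Hypothetical encoder: flatten an 8x8 "work" into a 64-d latent vector.
      return work.reshape(-1)

  def decode(latent):
      # Hypothetical decoder: a real model would map latents back to images.
      return latent.reshape(8, 8)

  works = [rng.random((8, 8)) for _ in range(100)]   # stand-ins for training works
  latents = np.stack([encode(w) for w in works])     # embed into vector space
  mean = latents.mean(axis=0)
  cov = np.cov(latents, rowvar=False)
  new_point = rng.multivariate_normal(mean, cov)     # sample at a different point
  output = decode(new_point)                         # an "output" derived purely from the inputs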
Any type of art is inspired by the art of others. It's the simplicity with which you can now generate "art" that is the issue. Stealing artists' work while also making it harder than ever for them to make a living is a deeply ethical issue. AI "artists" and "art" disgust me. Art is a skill you build over your whole life; taking the shortcut because you're unwilling to learn the craft is deeply insulting to real artists.
Good thing traditional art is still somewhat safe from this.
Thankfully, this is making it easier to leave highly addicting online platforms as I boycott AI content of any form.
This is an example of why analogies make for bad-quality reasoning. The first three goods have a marginal cost, so stealing them causes direct financial harm. Digital goods have zero marginal cost, so there is no automatic harm. You may still wish for it to be classified as stealing. The law may even agree with you that it is stealing. But it doesn't change the fact that this analogy is useless for the purposes of reasoning.
Depends. They definitely didn't build "the streets" they plaster with ads, but if it's their channel or their website, it's up to us to pay for blockers, put in the effort to do it ourselves, or use one of the tools built by generous contributors who crave user-defined spaces instead of "enforce and conform", ("VC"-)demand-based spaces.
"In 1710, the British Parliament passed a piece of legislation entitled An Act for the Encouragement of Learning. It became known as the Statute of Anne, and it was the world’s first copyright law. Copyright protects and regulates a piece of work - whether that's a book, a painting, a piece of music or a software programme. It emerged as a way of balancing the interests of authors, artists, publishers, and the public in the context of evolving technologies and the rise of mechanical reproduction. Writers and artists such as Alexander Pope, William Hogarth and Charles Dickens became involved in heated debates about ownership and originality that continue to this day - especially with the emergence of artificial intelligence. With:
Lionel Bently, Herchel Smith Professor of Intellectual Property Law at the University of Cambridge
Will Slauter, Professor of History at Sorbonne University, Paris
Katie McGettigan, Senior Lecturer in American Literature at Royal Holloway, University of London.
1) Download Ghibli film material from an arrrr site. Extract frames using ffmpeg. Pay someone $1 per week in Nigeria to add metadata for each (or some) frame(s).
2) ???
3) Profit
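(For what it's worth, the frame-extraction step really is that trivial; a minimal sketch, assuming a hypothetical local file input_film.mkv and ffmpeg installed on the machine:)

  # Minimal sketch: use ffmpeg to sample one frame per second into numbered PNGs.
  import pathlib
  import subprocess

  pathlib.Path("frames").mkdir(exist_ok=True)
  subprocess.run(
      ["ffmpeg", "-i", "input_film.mkv", "-vf", "fps=1", "frames/%06d.png"],
      check=True,
  )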
It's possible that they purchased the movies (although definitely without the proper licensing; buying a DVD allows for personal use, not training a commercial model), or maybe they simply pirated them.
It's also possible that models' entire understanding of the aesthetic comes from screenshots of the movie. Even if OpenAI didn't feed in each frame of the movie, they definitely fed in lots of images from the web, some of which were almost certainly movie screenshots.
I think AI seriously breaks down the “transformative work” condition used to determine fair use. The work is clearly being transformed, but it also doesn’t seem fair.
These models critically depend on many hours of artistic effort during training/prompting, but don’t require any additional effort by the new distributors/consumers. On top of that, proper attribution is often impossible.
None of the training data was originally drawn by OpenAI. OpenAI also actively monetizes that work.
Painting in Ghibli style is only infringement if you copy their characters. If you make your own character and story and are merely replicating the Ghibli style, it is OK. Style is not copyrightable.
https://arstechnica.com/gadgets/2025/04/you-wouldnt-steal-a-...
Also counts as "downloading someone's content" - at least partially
We will eventually need that policeman's helmet as a means of retaliation /s