That said, trademark laws like life of the author + 95 years are absolutely absurd. The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property. The reasoning being that if you don't allow people to exclude 3rd party copying, then the primary party will assumedly not receive compensation for their creation and they'll never create.
Even in the case where the above is assumed true, the length of time that a protection should be afforded should be no more than the length of time necessary to ensure that creators create.
There are approximately zero people who decide they'll create something if they're protected for 95 years after their death but won't if it's 94 years. I wouldn't be surprised if it was the same for 1 year past death.
For that matter, this argument extends to other criminal penalties, but that's a whole other subject.
> The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property.
That was the original purpose. It has since been coopted by people and corporations whose incentives are to make as much money as possible by monopolizing valuable intangible "property" for as long as they can.
And the chief strategic move these people have made is to convince the average person that ideas are in fact property. That the first person to think something and write it down rightfully "owns" that thought, and that others who express it or share it are not merely infringing copyright, they are "stealing."
This plan has largely worked, and now the average person speaks and thinks in these terms, and feels it in their bones.
>the average person speaks and thinks in these terms,
(Trademarks aside) Even more surprising to me is how everyone seems concerned about the studios making enough money?! As if they should make any money at all. As if it is up to us to create a profitable game for them.
If they all go bankrupt today I won't lose any sleep over it.
People also try to make a living selling bananas and apples. Should we create an elaborate scheme for them to make sure they survive? Their product is actually important to have. Why can't they own the exclusive right to sell bananas similarly? If anyone can just sell apples it would hurt their profit.
It was long ago, but that is how things used to work. We do still have taxi medallions in some places, and all kinds of legalized monopolies like them.
Perhaps there is some sector where it makes sense but I can't think of it.
If you want to make a movie you can just run a crowdfunder, like Roberts Space Industries did.
It's been a US-led project for the benefit of American corporations.
If I were running the trade emergency room in any European state right now, I'd have "stop enforcing US copyright" up there next to "reciprocal tariffs".
We were close to your viewpoint being the popular one, but sadly many (most?) independent content creators are so overtaken by fear of AI that they've done a 180. The same people who learned by tracing references to sell fanart of a copyrighted franchise (not complaining, I spend thousands on such things) accuse AI of stealing when it glances at their own work. We're entering a new golden age of creative opportunity and they respond by switching sides to the philosophy of intellectual property championed by Disney and Oracle (except for those companies' ironic use of AI themselves..).
There's also a moral issue at play:
To safeguard the interests of a few publishers (sometimes the creators, but they can easily end up with a shitty deal), you remove the freedom of the entire population to copy the same idea.
You need a central structure, funded by everyone's taxes, which enforces a contract that almost none of the infringers ever signed.
That's appalling. I hope that with this AI wave we'll get rid of copyright altogether.
> There are approximately zero people who decide they'll create something if they're protected for 95 years after their death but won't if it's 94 years.
I’m sure you’re right for individual authors who are driven by a creative spark, but for, say, movies made by large studios, the length of copyright is directly tied to the value of the movie as an asset.
If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years, and considerably more valuable than an asset that generates revenue for 20 years.
The value of the asset is in turn directly linked to how much the studio is willing to pay for that asset. They will invest more money in a film they can milk for 120 years than one that goes public domain after 20.
Would studios be willing to invest $200m+ in movie projects if their revenue was curtailed by a shorter copyright term? I don’t know. Probably yes, if we were talking about 120->70. But 120->20? Maybe not.
A dramatic shortening of copyright terms is something of a referendum on whether we want big-budget IP to exist.
In a world of 20 year copyright, we would probably still have the LOTR books, but we probably wouldn’t have the LOTR movies.
> If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years, and considerably more valuable than an asset that generates revenue for 20 years.
Not so, because of net present value.
The return from investing in normal stocks is ~10%/year, which is to say ~670% over 20 years, because of compounding interest. Another way of saying this is that $1 in 20 years is worth ~$0.15 today. A dollar in 30 years is worth ~$0.05 today. A dollar in 40 years is worth ~$0.02 today. As a result, if a thing generates the same number of dollars every year, the net present value of the first 20 years is significantly more than the net present value of all the years from 20-120 combined, because money now or soon from now is worth so much more than money a long time from now. And that's assuming the revenue generated would be the same every year forever, when in practice it declines over time.
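A quick back-of-the-envelope check of that claim (the 10%/year discount rate is the stock-return figure assumed above; the flat $1/year revenue stream is a hypothetical):

```python
# Present value of $1/year of revenue at a 10% discount rate.
# Compares the first 20 years against years 21-120 combined,
# illustrating why distant copyright years add almost no value today.

def npv(first_year, last_year, rate=0.10):
    """PV of $1 received at the end of each year in [first_year, last_year]."""
    return sum(1 / (1 + rate) ** t for t in range(first_year, last_year + 1))

print(f"PV of years 1-20:   ${npv(1, 20):.2f}")    # ~ $8.51
print(f"PV of years 21-120: ${npv(21, 120):.2f}")  # ~ $1.49
```

Even with revenue held flat forever, everything after year 20 is worth less than a fifth of the first 20 years; with realistically declining revenue the tail matters even less.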
The reason corporations lobby for copyright term extensions isn't that they care one bit about extended terms for new works. It's because they don't want the works from decades ago to enter the public domain now, and they're lobbying to make the terms longer retroactively. But all of those works were already created and the original terms were sufficient incentive to cause them to be.
> I’m sure you’re right for individual authors who are driven by a creative spark, but for, say, movies made by large studios, the length of copyright is directly tied to the value of the movie as an asset.
That would be fine, if the studios didn't want to have it both ways. They want to retain full copyright control over their "asset", but they also use Hollywood Accounting [1] to both avoid paying taxes and cheat contributors that have profit-sharing agreements.
If studios declare that they made a loss on producing and releasing something to get a tax break, the copyright term for that work should be reduced to 10 years tops.
IIRC, for almost all works that bring in any money at all to their creators, the vast majority of that money comes in the first handful of years after creation. Sure, the big names you know retain value longer, but those are a minuscule fraction of works.
Make copyright last for a fixed term of 25 years with optional 10-year renewals up to 95 years on an escalating fee schedule (say, $100k for the first decade and doubling every subsequent decade) and people—and studios—would have essentially the same incentive to create as they do now, and most works would get into the public domain far sooner.
There would probably be fewer entirely lost works as well, if you had firmer deposit requirements for works with extended copyrights (using the revenue from the extensions to fund preservation), with other works entering the public domain soon enough that they'd be less likely to be lost before that happened.
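The renewal arithmetic above can be sketched out (the $100k base, 25-year fixed term, and doubling schedule are the hypothetical numbers from the comment, not any real fee structure):

```python
# Hypothetical escalating copyright-renewal fees: a fixed 25-year term,
# then optional 10-year renewals up to a 95-year cap, starting at $100k
# and doubling for each subsequent decade.

def renewal_schedule(base_fee=100_000, fixed_term=25, renewal=10, max_term=95):
    schedule = []  # (total years of protection, fee paid for that renewal)
    years, fee = fixed_term, base_fee
    while years + renewal <= max_term:
        years += renewal
        schedule.append((years, fee))
        fee *= 2
    return schedule

for years, fee in renewal_schedule():
    print(f"extend to {years} years: ${fee:,}")
# Seven renewals reach the 95-year cap; keeping a work under copyright
# the whole way costs $100k * (2**7 - 1) = $12.7M in total fees.
```

Only works still earning serious money would justify the later renewals, so most works would lapse into the public domain after 25 or 35 years.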
The Fellowship of the Ring, the first of Peter Jackson's LOTR movies released in 2001, made $887 million in its original theatrical run (on a $93 million budget). It would absolutely still have been made if copyright was only 20 years. And now it would be in the public domain!
For movies in particular the tail is very thin: only a very few 50-year-old movies are ever watched. Was any commercial movie ever financed without a view to making a profit in its box office/initial release?
> If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years
Movies, for instance, make most of their revenue in the two weeks following their theatrical release. Beyond that, the people who wanted to see it have already seen it, and the others don't care.
I'd argue it's similar for other art forms, even music. The gain at the very end of the copyright lifetime is extremely marginal and doesn't influence spending decisions, which are mostly measured on a return horizon of at most 10 years.
> If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years, and considerably more valuable than an asset that generates revenue for 20 years
Due to the fairly high cost of capital right now, pretty much anything more than 5 years away is irrelevant. 10 years max, even for insanely high returns on investment.
Seems to me most of that inflated budget is needed for the entertainment role of films, not the art in them, which a low budget can often stimulate rather than inhibit. In which case nothing of importance would be lost by a drastic shortening of copyright terms.
OK? So we wouldn't have $100m movies, the vast majority of which are forgotten about in a few months. I don't think a $100m movie is ten times better than a $10m one, so I think I'd be fine with movies with much smaller budgets, if they meant that LotR (the books) are now in the public domain for everyone to enjoy.
If movies had a payoff curve like rents, this would be more true, but they're cultural artifacts that decay in relevance precipitously after release, and more permanently after a few decades, where they become "dated", outside a few classics.
Trademark isn't copyright; those are two different things. Trademarks can be renewed roughly every 10 years [1] until the end of time and are about protecting a brand. Copyright, meanwhile, lasts for the "life of the author plus 70 years. For anonymous works, pseudonymous works, or works made for hire, the copyright term is 95 years from the year of first publication or 120 years from creation, whichever comes first." [2]
Is copyright too long? Yes. Is it only that long to protect large media companies? Yes. But I would argue that AI companies are pushing the limits of fair use, if not outright violating it. Fair use is an affirmative defense, by the way, meaning AI companies have to go to court to argue that what they're doing is okay. They don't just get to wave their hands and say everything is fine because "what we're doing is fair use and we get to scrape the world's entire creative output for our own profit."
While I think the laws are broken, I also get why companies fight so hard to defend their IP: it is valuable, and they've built empires around it. But at some point, we have to ask: are we preserving culture or just hoarding it?
Missing is why the laws fight so hard, too; and missing is the opposite of what we have (in the West), namely blatant and rampant piracy. The other extreme is really bad, with creators of every type pirated by organized crime. There was no video game or movie market in Eastern Europe, for example; you can't compete against large-scale piracy.
Which is to say, preservation without awareness of the threat will look like hoarding. A secondary question is to what extent that threat is real. Without having seen what truly rampant piracy looks like, I think it would be easy to be ignorant of it.
I agree with your specific point. I think copyright should last for 50 years. That would protect long-running franchises like Harry Potter or the Dresden Files but still allow things like Spider-Man and Aretha Franklin to be freely reimagined.
However, most of the examples in the article should still be protected. Lara Croft first appeared 29 years ago. Any copyright system should still be protecting IP from 1996.
Regardless, it's not just copyright laws that are at issue here. This is reproducing human likenesses - like Harrison Ford's - and integrating them into new works.
So if I want to make an ad for a soap company, and I get an AI to reproduce a likeness of Harrison Ford, does that mean I can use that likeness in my soap commercials without paying him? I can imagine any court asking "how is this not simply laundering someone's likeness through a third party which claims to not have an image / filter / app / artist reproducing my client's likeness?"
All seemingly complicated scams come down to a very basic, obvious, even primitive grift. Someone somewhere in a regulatory capacity is either fooled or paid into accepting that no crime was committed. It's just that simple. This, however, is so glaring that even a child could understand the illegality of it. I'm looking forward to all of Hollywood joining the cause against the rampant abuse of IP by Silicon Valley. I think there are legal grounds here to force all of these models to be taken offline.
Additionally, "guardrails" that prevent 1:1 copies of film stills from being reprinted are clearly not only insufficient, they are evidence that the pirates in this case seek to obscure the nature of their piracy. They are the evidence that generative AI is not much more than a copyright laundering scheme, and the obsession with these guardrails is evidence of conspiracy, not some kind of public good.
> So if I want to make an ad for a soap company, and I get an AI to reproduce a likeness of Harrison Ford, does that mean I can use that likeness in my soap commercials without paying him?
No, you can't! But it shouldn't be the tool that prohibits this. You are not allowed to use existing images of Harrison Ford for your commercial and you also will be sued into oblivion by Disney if you paint a picture of Mickey Mouse advertising your soap, so why should it be any different if an AI painted this for you?
> This is reproducing human likenesses - like Harrison Ford's - and integrating them into new works.
The thing is though, there is also a human requesting that. The prompt was chosen specifically to get that result on purpose.
The corporate systems are trying to prevent this, but if you use any of the local models, you don't even have to be coy. Ask it for "photo of Harrison Ford as Indiana Jones" and what do you expect? That's what it's supposed to do. It does what you tell it to do. If you turn your steering wheel to the left, the car goes to the left. It's just a machine. The driver is the one choosing where to go.
Human appearance does not have enough dimensions to make likeness a viable thing to protect; I don't see how you could do that without say banning Elvis impersonators.
That said:
> I'm looking forward to all of Hollywood joining the cause against the rampant abuse of IP by Silicon Valley.
If you're framing the sides like that, it's pretty clear which I'm on. :)
Trademarks are very different from copyrights. In general, they never expire as long as the fees are paid in the region, and products bearing the mark are still manufactured. Note, legal firms will usually advise people that one can't camp on Trademarks like other areas of IP law.
For example, Aspirin is known as an adult dose of acetylsalicylic acid by almost every consumer, and is trademarked to prevent some clandestine chemist in their garage from making a similarly branded harmful/ineffective substance that damages the goodwill Bayer earned with customers over decades of business.
Despite popular entertainment folklore, sustainable businesses actually want consumer goodwill associated with their products and services.
I agree that in many WIPO countries copyright law has essentially degenerated into monetized censorship by removing the proof-of-fiscal-damages criterion (like selling works you don't own). However, trademarks ensure your corporate mark or product name is not hijacked in local markets for questionable purposes.
Every castle needs a moat, but I do kind of want to know more about the presidential "Tesler". lol =)
You are missing a bunch of edge cases, and the law is all about edge cases.
An artist who works professionally has family members, family members who may depend on them.
Suppose they die young, having become popular just before their death, and their extremely popular works immediately enter the public domain. Their family sees nothing from work that is absolutely being commercialized (publishing and creation generally spawn two separate copyrights).
GP's not missing those edge cases; GP recognizes those edge cases are themselves a product of IP laws.
Those laws are effectively attempting to make information behave like physical objects, by giving it simulated "mass" through a rent-seeking structure. The case you describe is where this simulated physical substrate stops behaving like a physical substrate, and the choice was made to paper over that with extra rules, so that family can inherit and profit from the IP of a dead creator, much as they would inherit the physical products of a dead craftsman and profit from selling them.
It's a valid question whether or not this is taking things too far, just for the sake of making information conform to rules of markets for physical goods.
If copyright law is reduced to say, 20 years from the date of creation (PLENTY of time for the author to make money), then it's irrelevant if he dies young or lives until 100.
You seem to talk about fairness. Copyright law isn't supposed to be fair, it's supposed to benefit society. On one side you have the interest of the public to make use of already created work. On the other side is the financial incentive to create such work in the first place.
So the question to ask is whether the artist would have created the work and published it, even knowing that it isn't an insurance to their family in case of their early death.
I'm not sure IP should be used as life insurance; there are already many public and private tools for that.
Also, it seems you assume inheritance is a good thing. Most people do think the same on a personal level; however, when we observe the effect on a society, the outcome is concentration of wealth in a minority and barriers to wealth changing hands, i.e. barriers to the "American dream".
In writing, many people only become popular after they are dead.
I heard this explained once as: the art in some writing is capturing how people feel in a situation that is still too new for many to want to pay to have it illustrated to them. But once the newness has passed, and people understand or want to understand, then they enjoy reading about it.
As a personal example, I could enjoy movies about unrequited love before and long after I experienced it firsthand, but not during or for years after. People may not yet have settled feelings about an event until afterward, and not be willing to “pick at the scab”.
The other, more statistical explanation is that it just takes a lot of attempts to capture an idea or feeling and a longer window of time represents more opportunities to hit upon a winning formula. So it’s easier to capture a time and place afterward than during.
For what it's worth, this is a uniquely American view of copyright:
> The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property.
In Europe, particularly France, copyright arose for a very different reason: to protect an author's moral rights as the creator of the work. It was seen as immoral to allow someone's work -- their intellectual offspring -- to be meddled with by others without their permission. Your work represents you and your reputation, and for others to redistribute it is an insult to your dignity.
That is why copyrights in Europe started with much longer durations than they did in the United States, and the US has gradually caught up. It is not entirely a Disney effect, but a fundamental difference in the purpose of copyright.
> There are approximately zero people who decide they'll create something if they're protected for 95 years after their death but won't if it's 94 years. I wouldn't be surprised if it was the same for 1 year past death.
I think life of creator + some reasonable approximation of family members life expectancy would make sense. Content creators do create to ensure their family's security in some cases, I would guess.
You’re thinking of copyright law, not trademark law. Which serves a different function. If you’re going to critique something it’s useful to get your facts right.
Really? Because there are a lot of very stupid laws out there that should absolutely be broken, regularly, flagrantly, by as many people as possible, for the sake of making enforcement completely null and pointless. Why write in neutered corporate-speak when commenting on a casual comment thread, while also (correctly) pointing out the absurdity of certain laws?
this reminds me of the time I tried to use a prepaid lawyer to research some copyright issues
they went down the rabbit hole on trademark law, which is not only unrelated to copyright, it's handled by an entirely different federal agency, the Patent and Trademark Office
gave me a giggle and the last time I used cheapo prepaid lawyers
This. It seems the situation is controversial now because of the beloved Studio Ghibli IP, but I want to see the Venn diagram of the people outraged at this, the people clamoring for overbearing Disney character protection when the copyright expired, and the people siding with Palworld in the Nintendo lawsuit.
It seems most of the discussion is emotionally loaded; people have lost the plot of both why copyright exists and what it protects, and are twisting it to protect whatever media they like most.
But one cannot pick and choose where laws apply. The better questions are how we plug the gap that lets a multibillion-dollar corp get away with distribution at scale, and what derivative work and personal use mean in an image-generation-as-a-service world. Artists should be very careful what they wish for here, because I bet there's a lot of commission work verging on style transfer and personal use.
It was always us and them. They are either "noble" or just rich; we are the rest, the peasants. We pay for everything and we give everything to the rich ones. That's it.
I honestly don't see why anyone who isn't JK Rowling should be allowed to coopt her world. I probably feel even more strongly for worlds/characters I like.
New rule: you get to keep your belongings for 20 years and that's it. Then they are taken away. Every dollar you made twenty or more years ago, every asset you acquired, gone.
That oughtta be enough to incentivize people to work and build their wealth.
I think you mean "20 years after you die". That seems like a perfectly rational way to deal with people who want to be buried in their gold and jewels in a pyramid.
So every year I go to a swap meet and try to exchange my “expiring” belongings for something similar to reset the clock? Or, they get taken away by force? By someone who has meticulous records of all my stuff?
Not sure if anyone is interested in this story, but at the height of the Pokemon Go craze I noticed there were no shirts for the different factions in the game; I can't remember what they were called, but it was something like Teamred. I set up an online shop just to sell a red shirt with the word on it. The next day my whole shop was taken offline for potential copyright infringement.
What I found surprising is that I didn't even have one sale. Somehow someone had notified Nintendo AND my shop had been taken down, for merch that didn't even exist on the market and that, if I remember correctly, didn't even have any imagery on it or anything trademarkable - even if it was clearly meant for Pokemon Go fans.
I'm not bitter, I just found it interesting how quick and ruthless they were. Like, bros, I didn't even get a chance to make a sale. (And no, I don't think I infringed anything.)
I asked Sora to turn a random image of my friend and me into Italian plumbers. Nothing more, just the two words "Italian plumbers". The created picture was not shown to me because it was in violation of OpenAI's content policy. I then asked it just to turn the guys in the picture into plumbers, but I asked this in Italian.
Without me asking for it, Sora put me in overalls and gave each of us a baseball cap. If I asked Sora to put mustaches on us, one of us received a red shirt as well, unprompted. Starting from the same pic, if I asked it to put one letter on each baseball cap - guess what, the letters chosen were M and L.
These extra guardrails are not really useful given such a strong built-in bias towards copyright infringement in these image-creation tools.
Should it mean that, with time, Dutch pictures will have to include tulips, Italian plumbers will have to wear overalls and baseball caps marked L and M, etc., just to not confuse AI tools?
You (and the article, etc.) show what a lot of the "work" in AI is going into at the moment - creating guardrails against generating something that might get them in trouble, and/or customizing weights and prompts behind the scenes to generate something other than the obvious. I'm reminded of when Google's image generator came out and this customization bit them in the ass by generating a black pope and Asian vikings. AI tools don't do what you wish they did; they do what you tell them and what they were taught, and if 99% of their training set associates Mario with prompts for Italian plumbers, that's what you'll get.
A possible business (which probably already exists) is setting up truly balanced training sets: thousands of unique images that match the idea of an Italian plumber, with maybe 1% Mario. But that won't be nearly as big as a training set scraped from the whole internet, nor will it be cheap to build by comparison.
OpenAI will eventually have competition for GPT 4o image generation.
They'll eventually have open source competition too. And then none of this will matter.
OmniGen is a good start, just woefully undertrained.
The VAR paper is open, from ByteDance, and supposedly the architecture this is based on.
Black Forest Labs isn't going to rest on their laurels. Their entire product offering just became worthless and lost traction. They're going to have to answer this.
I'd put $50 on ByteDance releasing an open-source version of this within three months.
Many years ago I tried to order a t-shirt with the PostScript tiger on the front from Spreadshirt.
It was removed over copyright claims before I could order a single item myself. After some back and forth they restored it for a day and let me buy one item for personal use.
My point is: Doesn't have to be Sony, doesn't have to be a snitch - overzealous anticipatory obedience by the shop might have been enough.
>After some back and forth they restored it for a day and let me buy one item for personal use.
I used Spreadshirt to print a panel from the Tintin comic on a T-shirt, and I had no problem ordering it (it shows Captain Haddock moving through the jungle, swatting away the mosquitoes harassing him, giving himself a big slap on the face, and saying, 'Take that, you filthy beasts!').
Twenty years ago, I worked for Google AdWords as a customer service rep. This was still relatively early days, and all ads still had some level of manual human review.
The big advertisers had all furnished us a list of their trademarks and acceptable domains. Any advertiser trying to use one that wasn’t on the allow-list had their ad removed at review time.
I suspect this could be what happened to you. If the platform you were using has any kind of review process for new shops, you may have run afoul of pre-registered keywords.
Well the teams in Pokemon Go aren't quite as generic as Teamred: they are Team Instinct, Team Mystic, and Team Valor. Presumably Nintendo has trademarks on those phrases, and I’m sure all the big print on demand houses have an API for rights-holders to preemptively submit their trademarks for takedowns.
Nintendo is also famously protective of their IP. To give another anecdote: I just bought one of the emulator handhelds on AliExpress that are all the rage these days, and while they don't advertise it, they usually come preloaded with a buttload of ROMs. Mine did, including a number of Nintendo properties — but nary an Italian plumber to be found. The Nintendo fear runs deep.
Allen Pan, a YouTuber "maker" who runs in the circle of people behind OpenSauce, was a contestant on a Discovery Channel show that was trying to force the success of Mythbusters by "finding the next Mythbusters!" He lost, but it was formative for him because those contestants were basically all inspired by the original show.
A couple years ago, he noticed that the merchandise trademark for "Mythbusters" had lapsed, so he bought it. He, now the legal owner of the trademark Mythbusters for apparel, made shirts that used that trademark.
Discovery sent him a cease and desist and threatened to sue. THEY had let the trademark lapse. THEY had lost the right to the trademark, by law. THEY were in the wrong, and a lawyer agreed.
But good fucking luck funding that legal battle. So he relinquished the trademark.
Hilariously, a friend of Allen Pan's from the same "finding the next Mythbuster" show, Kyle Hill, is friendly enough with Adam Savage to talk to him occasionally, and supposedly the actual Mythbusters themselves were not sympathetic to Allen's trademark claim.
Not sure where you get that from. He doesn't say that in the cease-and-desist announcement video (though it's worded in a way that lets viewers speculate as much). Also, every time it's brought up on the podcast he's on, it very much seems like he knows he doesn't have legal ground to stand on.
Just because someone lets a trademark lapse doesn't mean you can rightfully snatch it up with a new registration (as the new registration may be granted in error). It would be a different story if he had bought the trademark rights before they lapsed.
Allen Pan makes entertaining videos, but one shouldn't base one's understanding of how trademarks work on them.
I think the problem there was being dependent on someone who is a complete pushover, doesn't bother to check for false positives and can kill your business with a single thought.
>Redbubble is a significant player in the online print-on-demand marketplace. In fiscal year 2023, it reported having 5 million customers who purchased 4.8 million different designs from 650,000 artists. The platform attracts substantial web traffic, with approximately 30.42 million visits in February 2025.
Print-on-demand services definitely have terms of service allowing them to take anything down. You're playing by their rules, and your $2 revenue per t-shirt and very few overall sales are not worth the potential millions in legal fees to fight for you.
Sure, from the suing party who sent a DMCA takedown request to your webhost, who forward it to you and give you 24 hours before they take it down. Nobody wants to actually go to court over this stuff because of how expensive it is.
Redbubble was sued by the Pokemon Company two months prior to the launch of Pokemon Go, so you picked the exact wrong company and moment to try this with
Idk, the models generating what are basically 1:1 copies of the training data from pretty generic descriptions feels like a severe case of overfitting to me. What use is a generative model that just regurgitates the input?
I feel like the less advanced generations, maybe even because of their limitations in model size, were better at coming up with something that at least feels new.
In the end, other than for copyright-washing, why wouldn't I just use the original movie still/photo in the first place?
People like what they already know. When they prompt something and get a realistic looking Indiana Jones, they're probably happy about it.
To me, this article is further proof that LLMs are a form of lossy storage. People attribute special quality to the loss (the image isn't wrong, it's just got different "features" that got inserted) but at this point there's not a lot distinguishing a seed+prompt file+model from a lossy archive of media, be it text or images, and in the future likely video as well.
The craziest thing is that AI seems to have gathered some kind of special status that earlier forms of digital reproduction didn't have (even though those 64kbps MP3s from napster were far from perfect reproductions), probably because now it's done by large corporations rather than individuals.
If we're accepting AI-washing of copyright, we might as well accept pirated movies, as those are re-encoded from original high-resolution originals as well.
Probably the majority of people in the world already "accept pirated movies". It's just that, as ever, nobody asks people what they actually want. Much easier to tell them what to want, anyway.
To a viewer, a human-made work and an AI-generated one both amount to a series of stimuli that someone else made and you have no control over; and when people pay to see a movie, generally they don't do it with the intent to finance the movie company to make more movies -- they do it because they're offered the option to spend a couple hours watching something enjoyable. Who cares where it comes from -- if it reached us, it must be good, right?
The "special status" you speak of is due to AI's constrained ability to recombine familiar elements in novel ways. 64kbps MP3 artifacts aren't interesting to listen to, while a high-novelty experience such as learning a new culture or a new discipline isn't as accessible (and also comes with expectations that passive consumption doesn't have).
Either way, I wish the world gave people more interesting things to do with their brains than make a money, watch a movies, or some mix of the two with more steps. (But there isn't much of that left -- hence the concept of a "personal life" as reduced to breaking one's own and others' cognitive functioning then spending lifetimes routing around the damage. Positively fascinating /s)
Tried Flux.dev with the same prompts [0] and it seems actually to be a GPT problem. Could be that in GPT the text encoder understands the prompt better and just generates the implied IP, or could be that a diffusion model is just inherently less prone to overfitting than a multimodal transformer model.
DALL-E 3 already uses a model trained on synthetic data that takes the prompt and augments it. This might lead to the overfitting. It could also be (and this might be the simpler explanation) that it just looks up the right file via RAG.
If it overfits on the whole internet then it’s like a search engine that returns really relevant results with some lossy side effect.
A recent benchmark on the unseen 2025 Math Olympiad shows that none of the models can actually problem-solve. They all, accidentally or on purpose, had prior solutions in the training set.
You probably mean the USAMO 2025 paper. They updated their comparison with Gemini 2.5 Pro, which did get a nontrivial score. That Gemini version was released five days after USAMO, so while it's not entirely impossible for the data to be in its training set, it would seem kind of unlikely.
What if the word "generic" were added to a lot of these image prompts? "generic image of an intergalactic bounty hunter from space" etc.
Certainly there's an aspect of people using the chat interface like they use google: describe xyz to try to surface the name of a movie. Just in this case, we're doing the (less common?) query of: find me the picture I can vaguely describe; but it's a query to a image /generating/ service, not an image search service.
Generic doesn't help. I was using the new image generator to try and make images for my Mutants and Masterminds game (it's basically D&D with superheroes instead of high fantasy), and it refuses to make most things citing that they are too close to existing IP, or that the ideas are dangerous.
So I asked it to make 4 random and generic superheroes. It created Batman, Supergirl, Green Lantern, and Wonder Woman. Then at about 90% finished it deleted the image and said I was violating copyright.
I doubt the model you interact with actually knows why the babysitter model rejects images, but it claims to know why, and that leads to some funny responses. Here is its response to me asking for a superhero with a dark bodysuit, a purple cape, a mouse logo on their chest, and a spooky mouse mask on their face.
> I couldn't generate the image you requested because the prompt involved content that may violate policy regarding realistic human-animal hybrid masks in a serious context.
Idk, a couple of the examples might be generic enough that you wouldn't expect a very specific movie character. But most of the prompts make it extremely clear which movie character you would expect to see, and I would argue that the chat bot is working as expected by providing that.
Just because I'm thinking of an Indiana Jones-like character doesn't mean I want literally Indiana Jones. If I wanted Indiana Jones I could just grab a scene from the movie.
> I feel like the less advanced generations, maybe even because of their limitations in terms of size, were better at coming up with something that at least feels new.
Ironically that's probably because the errors and flaws in those generations at least made them different from what they were attempting to rip off.
Yeah, I've been feeling the same. When a model spits out something that looks exactly like a frame from a movie just because I typed a generic prompt, it stops feeling like “generative” AI and more like "copy-paste but with vibes."
To my knowledge this happens when that single frame is overrepresented in its training data. For instance, variations of the same movie poster or screenshot may appear hundreds of times. Then the AI concludes that this is just a unique human cultural artifact, like the Mona Lisa (which I would expect many human artists could also reproduce from memory).
I'm not sure if this is a problem with overfitting. I'm ok with the model knowing what Indiana Jones or the Predator looks like with well remembered details, it just seems that it's generating images from that knowledge in cases where that isn't appropriate.
I wonder if it's a fine tuning issue where people have overly provided archetypes of the thing that they were training towards. That would be the fastest way for the model to learn the idea but it may also mean the model has implicitly learned to provide not just an instance of a thing but a known archetype of a thing. I'm guessing in most RLHF tests archetypes (regardless of IP status) score quite highly.
What I'm kind of concerned about is that these images will persist and will be reinforced by positive feedback. Meaning, an adventurous archeologist will be the same very image, forever. We're entering the epitome of dogmatic ages. (And it will be the same corporate images and narratives, over and over again.)
> I'm ok with the model knowing what Indiana Jones or the Predator looks like with well remembered details,
ClosedAI doesn't seem to be OK with it, because they are explicitly censoring characters of more popular IPs. Presumably as a fig leaf against accusations of theft.
Probably over-representation in the training data causing overfitting. Because it's trained on data scraped straight from the Internet in raw proportions, it's going to be opinionated about human culture (Bart Simpson is popular, so there are lots of images of him; Ori is less well known, so there are fewer). Ideally it would train 1:1 on everything, but that would involve _so_ much work pruning the training data to give each category roughly equal weight.
The prompt didn't exactly describe Indiana Jones though. It left a lot of freedom for the model to make the "archeologist" e.g. female, Asian, put them in a different time period, have them wear a different kind of hat etc.
It didn't though; it just spat out what is basically a 1:1 copy of some Indiana Jones promo shoot. Nowhere did the prompt ask for it to look like Harrison Ford.
What would most humans draw when you describe such a well-known character by their iconic elements? I think if you deviated and acted the pedant about it, people would assume you're just trying to prove a point or being obnoxious.
One thing I would say, it's interesting to consider what would make this not so obviously bad.
Like, we could ask AI to assess the physical attributes of the characters it generated, then ask it to permute some of those attributes. Generate some random tweaks: ok, but brawny, short, and of a different descent. Do similarly with some clothing colors. Change the game. Hit the "random character" button on the physical attributes a couple of times.
There was an equally shattering recent incident for me, one less about IP theft (and as someone who thinks IP is itself incredibly ripping off humanity and should be vastly scoped down, it's important to me not to rest my arguments on IP violations). Having trouble finding it, don't remember the right keywords, but it was an article about how AI has a "default guy" type that it uses everywhere, a super generic personage that it would use repeatedly. It was so distasteful.
The nature of 'AI as compression', as giving you the most median answer is horrific. Maybe maybe maybe we can escape some of this trap by iterating to different permutations, by injecting deliberate exploration of the state spaces. But I still fear AI, worry horribly when anyone relies on it for decision making, as it is anti-intelligent, uncreative in extreme, requiring human ingenuity to budge off its rock of oppressive hypernormality that it regurgitates.
Are you telling me that our culture should be deprived of the idea of Indiana Jones and the feelings that character inspires in all of us forever just because a corporation owns the asset?
Indiana Jones is 44 years old. When are we allowed to remix, recreate and expand on this like humanity has done since humans first started sitting down next to a fire and telling stories?
Mandrake: Colonel... that Coca-Cola machine. I want you to shoot the lock off it. There may be some change in there.
Guano: That's private property.
Mandrake: Colonel! Can you possibly imagine what is going to happen to you, your frame, outlook, way of life, and everything, when they learn that you have obstructed a telephone call to the President of the United States? Can you imagine? Shoot it off! Shoot! With a gun! That's what the bullets are for, you twit!
Guano: Okay. I'm gonna get your money for ya. But if you don't get the President of the United States on that phone, you know what's gonna happen to you?
Mandrake: What?
Guano: You're gonna have to answer to the Coca-Cola company.
I guess we all have to answer to the Walt Disney company.
"idea of Indiana Jones and the feelings that character inspires in all of us forever just because a corporation owns the asset" is very different from the almost exact image of Indiana Jones.
Not forever, but until 70 years after the death of the creator under current law in most jurisdictions. I definitely think the exact terms of copyright should be revisited; a lot of usages should be allowed, say, 50 years after a work is published. But that needs to be agreed upon and converted into law. Until then, one should expect everyone, especially large corporations, to stick to the law.
It's kind of funny that everyone is harping this way or that about IP.
This is a kind of strange comment for me to read, because by its tone it sounds like a rebuttal? But by content, it agrees with a core thing I said about myself:
> and as someone who thinks IP is itself incredibly ripping off humanity & should be vastly scoped down, it's important to me to not rest my arguments on IP violations
What's just such a nightmare to me is that the tech is so normative. So horribly normative. This article shows that AI again and again reproduced only the known, only the already imagined. It's not that it's IP theft that rubs me so wrong; it's that it's entirely bankrupt and uncreative, so very stuck. All this power! And yet!
You speak at what disgusts me yourself!
> When are we allowed to remix, recreate and expand on this like humanity has done
The machine could be imagining all kinds of Indianas, of all different remixed, recreated, expanded forms. But these pictures are 100% anything but that. They're Indiana frozen in carbonite. They are the driest, saddest prison of the past. And they call into question the validity of AI entirely, showing something grievously missing.
But I can hire an artist and ask him to draw me a picture of Indiana Jones, he creates a perfect copy and I hang it on my fridge. Where did I (or the artist) violate any copyright (or other) laws? It is the artist that is replaced by the AI, not the copyrighted IP.
> But I can hire an artist and ask him to draw me a picture of Indiana Jones,
Sure, assuming the artist has the proper license and franchise rights to make and distribute copies. You can go buy a picture of Indy today that may not be printed by Walt Disney Studios but by some other outfit or artists.
Or, you mean if the artist doesn't have a license to produce and distribute Indiana Jones images? Well they'll be in trouble legally. They are making "copies" of things they don't own and profiting from it.
Another question is whether that's practically enforceable.
> Where did I (or the artist) violate any copyright (or other) laws?
When they took payment and profited from making unauthorized copies.
> It is the artist that is replaced by the AI, not the copyrighted IP.
Exactly, that's why LLMs and the companies which create them are called "theft machines" -- they are reproducing copyrighted material. Especially the ones charging for "tokens". You pay them, they make money and produce unauthorized copies. Show that picture of Indy to a jury and I think it's a good chance of convincing them.
I am not saying this is good or bad, I just see this having a legal "bite" so to speak, at least in my pedestrian view of copyright law.
That does infringe copyright...you're just unlikely to get in trouble for it. You might get a cease and desist if the owner of the IP finds out and can spare a moment for you.
Presumably the artist is a human who directly or indirectly paid money to view a film containing an archaeologist with the whip.
I don't think this is about reproduction as much as how you got enough data for that reproduction. The riaa sent people to jail and ruined their lives for pirating. Now these companies are doing it and being valued for hundreds of billions of dollars.
The artist is violating copyright by selling you that picture. You can’t just start creating and distributing pictures of a copyrighted property. You need a license from the copyright holder.
You also can’t sell a machine that outputs such material. And that’s how the story with GenAI becomes problematic. If GenAI can create the next Indiana Jones or Star Wars sequel for you (possibly a better one than Disney makes, it has become a low bar of sorts), I think the issue becomes obvious.
Nobody can prevent you from drawing a photorealistic picture of Indy, or taking a photo of him from the internet and hanging it on your fridge. Or asking a friend to do it for you. And let's be honest -- because nobody is looking -- said friend could even charge you a modest sum to draw a realistic picture of Indy for you to hang on your fridge; yes, it's "illegal" but nobody is looking for this kind of small-potatoes infringement.
I think the problem is when people start making a business out of this. A game developer could think "hey, I can make a game with artwork that looks just like Ghibli!", where before he wouldn't have anyone with the skills or patience to do this (see: the 4-second scene that took a year to make), now he can just ask the Gen AI to make it for them.
Is it "copyright infringement"? I dunno. Hard to tell, to be honest. But from an ethical point of view, it seems odd. Before, you actually required someone to take the time and effort to copy the source material; now it's an automated, scalable process that does this and much more, faster and without getting tired. "Theft at scale", maybe not such small potatoes anymore.
--
edit: nice, downvotes. And in the other thread people were arguing HN is such a nice place for dissenting opinions.
I mean... If I go to Google right now and do an image search for "archeologist adventurer who wears a hat and uses a bullwhip," the first picture is a not-even-changed image of Indiana Jones. Which I will then copy and paste into whatever project I'm working on without clicking through to the source page (usually because the source page is an ad-ridden mess).
Perhaps the Internet itself is the hideous theft machine, and AI is just the most efficient permutation of user interface onto it.
(Incidentally, if you do that search, you will also, hilariously, turn up images of an older gentleman dressed in a brown coat and hat who is clearly meant to be "The Indiana Jones you got on Wish" from a photo-licensing site. The entire exercise of trying to extract wealth via exclusive access to memetic constructs is a fraught one).
The key difference is that the google example is clearly copying someone elses work and there are plenty of laws and norms that non-billionaires need to follow. If you made a business reselling the image you copied you would expect to get in trouble and have to stop. But AI companies are doing essentially the same thing in many cases and being rewarded for it.
The hypocrisy is much of the problem. If we're going to have IP laws that severely punish people and smaller companies for reselling the creative works of others without any compensation or permission then those rules should apply to powerful well-connected companies as well.
I hate how common it is to advance a position by just stating a conclusion as if it were a fact. You keep repeating the same thing over and over until it seems like a consensus has been reached, instead of making an actual argument reasoned from first principles.
There is no theft here. Any copyright claim would be flimsier than software patents. I love Studio Ghibli (including $500/seat festival tickets), but it's the heart and the detail that make their films what they are. You cannot clone that, just some surface similarity. If that's all you like about the movies... you really missed the point.
Imagine if in early cinema someone had tried to claim the mustachioed villain, the ditsy blonde, or the dumb jock? These are just tropes and styles. Quality work goes much, much deeper, and that cannot be synthesized. I can AI-generate a million engagement rings, but I cannot pick the perfect one that fits you and your partner's love story.
PS- the best work they did was "When Marnie was There". Just fitted together perfectly.
The engagement ring is a good example object, but I feel it serves the opposite argument better.
If engagement rings were as ubiquitous and easy to generate as Ghibli images have become, they would lose their value very quickly -- not even just in the monetary sense, but the sentimental value across the market would crash for this particular trinket. It wouldn't be about picking the right one anymore, it would be finding some other thing that better conveys status or love through scarcity.
If you have a 3d printer you'd know this feeling where abundance diminishes the value of something directly. Any pure plastic items you have are reduced to junk very quickly once you know you can basically have anything on a whim (exceptions for things with utility, however these are still printable). If I could print 30 rings a day, my partner wouldn't want any of them as a show of my undying love. Something more special and rare and thoughtful would have to take its place.
This isn't meant to come across as shallow in any way; it's just classic supply and demand relating to non-monetary value.
> I hate how common it is to advance a position by just stating a conclusion as if it were a fact. You keep repeating the same thing over and over until it seems like a consensus has been reached, instead of making an actual argument reasoned from first principles.
I have interacted with the parent account before and have actually looked at the number of times they used words like awful, horrific, etc., and I definitely agree; one should not have such a strong attachment to such words that they feel the need to keep using (or rather, abusing) them endlessly.
Interesting proposal. Maybe if race or sex or height or eye color etc. isn't given, and the LLM determines there's no reason not to randomize in this case (avoid black founding fathers), the backend could tweak its own prompt by programmatically inserting a few random traits.
If you describe an Indiana Jones character, but no sex, 50/50 via internal call to rand() that it outputs a woman.
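The idea above could be sketched roughly like this. Everything here is hypothetical for illustration: the trait categories, keyword lists, and the `randomize_prompt` helper are made up, not any vendor's actual backend.

```python
import random

# Hypothetical trait table: for each category, the values the backend may
# inject, and the keywords that mean the user already specified that trait.
TRAIT_CATEGORIES = {
    "sex":   (["a man", "a woman"], ["man", "woman", "male", "female"]),
    "build": (["slender", "brawny", "of average build"],
              ["slender", "brawny", "muscular", "thin", "stocky", "average build"]),
}

def randomize_prompt(prompt: str, rng=random) -> str:
    """Append a randomly chosen value for each trait the prompt leaves open."""
    lowered = prompt.lower()
    extras = [rng.choice(values)
              for values, keywords in TRAIT_CATEGORIES.values()
              if not any(k in lowered for k in keywords)]
    return prompt + (", " + ", ".join(extras) if extras else "")
```

With a 50/50 list for sex, this gives exactly the coin flip described above; a prompt that already pins a trait down passes through untouched.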
Ah, I thought I knew this account from somewhere. It seems surprisingly easy to figure out what account is commenting just based on the words used, as I've commented that only a few active people on this site seem to use such strong words as shown here.
So if it’s a theft machine, how is the answer to try teaching it to hide the fact that it’s stealing by changing its outputs? That’s like a student plagiarizing an essay and then swapping some words with a thesaurus pretending that changes anything.
Wouldn’t the more appropriate solution in the case of theft be to remunerate the victims and prevent recidivism?
Instead of making it “not so obviously bad” why not just… make it good? Require AI services to either prove that 100% of their training corpus is either copyright free or properly licensed, or require them to compensate copyright holders for any infringing outputs.
(below is my shallow res, maybe naive?)
That might inject a ton of $ into "IP", doing further damage to the creative commons.
How can we support remix culture for humans, while staving off ultimately-destructive AI slop?
Maybe copyleft / creative-commons licenses w/ explicit anti-AI prohibitions? Tho that could have bad ramifications too.
ALL of this makes me kind of uncomfortable and sad, I want more creativity and fewer lawyers.
Looks to me like OpenAI drew their guardrails somewhere along a financial line. Generate a Mickey Mouse or a Pikachu? Disney and The Pokemon Company will sue the sh*t out of you. Ghibli? Probably not powerful enough to risk a multimillion-dollar, years-long court battle.
They did, but the rights expired. GKIDS now has the theatrical and home video rights to Studio Ghibli films in the US (except for Grave of the Fireflies).
I think the cat is out of the bag when it comes to generative AI, the same way various LLMs for programming have been trained even on codebases they had no business using, yet nobody has stopped them and nobody will. It's the same as what's going to happen with deepfakes and such as the technology inevitably gets better.
> Hayao Miyazaki’s Japanese animation company, Studio Ghibli, produces beautiful and famously labor intensive movies, with one 4 second sequence purportedly taking over a year to make.
It makes me wonder though: is it more valuable to spend a year on a scene that most people won't pay much attention to (artists will understand and appreciate it, maybe pause, rewind, replay, and examine the details; the casual viewer will just enjoy it at a glance), or to use tools in addition to your own skills to knock it out of the park in a month and make more great things?
A bit how digital art has clear advantages over paper, while many revere the traditional art a lot, despite it taking longer and being harder. The same way how someone who uses those AI assisted programming tools can improve their productivity by getting rid of some of the boilerplate or automate some refactoring and such.
AI will definitely cheapen the art of doing things the old way, but that’s the reality of it, no matter how much the artists dislike it. Some will probably adapt and employ new workflows, others stick to tradition.
There's a very clear difference between cheap animation and Ghibli. Anyone can see it.
In the first case, there's only one static image for an entire scene, scrolled and zoomed, and if they're feeling generous, there's an overlay with another static image that slides over the first at a constant speed and direction. It feels dead.
In the second case, each frame is different. There's chaotic motions such as wind and there's character movement with a purpose, even in the background, there's always something happening in the animation, there's life.
There is a huge middle ground between "static image with another sliding static image" and "1 year of drawing per 4 second Ghibli masterpiece". From your comment is almost looks like you're suggesting that you have to choose either one or the other, but that is of course not true.
I bet that a good animator could make a really impressive 4-second scene if they were given a month, instead of a year. Possibly even if they were given a day.
So if we assume that there is not a binary "cheap animation vs masterpiece" but rather a sort of spectrum between the two, then the question is: at what point do enough people stop seeing the difference, that it makes economic sense to stay at that level, if the goal is to create as much high-quality content as possible?
Fundamentally I think this comes down to answering the question of "why are you creating this?".
There are many valid answers.
Maybe you want to create it to tell a story, and you have an overflowing list of stories you're desperate to tell. The animation may be a means to an end, and tools that help you get there sooner mean telling more stories.
Maybe you're pretty good at making things people like and you're in it for the money. That's fine, there are worse ways to provide for your family than making things people enjoy but aren't a deep thing for you.
Maybe you're in it because you love the act of creating it. Selling it is almost incidental, and the joy you get from it comes down to spending huge amounts of time obsessing over tiny details. If you had a source of income and nobody ever saw your creations, you'd still be there making them.
These are all valid in my mind, and suggest different reasons to use or not to use tools. Same as many walks of life.
I'd get the weeds gone in my front lawn quickly if I paid someone to do it, but I quite enjoy pottering around on a sunny day pulling them up and looking back at the end to see what I've achieved. I bake worse bread than I could buy, and could buy more and better bread I'm sure if I used the time to do contracting instead. But I enjoy it.
On the other hand, there are things I just want done and so use tools or get others to do it for me.
One positive view of AI tools is that it widens the group of people who are able to achieve a particular quality, so it opens up the door for people who want to tell the story or build the app or whatever.
A negative side is the economics where it may be beneficial to have a worse result just because it's so much cheaper.
> the same way various LLMs for programming have been trained even on codebases they had no business using, yet nobody has stopped them and nobody will
I remember the discourse on HN a few years ago when GitHub Copilot came out, and well, people were missing the fact that the GitHub terms and conditions (T&C) explicitly permits usage of code for analytic purposes (of which training is implicitly part), so it turns out that if one did not want such training, they should not have hosted on GitHub in the first place, because the website T&C was clear as day.
>It makes me wonder though: is it more valuable to spend a year on a scene that most people won't pay much attention to, or to use tools in addition to your own skills to knock it out of the park in a month and make more great things?
If they didn't spend a year on it they wouldn't be copied now.
Oooh, those guardrails make me angry. I get why they are there (don't poke the bear), but it doesn't make me overlook the self-serving hypocrisy involved.
Though I am also generally opposed to the notion of intellectual property whatsoever, on the basis that it doesn't seem to serve its intended purpose, and what good could be salvaged from its various systems can already be well represented with other existing legal concepts, e.g. deceptive behaviors being prosecuted as forms of fraud.
The problem is people at large companies creating these AI models, wanting the freedom to copy artists’ works when using it, but these large companies also want to keep copyright protection intact, for their regular business activities. They want to eat the cake and have it too. And they are arguing for essentially eliminating copyright for their specific purpose and convenience, when copyright has virtually never been loosened for the public’s convenience, even when the exceptions the public asks for are often minor and laudable. If these companies were to argue that copyright should be eliminated because of this new technology, I might not object. But now that they come and ask… no, they pretend to already have, a copyright exception for their specific use, I will happily turn around and use their own copyright maximalist arguments against them.
I don't care for this line of argument. It's like saying you can't hold the position that trespassing should be illegal while also holding that commercial businesses should be legally required to have public restrooms. Yes, both of these positions are related to land rights and the former is pro- while the latter is anti-, but it's a perfectly coherent set of positions. OpenAI can absolutely be anti-copyright in the sense of whether you can train an NN on copyrighted data and pro-copyright in the sense of whether you can make an exact replica of some data and sell it as your own, without that being hypocrisy territory. It does suggest they're self-interested, but you have to climb a mountain in Tibet to find anybody who isn't.
Arguments that make a case that NN training is copyright violation are much more compelling to me than this.
I guess the best explanation for what we're witnessing is the notion that "money talks", and sadly nothing more. To think that's all fair-use activists lacked in years past...
It's not just the guardrails, but the ham-fisted implementation.
Grok is supposed to be "uncensored", but there are very specific words you just can't use when asking it to generate images. It'll just flat out refuse or give an error message during generation.
But, again, if you go in a roundabout way and avoid the specific terms you can still get what you want. So why bother?
Is it about not wanting bad PR or avoiding litigation?
The implementation is what gets to me too. Fair enough that a company doesn't want their LLM used in a certain way. That's their choice, even if it's just to avoid getting sued.
How they then go about implementing those guardrails is pretty telling about their understanding of and control over what they've built, and their line of thinking. Clearly, at no point before releasing their LLMs onto the world did anyone stop and ask: hey, how do we deal with these things generating unwanted content?
Resorting to blocking certain terms in the prompts is like searching for keywords in spam emails. "Hey Jim, I got another spam email from that Chinese tire place" - "No worry boss, I've configured the mail server to just delete any email containing the words China or tire".
Some journalist should go to a few of these AI companies and start asking questions about the long term effectiveness and viability of just blocking keywords in prompts.
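The spam-filter analogy can be made concrete. A toy sketch, assuming a purely hypothetical substring blocklist (no vendor publishes their actual filter): exact terms get caught, while a trivial roundabout description sails straight through.

```python
# Hypothetical keyword guardrail: reject prompts containing blocked substrings.
# The blocklist entries here are illustrative assumptions, not any real filter.
BLOCKED_TERMS = {"indiana jones", "mickey mouse"}

def is_blocked(prompt: str) -> bool:
    """Reject the prompt if it contains any blocked substring (case-insensitive)."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

print(is_blocked("Indiana Jones in a temple"))                  # caught
print(is_blocked("an archaeologist with a whip and a fedora"))  # not caught
```

The second prompt is exactly the "roundabout way" described above: the model still knows what an archaeologist with a whip and a fedora looks like, so the filter blocks the words without blocking the concept.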
Everyone is talking about theft - I get it, but there's a subtler point being made here.
Current generation of AI models can't think of anything truly new. Everything is simply a blend of prior work. I am not saying that this doesn't have economic value, but it means these AI models are closer to lossy compression algorithms than they are to AGI.
The following quote by Sam Altman from about 5 years ago is interesting.
"We have made a soft promise to investors that once we build this sort-of generally intelligent system, basically we will ask it to figure out a way to generate an investment return."
That's a statement I wouldn't even dream about making today.
Novelty in one medium arises from novelty in others, from shifts in the external environment.
We got brass bands with brass instruments, synth music from synths.
We know therefore, necessarily, that there can be nothing novel from an LLM -- it has no live access to novel developments in the broader environment. If synths were invented after its training, it could never produce synth music (and so on).
The claim here is trivially falsifiable, and so obviously so that credulous fans of this technology bake it into their misunderstanding of novelty itself: have an LLM produce content on developments that had yet to take place at the time of its training. It obviously cannot do this.
Yet an artist which paints with a new kind of black pigment can, trivially so.
Disregarding the (common!) assumption that AGI will consist of one monolithic LLM instead of dozens of specialized ones, I think your comment fails to invoke an accurate, consistent picture of creativity/"truly new" cognition.
To borrow Chomsky's framework: what makes humans unique and special is our ability to produce an infinite range of outputs that nonetheless conform to a set of linguistic rules. When viewed in this light, human creativity necessarily depends on the "linguistic rules" part of that; without a framework of meaning to work within, we would just be generating entropy, not meaningful expressions.
Obviously this applies most directly to external language, but I hope it's clear how it indirectly applies to internal cognition and--as we're discussing here--visual art.
TL;DR: LLMs are definitely creative, otherwise they wouldn't be able to produce semantically meaningful, context-appropriate language in the first place. For a more empirical argument, just ask yourself how a machine could generate a poem or illustration depicting [CHARACTER_X] in [PLACE_Y] doing [ACTIVITY_Z] in [STYLE_S] without being creative!
This may not be apparent to an English speaker, as the language has a rather fixed set of words, but in German, where creating new words is common, the lack of linguistic creativity is obvious.
As an example, let's talk about "vibe coding" - It's a new term describing heavy LLM usage in programming, usually associated with Generation Z.
If I ask an LLM to generate a German translation for "vibe coder", it comes up with the neutral "Vibe-Programmierer". When asked to be more creative, it came up with "Schwingungsschmied" ("vibration smith"?) - What?
I personally came up with the following words:
* Gefühlsprogrammierer ("A programmer, that focuses on intuition and feeling.")
* Freischnauzeprogrammierer ("Free-mouthed programmer - highlighting straightforwardness and the creative expression of vibe coding." - colloquial)
Interestingly, LLMs can describe both these terms; they just can't create them naturally. I tested this on all major LLMs and the results were similar. Generating a picture of a "vibe coder" also highlights more of a moody atmosphere instead of the Generation Z aspects that are associated with it on social media nowadays.
> a machine that can generate a poem or illustration depicting [CHARACTER_X] in [PLACE_Y] doing [ACTIVITY_Z] in [STYLE_S] without being creative
Your example disproves itself; that's a madlib. It's not creative, it's just rolling the dice and filling in the blanks. Complex die and complex blanks are a difference of degree only, not creativity.
That said, trademark laws like life of the author + 95 years are absolutely absurd. The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property. The reasoning being that if you don't allow people to exclude 3rd party copying, then the primary party will assumedly not receive compensation for their creation and they'll never create.
Even in the case where the above is assumed true, the length of time that a protection should be afforded should be no more than the length of time necessary to ensure that creators create.
There are approximately zero people who decide they'll create something if they're protected for 95 years after their death but won't if it's 94 years. I wouldn't be surprised if it was the same for 1 year past death.
For that matter, this argument extends to other criminal penalties, but that's a whole other subject.
That was the original purpose. It has since been coopted by people and corporations whose incentives are to make as much money as possible by monopolizing valuable intangible "property" for as long as they can.
And the chief strategic move these people have made is to convince the average person that ideas are in fact property. That the first person to think something and write it down rightfully "owns" that thought, and that others who express it or share it are not merely infringing copyright, they are "stealing."
This plan has largely worked, and now the average person speaks and thinks in these terms, and feels it in their bones.
(Trademarks aside) Even more surprising to me is how everyone seems concerned about the studios making enough money?! As if they should make any money at all. As if it is up to us to create a profitable game for them.
If they all go bankrupt today I won't lose any sleep over it.
People also try to make a living selling bananas and apples. Should we create an elaborate scheme for them to make sure they survive? Their product is actually important to have. Why can't they own the exclusive right to sell bananas similarly? If anyone can just sell apples it would hurt their profit.
It was long ago, but that is how things used to work. We do still have taxi medallions in some places, and all kinds of legalized monopolies like them.
Perhaps there is some sector where it makes sense but I can't think of it.
If you want to make a movie you can just run a crowdfunder like Roberts Space Industries.
If I were running the trade emergency room in any European state right now, I'd have "stop enforcing US copyright" up there next to "reciprocal tariffs".
You need a central structure, funded by everyone's taxes, to enforce a contract that almost none of the infringers have signed.
That's appalling. I hope with this AI wave we'll get rid of copyright altogether.
Deleted Comment
I’m sure you’re right for individual authors who are driven by a creative spark, but for, say, movies made by large studios, the length of copyright is directly tied to the value of the movie as an asset.
If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years, and considerably more valuable than an asset that generates revenue for 20 years.
The value of the asset is in turn directly linked to how much the studio is willing to pay for that asset. They will invest more money in a film they can milk for 120 years than one that goes public domain after 20.
Would studios be willing to invest $200m+ in movie projects if their revenue was curtailed by a shorter copyright term? I don’t know. Probably yes, if we were talking about 120->70. But 120->20? Maybe not.
A dramatic shortening of copyright terms is something of a referendum on whether we want big-budget IP to exist.
In a world of 20 year copyright, we would probably still have the LOTR books, but we probably wouldn’t have the LOTR movies.
Not so, because of net present value.
The return from investing in normal stocks is ~10%/year, which is to say ~670% over 20 years, because of compounding interest. Another way of saying this is that $1 in 20 years is worth ~$0.15 today. A dollar in 30 years is worth ~$0.05 today. A dollar in 40 years is worth ~$0.02 today. As a result, if a thing generates the same number of dollars every year, the net present value of the first 20 years is significantly more than the net present value of all the years from 20-120 combined, because money now or soon from now is worth so much more than money a long time from now. And that's assuming the revenue generated would be the same every year forever, when in practice it declines over time.
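The arithmetic here is easy to check. A minimal sketch, assuming the ~10% discount rate and a flat $1/year revenue stream as stated above:

```python
# Net present value of $1/year received at the end of each year in a range,
# discounted at a constant annual rate. Illustrates why years 1-20 dominate
# years 21-120 under a ~10% discount rate.
def present_value(rate: float, start_year: int, end_year: int) -> float:
    """PV today of $1 received at the end of each year in [start_year, end_year]."""
    return sum(1 / (1 + rate) ** t for t in range(start_year, end_year + 1))

r = 0.10
first_20 = present_value(r, 1, 20)    # ~8.51
next_100 = present_value(r, 21, 120)  # ~1.49
print(f"first 20 years: ${first_20:.2f}, years 21-120: ${next_100:.2f}")
```

So under these assumptions, the first 20 years of a perpetual revenue stream are worth roughly five to six times the following 100 years combined, before even accounting for revenue declining over time.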
The reason corporations lobby for copyright term extensions isn't that they care one bit about extended terms for new works. It's because they don't want the works from decades ago to enter the public domain now, and they're lobbying to make the terms longer retroactively. But all of those works were already created and the original terms were sufficient incentive to cause them to be.
That would be fine, if the studios didn't want to have it both ways. They want to retain full copyright control over their "asset", but they also use Hollywood Accounting [1] to both avoid paying taxes and cheat contributors that have profit-sharing agreements.
If studios declare that they made a loss on producing and releasing something to get a tax break, the copyright term for that work should be reduced to 10 years tops.
[1] https://en.wikipedia.org/wiki/Hollywood_accounting
Make copyright last for a fixed term of 25 years with optional 10-year renewals up to 95 years on an escalating fee schedule (say, $100k for the first decade and doubling every subsequent decade) and people—and studios—would have essentially the same incentive to create as they do now, and most works would get into the public domain far sooner.
There would probably be fewer entirely lost works as well, if you had firmer deposit requirements for works with extended copyrights (using the revenue from the extensions to fund preservation), with other works entering the public domain soon enough that they were less likely to be lost before that happened.
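The proposed fee schedule is easy to tally up. A sketch, using the figures from the comment above ($100k for the first renewal decade, doubling each subsequent decade, 25-year base term, 95-year cap) - these numbers are the comment's hypothetical, not any actual law:

```python
# Hypothetical escalating copyright-renewal fees: free 25-year base term,
# then optional 10-year renewals up to 95 years, each costing double the last.
BASE_TERM = 25
RENEWAL_LENGTH = 10
MAX_TERM = 95

def renewal_fees(base_fee: int = 100_000) -> list[int]:
    """Fee for each 10-year renewal decade between the base term and the cap."""
    n_renewals = (MAX_TERM - BASE_TERM) // RENEWAL_LENGTH  # 7 renewals
    return [base_fee * 2 ** i for i in range(n_renewals)]

fees = renewal_fees()
print(fees)        # seven fees, doubling each decade
print(sum(fees))   # total cost of holding copyright the full 95 years
```

Holding a work for the full 95 years would cost $12.7M under this schedule - trivial for a studio's tentpole franchise, prohibitive for the long tail of works nobody is still monetizing, which is exactly the sorting effect the proposal is after.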
Movies, for instance, make most of their revenue in the two weeks following their theatrical release. Beyond that, people who wanted to see it have already seen it, and the others don't care.
I'd argue it's similar for other art forms, even for music. The gain at the very end of the copyright lifetime is extremely marginal and doesn't influence spending decisions, which are mostly measured on a return basis of at most 10 years.
Due to the fairly high cost of capital right now, pretty much anything more than 5 years away is irrelevant. 10 years max, even for insanely high returns on investment.
Is copyright too long? Yes. Is it only that long to protect large media companies? Yes. But I would argue that AI companies are pushing the limits of fair use, if not violating it outright. Fair use is an affirmative defense, by the way, meaning that AI companies have to go to court to argue that what they are doing is okay. They don't just get to wave their hands, declare everything fine because it's "fair use", and scrape the world's entire creative output for their own profit.
Which is to say, preservation without awareness of the threat will look like hoarding. A secondary question is to what extent is that threat real? Without seeing what true rampant piracy looks like, I think it would be easy to be ignorant of the threat.
However, most of the examples in the article should still be protected. Lara Croft first appeared 29 years ago. Any copyright system should still be protecting IP from 1996.
Regardless, it's not just copyright laws that are at issue here. This is reproducing human likenesses - like Harrison Ford's - and integrating them into new works.
So if I want to make an ad for a soap company, and I get an AI to reproduce a likeness of Harrison Ford, does that mean I can use that likeness in my soap commercials without paying him? I can imagine any court asking "how is this not simply laundering someone's likeness through a third party which claims to not have an image / filter / app / artist reproducing my client's likeness?"
All seemingly complicated scams come down to a very basic, obvious, even primitive grift. Someone somewhere in a regulatory capacity is either fooled or paid into accepting that no crime was committed. It's just that simple. This, however, is so glaring that even a child could understand the illegality of it. I'm looking forward to all of Hollywood joining the cause against the rampant abuse of IP by Silicon Valley. I think there are legal grounds here to force all of these models to be taken offline.
Additionally, "guardrails" that prevent 1:1 copies of film stills from being reprinted are clearly not only insufficient, they are evidence that the pirates in this case seek to obscure the nature of their piracy. They are the evidence that generative AI is not much more than a copyright laundering scheme, and the obsession with these guardrails is evidence of conspiracy, not some kind of public good.
No, you can't! But it shouldn't be the tool that prohibits this. You are not allowed to use existing images of Harrison Ford for your commercial and you also will be sued into oblivion by Disney if you paint a picture of Mickey Mouse advertising your soap, so why should it be any different if an AI painted this for you?
The thing is though, there is also a human requesting that. The prompt was chosen specifically to get that result on purpose.
The corporate systems are trying to prevent this, but if you use any of the local models, you don't even have to be coy. Ask it for "photo of Harrison Ford as Indiana Jones" and what do you expect? That's what it's supposed to do. It does what you tell it to do. If you turn your steering wheel to the left, the car goes to the left. It's just a machine. The driver is the one choosing where to go.
That said:
> I'm looking forward to all of Hollywood joining the cause against the rampant abuse of IP by Silicon Valley.
If you're framing the sides like that, it's pretty clear which I'm on. :)
If you have to explain "laundering someone's likeness" to them maybe not, I think it's a frankly bizarre phrase.
For example, Aspirin is known as an adult dose of acetylsalicylic acid by almost every consumer, and is Trademarked to prevent some clandestine chemist in their garage making a similarly branded harmful/ineffective substance that damages the Goodwill Bayer earned with customers over decades of business.
Despite popular entertainment folklore, sustainable businesses actually want consumer goodwill associated with their products and services.
I agree that in many WIPO countries copyright law has essentially degenerated into monetized censorship by removing the proof-of-fiscal-damages criteria (like selling works you don't own). However, trademarks ensure your corporate mark or product name is not hijacked in local markets for questionable purposes.
Every castle needs a moat, but I do kind of want to know more about the presidential "Tesler". lol =)
https://www.youtube.com/watch?v=WFZUB1eJb34
An artist who works professionally has family members, family members who are dependent on them.
If they pass young, having become popular just before they pass, their extremely popular works are now public domain. Their family sees nothing from their work, which is absolutely being commercialized (publishing and creation generally spawn two separate copyrights).
Those laws are effectively attempting to make information behave as physical objects, by giving them simulated "mass" through a rent-seeking structure. The case you describe is where this simulated physical substrate stops behaving like physical substrate, and choice was made to paper over that with extra rules, so that family can inherit and profit from IP of a dead creator, much like they would inherit physical products of a dead craftsman and profit from selling them.
It's a valid question whether or not this is taking things too far, just for the sake of making information conform to rules of markets for physical goods.
So the question to ask is whether the artist would have created and published the work even knowing that it isn't insurance for their family in case of their early death.
Also, it seems you assume inheritance is a good thing. Most people think the same on a personal level; however, when we observe the effect on a society, the outcome is concentration of wealth in a minority and barriers to wealth changing hands - which means barriers to the "American dream".
I heard this explained once as the art in some writing is explaining how people feel in a situation that is still too new for many to want to pay to have it illustrated to them. But once the newness has passed, and people understand or want to understand, then they enjoy reading about it.
As a personal example, I could enjoy movies about unrequited love before and long after I experienced it firsthand, but not during or for years after. People may not yet have settled feelings about an event until afterward, and not be willing to “pick at the scab”.
The other, more statistical explanation is that it just takes a lot of attempts to capture an idea or feeling and a longer window of time represents more opportunities to hit upon a winning formula. So it’s easier to capture a time and place afterward than during.
Deleted Comment
> The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property.
In Europe, particularly France, copyright arose for a very different reason: to protect an author's moral rights as the creator of the work. It was seen as immoral to allow someone's work -- their intellectual offspring -- to be meddled with by others without their permission. Your work represents you and your reputation, and for others to redistribute it is an insult to your dignity.
That is why copyrights in Europe started with much longer durations than they did in the United States, and the US has gradually caught up. It is not entirely a Disney effect, but a fundamental difference in the purpose of copyright.
Are the origins the same when looking at other intellectual property like patents?
How did they deal with quoting and/or critiquing other's ideas? Did they allow limited quotation? What about parody and satire?
I think life of creator + some reasonable approximation of family members life expectancy would make sense. Content creators do create to ensure their family's security in some cases, I would guess.
Really? Because there are a lot of very stupid laws out there that should absolutely be broken - regularly, flagrantly, by as many people as possible - for the sake of making enforcement completely null and pointless. Why write in neutered corporate-speak while commenting on a casual comment thread, while also (correctly) pointing out the absurdity of certain laws?
they went down the rabbit hole on trademark laws, which are not only not copyright-related, they are handled by an entirely different federal agency, the Patent and Trademark Office
gave me a giggle and the last time I used cheapo prepaid lawyers
If the work is popular it will make plenty of money in that time. If it isn't popular, it probably won't make much more money after that.
If you torrent a movie right now, you'll be fined in many advanced countries.
But a huge corporation led by a sociopath scrapes the entire internet and builds a product with other people's work ?
Totally fine.
It seems most of the discussion is emotionally loaded; people have lost the plot of both why copyright exists and what copyright protects, and are twisting that to protect whatever media they like most.
But one cannot pick and choose where laws apply. The better questions, then, are how we plug the gap that lets a multibillion-dollar corp get away with distribution at scale, and what "derivative work" and "personal use" mean in an image-generation-as-a-service world. Artists should be very careful what they wish for here, because I bet there's a lot of commission work edging on style transfer and personal use.
Why am I wrong?
That oughtta be enough to incentivize people to work and build their wealth.
Anything more than that is unnecessary.
So how about we go back to how it used to be and just remove this entire concept.
You can own things you can't own an idea.
What problem are you trying to solve?
What I found surprising is I didn't even have one sale. Somehow someone had notified Nintendo AND my shop had been taken down - for selling merch that didn't even exist for the market, and if I remember correctly, it didn't even have any imagery on it or anything trademarkable, even if it was clearly meant for Pokémon Go fans.
I'm not bitter, I just found it interesting how quick and ruthless they were. Like, bros, I didn't even get a chance to make a sale. (And yes, I also don't think I infringed anything.)
A possible (and probably already existing) business is setting up truly balanced training sets: thousands of unique images that match the idea of an Italian plumber, with maybe 1% Mario. But that won't be nearly as big a training set as the whole internet, nor will it be cheap to build compared to just scraping the internet.
They'll eventually have open source competition too. And then none of this will matter.
OmniGen is a good start, just woefully undertrained.
The VAR paper is open, from ByteDance, and supposedly the architecture this is based on.
Black Forest Labs isn't going to sit on their laurels. Their entire product offering just became worthless and lost traction. They're going to have to answer this.
I'd put $50 on ByteDance releases an open source version of this in three months.
It was removed on Copyright claims before I could order one item myself. After some back and forth they restored it for a day and let me buy one item for personal use.
My point is: Doesn't have to be Sony, doesn't have to be a snitch - overzealous anticipatory obedience by the shop might have been enough.
I used Spreadshirt to print a panel from the Tintin comic on a T-shirt, and I had no problem ordering it (it shows Captain Haddock moving through the jungle, swatting away the mosquitoes harassing him, giving himself a big slap on the face, and saying, 'Take that, you filthy beasts!').
The big advertisers had all furnished us a list of their trademarks and acceptable domains. Any advertiser trying to use one that wasn’t on the allow-list had their ad removed at review time.
I suspect this could be what happened to you. If the platform you were using has any kind of review process for new shops, you may have run afoul of pre-registered keywords.
Nintendo is also famously protective of their IP: to give another anecdote, I just bought one of the emulator handhelds on Aliexpress that are all the rage these days, and while they don't advertise it they usually come preloaded with a buttload of ROMs. Mine did, including a number of Nintendo properties — but nary an Italian plumber to be found. The Nintendo fear runs deep.
A couple years ago, he noticed that the merchandise trademark for "Mythbusters" had lapsed, so he bought it. He, now the legal owner of the trademark Mythbusters for apparel, made shirts that used that trademark.
Discovery sent him a cease and desist and threatened to sue. THEY had let the trademark lapse. THEY had lost the right to the trademark, by law. THEY were in the wrong, and a lawyer agreed.
But good fucking luck funding that legal battle. So he relinquished the trademark.
Buy a walrus plushy cause it's funny: https://allen-pan-shop.fourthwall.com/en-usd/
Note the now "Myth Busted" shirts instead.
Hilariously, a friend of Allen Pan's from the same "Finding the next Mythbuster" show, Kyle Hill, is friendly enough with Adam Savage to talk to him occasionally, and supposedly the actual Mythbusters themselves were not sympathetic to Allen's trademark claim.
Not sure where you get that from. He doesn't say that in the cease-and-desist announcement video (though it's worded in a way that lets viewers speculate as much). Also, every time it's brought up on the podcast he's on, it very much seems like he knows he doesn't have legal ground to stand on.
Just because someone lets a trademark lapse doesn't mean you can rightfully snatch it up with a new registration (the new registration may be granted in error). It would be a different story if he had bought the trademark rights before they lapsed.
Allen Pan makes entertaining videos, but one shouldn't base one's understanding of how trademarks work on them.
https://www.suedbynintendo.com/
Is this correct? I would guess Nintendo has some automation/subscription to a service that handles this. I doubt it was some third party snitching.
I think the problem there was being dependent on someone who is a complete pushover, doesn't bother to check for false positives and can kill your business with a single thought.
For further info it was Redbubble.
>Redbubble is a significant player in the online print-on-demand marketplace. In fiscal year 2023, it reported having 5 million customers who purchased 4.8 million different designs from 650,000 artists. The platform attracts substantial web traffic, with approximately 30.42 million visits in February 2025.
Usually there are lawyers letters involved first?
https://www.smh.com.au/business/consumer-affairs/pokemon-hel...
I feel like the less advanced generations, maybe even because of their limitations in terms of size, were better at coming up with something that at least feels new.
In the end, other than for copyright-washing, why wouldn't I just use the original movie still/photo in the first place?
To me, this article is further proof that LLMs are a form of lossy storage. People attribute special quality to the loss (the image isn't wrong, it's just got different "features" that got inserted) but at this point there's not a lot distinguishing a seed+prompt file+model from a lossy archive of media, be it text or images, and in the future likely video as well.
The craziest thing is that AI seems to have gathered some kind of special status that earlier forms of digital reproduction didn't have (even though those 64kbps MP3s from napster were far from perfect reproductions), probably because now it's done by large corporations rather than individuals.
If we're accepting AI-washing of copyright, we might as well accept pirated movies, as those are re-encoded from original high-resolution originals as well.
A new MCU movie is released, its 60 second trailer posted on Youtube, but I don't feel like watching the movie because I got bored after Endgame.
Youtube has very strict anti-scraping techniques now, so I use deep-scrapper to generate the whole trailer from the thumbnail and title.
I use deep-pirate to generate the whole 3 hour movie from the trailer.
I use deep-watcher to summarize the whole movie in a 60 second video.
I watch the video. It doesn't make any sense. I check the Youtube trailer. It's the same video.
To a viewer, a human-made work and an AI-generated one both amount to a series of stimuli that someone else made and you have no control over; and when people pay to see a movie, generally they don't do it with the intent to finance the movie company to make more movies -- they do it because they're offered the option to spend a couple hours watching something enjoyable. Who cares where it comes from -- if it reached us, it must be good, right?
The "special status" you speak of is due to AI's constrained ability to recombine familiar elements in novel ways. 64k MP3 artifacts aren't interesting to listen to; while a high-novelty experience such as learning a new culture or a new discipline isn't accessible (and also comes with expectations that passive consumption doesn't have.)
Either way, I wish the world gave people more interesting things to do with their brains than make a money, watch a movies, or some mix of the two with more steps. (But there isn't much of that left -- hence the concept of a "personal life" as reduced to breaking one's own and others' cognitive functioning then spending lifetimes routing around the damage. Positively fascinating /s)
[0] https://imgur.com/a/wqrBGRF Image captions are the implied IP; I copied the prompts from the blog post.
A recent benchmark on the unseen 2025 Math Olympiad shows none of the models can problem-solve. They all, accidentally or on purpose, had prior solutions in the training set.
https://x.com/mbalunovic/status/1907436704790651166
Certainly there's an aspect of people using the chat interface like they use google: describe xyz to try to surface the name of a movie. Just in this case, we're doing the (less common?) query of: find me the picture I can vaguely describe; but it's a query to a image /generating/ service, not an image search service.
So I asked it to make 4 random and generic superheroes. It created Batman, Supergirl, Green Lantern, and Wonder Woman. Then at about 90% finished it deleted the image and said I was violating copyright.
https://imgur.com/a/eG6kmqu
I doubt the model you interact with actually knows why the babysitter model rejects images, but it claims to know why, which leads to some funny responses. Here is its response to me asking for a superhero with a dark bodysuit, a purple cape, a mouse logo on their chest, and a spooky mouse mask on their face.
> I couldn't generate the image you requested because the prompt involved content that may violate policy regarding realistic human-animal hybrid masks in a serious context.
Ironically that's probably because the errors and flaws in those generations at least made them different from what they were attempting to rip off.
I wonder if it's a fine tuning issue where people have overly provided archetypes of the thing that they were training towards. That would be the fastest way for the model to learn the idea but it may also mean the model has implicitly learned to provide not just an instance of a thing but a known archetype of a thing. I'm guessing in most RLHF tests archetypes (regardless of IP status) score quite highly.
ClosedAI doesn't seem to be OK with it, because they are explicitly censoring characters of more popular IPs. Presumably as a fig leaf against accusations of theft.
Hail their father, drip with sweat:
"Fukuyama had deceived us!
History has no end."
So the criminal party here would be OpenAI, since they are selling access to a service that generates copyright-infringing images.
Overfitting is if you didn't exactly describe Indiana Jones and then it still gave Indiana Jones.
It didn't though, it just spat out what is basically a 1:1 copy of some Indiana Jones promo shoot. No where did the prompt ask for it to look like Harrison Ford.
One thing I would say, it's interesting to consider what would make this not so obviously bad.
Like, we could ask AI to assess the physical attributes of the characters it generated. Then ask it to permute some of those attributes. Generate some random tweaks: ok, but brawny, short, and of a different descent. Do similarly on some clothing colors. Change the game. Hit the "random character" button on the physical attributes a couple of times.
There was an equally shattering, less-IP-theft recent incident for me (and as someone who thinks IP itself is incredibly ripping off humanity and should be vastly scoped down, it's important to me not to rest my arguments on IP violations). I'm having trouble finding it, don't remember the right keywords, but it was an article about how AI has a "default guy" it uses everywhere, a super generic personage it would reuse repeatedly. It was so distasteful.
The nature of 'AI as compression', of giving you the most median answer, is horrific. Maybe, maybe, maybe we can escape some of this trap by iterating to different permutations, by injecting deliberate exploration of the state space. But I still fear AI, and worry horribly when anyone relies on it for decision making, as it is anti-intelligent, uncreative in the extreme, requiring human ingenuity to budge it off the rock of oppressive hypernormality it regurgitates.
Are you telling me that our culture should be deprived of the idea of Indiana Jones and the feelings that character inspires in all of us forever just because a corporation owns the asset?
Indiana Jones is 44 years old. When are we allowed to remix, recreate and expand on this like humanity has done since humans first started sitting down next to a fire and telling stories?
edit: this reminds me of this iconic scene from Dr. Strangelove, https://www.youtube.com/watch?v=RZ9B7owHxMQ
I guess we all have to answer to the Walt Disney company.
This is a kind of strange comment for me to read. Because by tone it sounds like a rebuttal? But by content, it agrees with a core thing I said about myself:
> and as someone who thinks IP is itself incredibly ripping off humanity & should be vastly scoped down, it's important to me to not rest my arguments on IP violations
What's just such a nightmare to me is that the tech is so normative. So horribly normative. This article shows that AI again and again reproduced only the known, only the already imagined. It's not the IP theft that rubs me so wrong, it's that it's entirely bankrupt and uncreative, so very stuck. All this power! And yet!
You speak at what disgusts me yourself!
> When are we allowed to remix, recreate and expand on this like humanity has done
The machine could be imagining all kinds of Indianas, of all different remixed, recreated, expanded forms. But these pictures are 100% anything but that. They're Indiana frozen in carbonite. They are the driest, saddest prison of the past, and they call into question the validity of AI entirely, show something grievously missing.
Sure, assuming the artist has the proper license and franchise rights to make and distribute copies. You can go buy a picture of Indy today that may not be printed by Walt Disney Studios but by some other outfit or artists.
Or, you mean if the artist doesn't have a license to produce and distribute Indiana Jones images? Well they'll be in trouble legally. They are making "copies" of things they don't own and profiting from it.
Another question is whether that's practically enforceable.
> Where did I (or the artist) violate any copyright (or other) laws?
When they took payment and profited from making unauthorized copies.
> It is the artist that is replaced by the AI, not the copyrighted IP.
Exactly, that's why LLMs and the companies that create them are called "theft machines" -- they are reproducing copyrighted material. Especially the ones charging for "tokens": you pay them, they make money and produce unauthorized copies. Show that picture of Indy to a jury and I think there's a good chance of convincing them.
I am not saying this is good or bad, I just see this having a legal "bite" so to speak, at least in my pedestrian view of copyright law.
I don't think this is about reproduction as much as how you got enough data for that reproduction. The RIAA sent people to jail and ruined their lives for pirating. Now these companies are doing the same and being valued at hundreds of billions of dollars.
You also can’t sell a machine that outputs such material. And that’s how the story with GenAI becomes problematic. If GenAI can create the next Indiana Jones or Star Wars sequel for you (possibly a better one than Disney makes, it has become a low bar of sorts), I think the issue becomes obvious.
Nobody can prevent you from drawing a photorealistic picture of Indy, or taking a photo of him from the internet and hanging it on your fridge. Or asking a friend to do it for you. And let's be honest -- because nobody is looking -- said friend could even charge you a modest sum to draw a realistic picture of Indy for you to hang on your fridge; yes, it's "illegal", but nobody is looking for this kind of small-potatoes infringement.
I think the problem is when people start making a business out of this. A game developer could think "hey, I can make a game with artwork that looks just like Ghibli!" Where before he wouldn't have had anyone with the skills or patience to do this (see: the 4-second scene that took a year to make), now he can just ask the gen AI to make it for him.
Is it "copyright infringement"? I dunno. Hard to tell, to be honest. But from an ethical point of view, it seems odd. Before, you actually required someone to take the time and effort to copy the source material; now there's an automated, scalable process that can do this and much more, faster and without getting tired. "Theft at scale", maybe not such small potatoes anymore.
--
edit: nice, downvotes. And in the other thread people were arguing HN is such a nice place for dissenting opinions.
I mean... If I go to Google right now and do an image search for "archeologist adventurer who wears a hat and uses a bullwhip," the first picture is a not-even-changed image of Indiana Jones. Which I will then copy and paste into whatever project I'm working on without clicking through to the source page (usually because the source page is an ad-ridden mess).
Perhaps the Internet itself is the hideous theft machine, and AI is just the most efficient permutation of user interface onto it.
(Incidentally, if you do that search, you will also, hilariously, turn up images of an older gentleman dressed in a brown coat and hat who is clearly meant to be "The Indiana Jones you got on Wish" from a photo-licensing site. The entire exercise of trying to extract wealth via exclusive access to memetic constructs is a fraught one).
The hypocrisy is much of the problem. If we're going to have IP laws that severely punish people and smaller companies for reselling the creative works of others without any compensation or permission then those rules should apply to powerful well-connected companies as well.
I hate how common it has become to advance a position by just stating a conclusion as if it were fact. You keep repeating the same thing over and over until it seems like a consensus has been reached, instead of making an actual argument reasoned from first principles.
There is no theft here. Any copyright claim would be flimsier than software patents. I love Studio Ghibli (including $500/seat festival tickets), but it's the heart and the detail that make their films what they are. You cannot clone that, just some surface similarity. If that's all you like about the movies... you really missed the point.
Imagine if in early cinema someone had tried to claim the mustachioed villain, the ditzy blonde, or the dumb jock. These are just tropes and styles. Quality work goes much, much deeper, and that cannot be synthesized. I can AI-generate a million engagement rings, but I cannot pick the perfect one that fits you and your partner's love story.
PS- the best work they did was "When Marnie was There". Just fitted together perfectly.
If engagement rings were as ubiquitous and easy to generate as Ghibli images have become, they would lose their value very quickly -- not even just in the monetary sense, but the sentimental value across the market would crash for this particular trinket. It wouldn't be about picking the right one anymore, it would be finding some other thing that better conveys status or love through scarcity.
If you have a 3d printer you'd know this feeling where abundance diminishes the value of something directly. Any pure plastic items you have are reduced to junk very quickly once you know you can basically have anything on a whim (exceptions for things with utility, however these are still printable). If I could print 30 rings a day, my partner wouldn't want any of them as a show of my undying love. Something more special and rare and thoughtful would have to take its place.
This isn't meant to come across as shallow in any way; it's just classic supply and demand relating to non-monetary value.
I have interacted with the parent account before and have actually looked at the number of times they used words like awful, horrific, etc. [0], and I definitely agree; one should not be so attached to such strong words that they feel the need to keep using (or rather, abusing) them endlessly.
[0] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
You absolutely can and these theft machines are proving that, literally cloning those details with very high precision and fidelity.
If you describe an Indiana Jones-style character but don't specify a sex, it's 50/50, via an internal call to rand(), whether it outputs a woman.
Ah, I thought I knew this account from somewhere. It seems surprisingly easy to figure out what account is commenting just based on the words used, as I've commented that only a few active people on this site seem to use such strong words as shown here.
Wouldn’t the more appropriate solution in the case of theft be to remunerate the victims and prevent recidivism?
Instead of making it "not so obviously bad", why not just... make it good? Require AI services either to prove that 100% of their training corpus is copyright-free or properly licensed, or to compensate copyright holders for any infringing outputs.
[1] https://scholar.google.com/scholar_case?case=137674209419772...
https://www.copyright.gov/circs/circ01.pdf
“Copyright does not protect • Ideas, procedures, methods, systems, processes, concepts, principles, or discoveries”
Not sure why this is even controversial, this has been the case for a hundred years.
> Hayao Miyazaki’s Japanese animation company, Studio Ghibli, produces beautiful and famously labor intensive movies, with one 4 second sequence purportedly taking over a year to make.
It makes me wonder, though, whether it's more valuable to spend a year on a scene that most people won't pay much attention to (artists will understand and appreciate it, maybe pause and rewind and replay and examine the details; the casual viewer just enjoys it at a glance), or to use tools in addition to your own skills to knock it out of the park in a month and make more great things.
A bit like how digital art has clear advantages over paper, while many still revere traditional art, despite it taking longer and being harder. The same way someone who uses those AI-assisted programming tools can improve their productivity by getting rid of some boilerplate or automating some refactoring and such.
AI will definitely cheapen the art of doing things the old way, but that’s the reality of it, no matter how much the artists dislike it. Some will probably adapt and employ new workflows, others stick to tradition.
In the first case, there's only one static image for an entire scene, scrolled and zoomed, and if they feel generous, there would be an overlay with another static image that slides over the first at a constant speed and direction. It feels dead.
In the second case, each frame is different. There's chaotic motions such as wind and there's character movement with a purpose, even in the background, there's always something happening in the animation, there's life.
I bet that a good animator could make a really impressive 4-second scene if they were given a month, instead of a year. Possibly even if they were given a day.
So if we assume that there is not a binary "cheap animation vs masterpiece" but rather a sort of spectrum between the two, then the question is: at what point do enough people stop seeing the difference, that it makes economic sense to stay at that level, if the goal is to create as much high-quality content as possible?
To be clear, I am not saying it's not valuable, only that to the vast majority, it's not.
In this case, yes it is.
People do pay attention to the result overall. Studio Ghibli became famous because people notice what they produce.
Now people might not notice every single detail but I believe that it is this overall mindset and culture that enables the whole unique final product.
There are many valid answers.
Maybe you want to create it to tell a story, and you have an overflowing list of stories you're desperate to tell. The animation may be a means to an end, and tools that help you get there sooner mean telling more stories.
Maybe you're pretty good at making things people like and you're in it for the money. That's fine, there are worse ways to provide for your family than making things people enjoy but aren't a deep thing for you.
Maybe you're in it because you love the act of creating it. Selling it is almost incidental, and the joy you get from it comes down to spending huge amounts of time obsessing over tiny details. If you had a source of income and nobody ever saw your creations, you'd still be there making them.
These are all valid in my mind, and suggest different reasons to use or not to use tools. Same as many walks of life.
I'd get the weeds gone in my front lawn quickly if I paid someone to do it, but I quite enjoy pottering around on a sunny day pulling them up and looking back at the end to see what I've achieved. I bake worse bread than I could buy, and could buy more and better bread I'm sure if I used the time to do contracting instead. But I enjoy it.
On the other hand, there are things I just want done and so use tools or get others to do it for me.
One positive view of AI tools is that it widens the group of people who are able to achieve a particular quality, so it opens up the door for people who want to tell the story or build the app or whatever.
A negative side is the economics where it may be beneficial to have a worse result just because it's so much cheaper.
I remember the discourse on HN a few years ago when GitHub Copilot came out, and well, people were missing the fact that the GitHub terms and conditions (T&C) explicitly permit usage of code for analytic purposes (of which training is implicitly a part). So it turns out that if one did not want such training, they should not have hosted on GitHub in the first place, because the website T&C was clear as day.
If they didn't spend a year on it they wouldn't be copied now.
Though I am also generally opposed to the notion of intellectual property whatsoever, on the basis that it doesn't seem to serve its intended purpose, and what good could be salvaged from its various systems can already be well represented with other existing legal concepts, e.g., deceptive behaviors being prosecuted as forms of fraud.
(Copied from a comment of mine written more than three years ago: <https://news.ycombinator.com/item?id=33582047>)
Arguments that make a case that NN training is copyright violation are much more compelling to me than this.
Grok is supposed to be "uncensored", but there are very specific words you just can't use when asking it to generate images. It'll just flat out refuse or give an error message during generation.
But, again, if you go in a roundabout way and avoid the specific terms you can still get what you want. So why bother?
Is it about not wanting bad PR or avoiding litigation?
How they then go about implementing those guardrails is pretty telling about their understanding of, and control over, what they've built, and about their line of thinking. Clearly, at no point before releasing their LLMs onto the world did anyone stop and ask: hey, how do we deal with these things generating unwanted content?
Resorting to blocking certain terms in the prompts is like searching for keywords in spam emails. "Hey Jim, I got another spam email from that Chinese tire place." "No worries, boss, I've configured the mail server to just delete any email containing the words China or tire."
Some journalist should go to a few of these AI companies and start asking questions about the long term effectiveness and viability of just blocking keywords in prompts.
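To make the spam analogy concrete, here's a sketch of what that kind of guardrail amounts to (the blocklist and prompts are made up for illustration; real deployments likely layer on more than this, but the failure mode is the same):

```python
# A naive guardrail: reject any prompt containing a blocked keyword.
# Hypothetical blocklist, purely to illustrate the bypass problem.
BLOCKED = {"indiana jones", "mickey mouse"}

def passes_filter(prompt: str) -> bool:
    """Return True if the prompt contains none of the blocked terms."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED)

# The direct request is caught...
assert not passes_filter("Draw Indiana Jones in a jungle")
# ...but a roundabout description sails straight through, even though
# the model will almost certainly produce the same character.
assert passes_filter("Draw an archaeologist adventurer with a whip and fedora")
```

The filter operates on the words in the prompt, while the model operates on what the words mean, so any paraphrase that preserves the meaning defeats the filter.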
The current generation of AI models can't think of anything truly new. Everything is simply a blend of prior work. I am not saying this has no economic value, but it means these models are closer to lossy compression algorithms than they are to AGI.
The following quote by Sam Altman from about 5 years ago is interesting.
"We have made a soft promise to investors that once we build this sort-of generally intelligent system, basically we will ask it to figure out a way to generate an investment return."
That's a statement I wouldn't even dream about making today.
How could you possibly know this?
Is this falsifiable? Is there anything we could ask it to draw where you wouldn't just claim it must be copying some image in its training data?
We got brass bands with brass instruments, synth music from synths.
We know, therefore, necessarily, that there can be nothing novel from an LLM -- it has no live access to novel developments in the broader environment. If synths were invented after its training, it could never produce synth music (and so on).
The claim here is trivially falsifiable, and so obviously so that credulous fans of this technology bake it in to their misunderstanding of novelty itself: have an LLM produce content on developments which had yet to take place at the time of its training. It obviously cannot do this.
Yet an artist which paints with a new kind of black pigment can, trivially so.
How could you possibly know that? Could you prove that an image wasn't copying from images in its training data?
To borrow Chomsky's framework: what makes humans unique and special is our ability to produce an infinite range of outputs that nonetheless conform to a set of linguistic rules. When viewed in this light, human creativity necessarily depends on the "linguistic rules" part of that; without a framework of meaning to work within, we would just be generating entropy, not meaningful expressions.
Obviously this applies most directly to external language, but I hope it's clear how it indirectly applies to internal cognition and--as we're discussing here--visual art.
TL;DR: LLMs are definitely creative, otherwise they wouldn't be able to produce semantically meaningful, context-appropriate language in the first place. For a more empirical argument, just ask yourself how a machine that can generate a poem or illustration depicting [CHARACTER_X] in [PLACE_Y] doing [ACTIVITY_Z] in [STYLE_S] could do so without being creative!
[1] Covered in the famous Chomsky v. Foucault debate, for the curious: https://www.youtube.com/watch?v=3wfNl2L0Gf8
As an example, let's talk about "vibe coding" - It's a new term describing heavy LLM usage in programming, usually associated with Generation Z.
If I ask an LLM to generate a German translation for "vibe coder", it comes up with the neutral "Vibe-Programmierer". When asked to be more creative, it came up with "Schwingungsschmied" ("vibration smith"?) - what?
I personally came up with the following words:
* Gefühlsprogrammierer ("A programmer, that focuses on intuition and feeling.")
* Freischnauzeprogrammierer ("Free-mouthed programmer - highlighting straightforwardness and the creative expression of vibe coding." - colloquial)
Interestingly, LLMs can describe both of these terms, they just can't create them naturally. I tested this on all major LLMs and the results were similar. Generating a picture of a "vibe coder" also highlights more of a moody atmosphere instead of the Generation Z aspects that are associated with it on social media nowadays.
Your example disproves itself; that's a madlib. It's not creative, it's just rolling dice and filling in the blanks. Complex dice and complex blanks are a difference of degree only, not of creativity.