We the people shouldn't be wondering if this is violating the law.
We the people should be deciding if this should be allowed or not. The law is ours to write, and we should make our decision what we want it to say and then work to make sure it is implemented as we want.
I'm writing from America, where national laws like copyright are very hard to change, due to lobbyists and both political parties thinking the other one literally works for the Devil. Your point about people deciding their own laws is great, but it's just not how our system works. Instead, most law nowadays is made by judges interpreting anything controversial.
You’re both right. It’s true the practical realities make it hard to change laws, but I still appreciate OP’s starting from first principles.
It’s easy to get caught up in debates over how to interpret current laws and conflate what’s legal with what’s (a) ethical (b) conducive to progress. However, we shouldn’t lose sight of the fact that these laws are merely social contracts and subject to change with enough impetus and savvy.
Ok, but, that system - "how things are" - is exploitative in the extreme to both people and the planet. It's unsustainable, eldritch, profoundly unjust and corrupt.
Using "how things are" (fucked) as a justification to prevent discussion of "how things could be" (amazing) is fucked. I'm sick to the teeth of seeing it.
That incorrectly presumes that you[1] can make any law you want, have it enforced, and not piss off the vast majority of people or damage the fabric of free society.
You can't make any law you want, and keep a free society. You can't tell people they can't train neural nets with publicly available data. You can't tell people they can't run something like stable diffusion without getting output approved by some approved entity.
This is a line in the sand. If lawmakers are coerced into passing such laws, it will consolidate almost all future power (computing, language, speech, images, and video will all be heavily dependent on potentially copyrighted AI output) in the hands of government or corporate authority.
[1] for all the people complaining about this:
a) That's the collective "you", not the parent poster.
b) The majority of people are not involved in Congress's decision to pass or not pass legislation. What moves the needle is the representative's own intuition (often flawed), or relatively small groups of people who have been motivated to contact their legislators, or corporate lobbying.
Arguing over whether a new tech is legal is like saying the Earth is flat.
Arguing that we can create and alter the laws and should therefore be discussing if we want this tech to be legal, that's like saying the Earth is a sphere.
Both are wrong, but treating them as equivalent is even more wrong than both of them.
> You can't make any law you want, and keep a free society. You can't tell people they can't train neural nets with publicly available data. You can't tell people they can't run something like stable diffusion without getting output approved by some approved entity.
Depends what you count as "a free society".
There are things which I'm not allowed to take photographs of or record videos of, such as the screen of a cinema while it's playing the latest superhero film.
There are communications I am not allowed to make, such as the sequence of bits required to recreate even an approximation of the new song from that famous pop singer everyone likes.
There is data I'm not allowed to be given or store, like a random stranger's medical records.
You're allowed to be an anarchist and say those things should be legal, but most people don't really care very much about any of that — closest we get is people campaigning to "keep memes legal".
Hmm.
I'm no lawyer, but there's probably one here: what even is the legal status of companies selling the service of searchable archives of screenshots with text on them and short video clips? Is that "fair use" or "probably copyright infringement but nobody cares" or "they all need to be licensed, there was a court case"? (I assume it varies by jurisdiction, an answer in any will do).
The truth is that nobody cares. Your "We the people" Braveheart speech is just cognitive dissonance with the rest of the world: if you look at the reactions IRL and on the web, normal people (or better said, people not in the tech bubble and not overwhelmed artists) are rather excited.
That’s the problem. We, the people, are not in control. We, the people, have given up control to bureaucrats who play god with permission. We, the people, make society function and our society will continue to function without the bureaucrats.
We, the people, should be demanding direct democracy.
It is the same as humans learning from art to imitate one or multiple other artists' styles. The differences are in scale, in the flexibility of mixing and matching different styles (a very primitive, not-very-accurate model would be a vector of [0, 1] values, one for each artist[1]), and in which vector an AI art program happens to use as the default style for any prompt that doesn't specify something akin to "in the style of <x>".
[1] I'm not claiming these neural nets are learning models of each artist and linearly combining them, but if you squint hard enough it approaches that.
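The footnote's "vector of [0, 1] values" analogy can be made concrete with a toy sketch. To be clear, this only illustrates the mental model being described, not how diffusion models actually represent style; the artist names and weights below are made up.

```python
# Toy illustration of the "style vector" analogy: each artist is one
# coordinate in [0, 1], and a prompt's style is a normalized mix.
# This is NOT how a diffusion model works internally.
artists = ["monet", "hokusai", "moebius"]  # hypothetical artist axes

def mix_styles(weights):
    """Normalize raw per-artist weights into a style vector summing to 1."""
    assert len(weights) == len(artists)
    assert all(w >= 0 for w in weights)
    total = sum(weights)
    return [w / total for w in weights]

# "in the style of Hokusai, with a touch of Moebius"
style = mix_styles([0.0, 3.0, 1.0])
print(dict(zip(artists, style)))  # {'monet': 0.0, 'hokusai': 0.75, 'moebius': 0.25}
```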
Everyone knows that the term machine learning doesn't actually have anything to do with human learning, right? It's just a set of words that were catchier than gradient descent or curve fitting.
Asserting legal rights based on a naming quirk seems pretty unfounded.
Let's try that statement again: 'It is the same as executing gradient descent over multiple artists' works.' That sounds a heck of a lot closer to running a Photoshop filter than learning anything now, doesn't it?
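For readers who haven't seen it spelled out, "gradient descent" and "curve fitting" are simple, concrete procedures. Here is a minimal sketch of fitting a line by gradient descent; training an image model is this same loop at vastly larger scale, with a far richer model and loss. The data and learning rate are made up for illustration.

```python
# Minimal gradient descent on a 1-D least-squares line fit.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward w=2, b=1
```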
I fail to see how that last sentence makes it sound like a Photoshop filter. Do you mean that if something can be expressed as an algorithm, it cannot be compared to human learning?
What if we can reduce human learning to making predictions based on training data? Is it then not human learning anymore, but more like a Photoshop filter?
"Everyone knows that the term machine learning doesn't actually have anything to do with human learning, right?"
Do you have sources for that? They're not equal but I see a lot of parallels.
Machine learning is not human learning in any meaningful or legal way, because laws are for humans and, crucially, a machine is not human.
It will be the case as long as that machine lacks free will, human rights and all that important stuff. If it gets to a level where a machine is granted human rights this could be reassessed. Until then, whoever operates the machine is to whom laws apply, and in this case they violate copyright by capitalizing on others’ creative work and intellectual property through creating derivative works.
The large interests in question are guys like OpenAI/Microsoft, and it is in their interest that we continue to see AI as some human-like enigma and don’t think about the ethics of what they do. If you ask actual small artists, most do not in fact share this enthusiasm.
Don't think of this in terms of just art. In fact art is probably a confusing example since the creative process is a bit obscure.
Think of creating any immaterial object of value to you or others. A recipe book, a coding blog, a graphic novel, whatever.
Are you really going to go through the pains of creating that and putting it out there, if on the next day some AI startup is going to train its models on it and offer something seemingly different, but ultimately equivalent, commercially?
We're running into a huge problem for creators here, and it'll have to be solved with new laws around training models on original content some way or the other.
If we want to still have a creative industry going forward, that is.
> and offer something seemingly different, but ultimately equivalent, commercially?
> it'll have to be solved with new laws
Tbh, if an AI will fill a need I have for cheaper than a human would, then I'd see that as positive. Be it image asset creation, ghost writing, or whatever. Restricting that with laws just because it takes away jobs seems like it would only cut innovation. Kinda like forbidding robot assemblers in industry.
Though, take that from a person on the consumer side of creative works.
If we replace a big chunk of our media with derivative works created by copyright laundering engines, we're destroying the incentives to be original and creative.
But we absolutely need originality and creativity!
Living in a world where everything looks sort of like SEO spam sounds like a dystopia to me.
Note, I don't think those technologies should or will be banned. There just have to be rules in place that allow creators to opt-in to having their work used as training data, with appropriate remuneration.
If that makes some AI business models unworkable, that's fine with me as long as we don't destroy our creative and inventive people's livelihoods.
We don't want any creative human input, because it is expensive and gives opportunity to have a middle class. Sam is regularly at Davos. Ask the Davos futurists about their vision of tomorrow in which everything will be fine and dandy.
Did you see Sam's appearance at the Microsoft presentation? It is all about safety, he said. Safety.
Bill Gates says that ChatGPT will change the world and make jobs more efficient. Weird that people don't feel the same about Stable Diffusion, even though it will do the exact same thing.
‘Microsoft co-founder believes that his company’s own investment in the AI startup that created ChatGPT will change the world.’
It is open-source AI models, large in parameters yet transparent and smaller in size, that will permanently change the world and disrupt OpenAI’s closed AI SaaS business plans.
The important thing here is not whether jobs are getting more efficient, it's that everyone gets the same boost. All companies will get the same AI, and that makes people the differentiating factor. Let's stop with the theories of job loss by AI automation: competitors will have humans+AI and beat your company that uses AI alone.
The early Google for example was also an AI. But it was open to everyone, nobody got exclusive Google access. So we all benefited equally and there were no overall job losses.
Bill is obviously revered, a statue of God. He is starting a Climate religion as we speak; he should hire Tom Cruise for a start. He knows a lot about this profitable startup idea.
This is much less of a problem than people make it out to be.
For a lot of people, art is a way to connect to other human minds. If you know there is no human mind directly involved in the piece, you might not value the art the same way. No amount of code will change that.
Having said that, I think the current iteration of AI will be a fantastic tool for artists once it matures. Even if you're making some one-off disposable piece of visual/audio design for business purposes, you will want a professional to look it over.
You’ve summed up my issue with AI art. If no human was involved in creating the piece then as far as I’m concerned, it’s not art. It’s something, but art it is not - and I’m not someone who is into art at all, I just want to know that an actual person was trying to express an idea/opinion when making a piece.
Once I know a human didn’t make something, my brain just loses interest. Telling whether something was human-made or AI-made is the trick.
To be clear, I’m only applying this thinking to art - if something is drawn or calculated by AI that solved a problem (circuit design, route planning etc) then I’m all for it.
If you write an 'algorithm' that copies someone's art and distributes it - then you are infringing. It doesn't matter if you 'automated it' and the word 'art' doesn't really apply so much either.
> I don't see how any corporation should be able to consume someone's copyrighted work without their permission and make something of it.
I'm not quite sure where I stand yet on this issue. I wonder, though, if an individual artist studies another long-dead artist's work, practices that style of painting/drawing/whatever, and then produces something in that same vein that is clearly inspired by the artist...is there a copyright issue? My instinct would be to answer 'no' as this is what art and inspiration has always been about. No one had to pay to look at Da Vinci's work or study a Renoir - the art was publicly available.
Now, with OpenAI, the capacity to 'study' an artist and produce something in their style is available to all. If the work is publicly available, is a machine studying/learning from it any different to an art student in a museum? And if OpenAI generates an image in the style of Da Vinci, should it be paying someone for doing so?
Again, my instinct is 'no'. I can understand why an artist working today might see things very differently but unless the source material was locked up or otherwise not available for people to see, study and learn from, I am not yet convinced there is a big difference between thousands of humans studying and imitating a master artist and a machine studying and imitating a master artist.
The article points to two problems, one being corporations profiting from other people's copyrighted works with just an algorithm. That's awful for sure, but the actual utility of content generated that way is questionable at best (also, because it's new, there is a bit of a fad for it). The other is more existential and pertains to AI replacing humans in the cultural process. I don't think it's reasonable to assume this will happen anytime soon, given the state of the technology, which is just nowhere near capable enough for that. By the time it was, you'd have a fully artificial human mind.
I agree with you for capital-A Art, but it turns out that most of content produced doesn't need to meet that bar - book covers, adverts, filler paintings, and so on. Those are the ways that a lot of people make their living. Is it OK that they'll become outdated? Maybe yes for the production of capital-A-Art, but it sure will suck for a lot of people who enjoy making their living from creative production.
The Miyazaki quote really hit home: “I strongly feel that this is an insult to life itself.”
I think most AI art misses the point entirely. Art is really about the person / people that make the work and their response to the time and context in which they live. It's usually a struggle to bring something new into the world, but it does feel magical and exciting - it seems to come from a very different place than interpolating past influences. AI art takes away all the magic, and without the sense of a person I think it has no artistic or aesthetic value - it just adds visual noise to the world. I think it will, unfortunately, disincentivise people to make new work, as although there is a world of difference between AI art and good art, it can be tricky to tell the difference at first glance, and to make anything good takes years of dedication and effort.
Sure, it can come from a boring, unoriginal place without any investment and it could be bad art. Work like that is just a waste of time for everyone. People wouldn't listen to music that was created with the same indifference, it would sound awful. I agree it's not a very important question though, it's just a shame that we will have to look at the images AI generates.
I believed that corporate theft from citizens was a valuable investment. Why all the commotion? Copyright is obsolete for the average person. They will optimize algorithms for free, after having fed them with user-generated, tag-labeled content for years. Now inside your Windows Desktop. Soon in Apple pods.
> “We’re not litigating image by image, we’re litigating the whole technique behind the system.”
Oh yes, "let's sue AI-anything", no time to sue one by one. Wildcard attack, not against any specific work, but against the algorithm itself. If they wanted that, they should have patented image diffusion.
And when they want to copyright their own styles, the same thing. They want a style to work like a trademark, to block any future work from using it. Another attempt to make copyright more powerful, making current holders have an advantage over newcomers.
Basically they would like copyright to get trademark and patent powers, to cover any future reimplementation or style variation. But AI isn't coming just for art, and diffusion models are useful for many other things. They are not the one and only voice that matters.
The main problem seems to be the inclusion of the artist's name in the metadata of the training images. This makes the style of individual, currently active artists exploitable in ways that affect the artists negatively.
One solution could be to allow the use of the images, but disallow the use of the artist name in the training set. From a privacy standpoint it seems reasonable to disagree to having your name and artistic characteristics connected and encoded into an AI model, even if it's legal to use the art itself as training data.
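As a rough sketch of that suggestion: a training pipeline could redact opted-out artist names from image captions before they are used, so the model never binds a style to a name. The artist list, caption format, and function name here are hypothetical; real name matching would need to be far more robust.

```python
import re

# Hypothetical opt-out list; a real one would come from the artists themselves.
OPTED_OUT_ARTISTS = {"jane doe", "john roe"}

def redact_caption(caption: str) -> str:
    """Replace each opted-out artist name in a caption with a placeholder."""
    out = caption
    for name in OPTED_OUT_ARTISTS:
        # case-insensitive whole-phrase replacement
        out = re.sub(re.escape(name), "[artist]", out, flags=re.IGNORECASE)
    return out

print(redact_caption("misty harbor, oil painting by Jane Doe"))
# misty harbor, oil painting by [artist]
```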
OP's "we" _is_ the vast majority of people. You are answering as if OP said "I".
Of course implementing OP's idea is complicated, but they are not presuming you can make any law you want.
In order to do so, we need the conversation, and we need to educate in order to have that conversation in the first place.
Law is evolving.
"We the people" means nothing. You are not in control.
https://www.youtube.com/watch?v=5pIVVpoz5zk
Unfortunately the answer is definitely no. People have heard the terms “learning” and “train” in this context and have anthropomorphized weak AI.
The level of ignorance on display is sad to witness.
Consider the act of adding salt to chicken broth, for example; I can quite easily think of dozens of other parallels.
Small independent copyright interests trying to eke out a living from their art will also argue this.
In truth, both sides are a mix of individuals and monied interests, but this rhetorical tactic seems to be irresistible.
It doesn't matter what 'a lot of people' think art is; it matters what artists think, and what our rights are.
I don't see how any corporation should be able to consume someone's copyrighted work without their permission and make something of it.
AI will be fantastic for doing 'art' but it won't be good for 'artists' trying to make a living.
If OpenAI wants to use some artists material they can pay for it.
Otherwise, use the stuff that's not protected, make their own art, etc..
They definitely disagree with your point of view.
https://sphericalbullshit.wordpress.com/2022/12/11/is-ai-art...