I’m convinced AI art is going to be a boon for artists. People always assume that the demand for something will remain static as its production grows and the producers become more productive. But as the cost of something falls, people consume more of it. That’s a law in economics. This is no different.
What’s likely to happen is people are going to invent whole new media that will use this. Incredibly elaborate interactive and personalized experiences will be made possible by this. One person will be able to deliver a huge amount of value, and they will be rewarded accordingly.
Think back to the 1980s. Programmers were in very short supply, and the work they were doing was less accessible and more difficult than it is today. IDEs, version control, better languages and frameworks, easy access to tools to learn to code, Stack Overflow, etc. made programming much more accessible, and engineers became much more productive. The number of programmers out there is orders of magnitude greater than three decades ago. Yet real salaries have still grown tremendously. The value produced per capita has grown even faster than those salaries because programmers are so productive.
The same thing is going to happen here. Artists will build unimaginably huge, complex projects (on the scale of Google/Facebook/Amazon), and the world will gobble them up.
The lower the barriers to entry, the more competition game developers face, often from teams willing to charge little or nothing. Look at the app stores on mobile: 0.01% of apps make back their development cost.
Steam had almost 9,000 new games added to it last year, and the average revenue per game fell by almost half a couple of years ago.
Yes, creating art itself will have a lower barrier to entry, just like learning to code in Python has a lower barrier to entry than assembly. But any software engineer will tell you that most of the challenges don’t come from writing lines of code; they come from building things that work at scale. Building a big thing in a big organization means having good communication and project management systems. Those things haven’t been taken over by AI (yet).
>But as the cost of something falls, people consume more of it. That’s a law in economics. This is no different.
I don't know of any such law, but in any case it's simply false.
People have two finite budgets they can allocate to entertainment: time and money.
In many ways money is not the limiting factor anymore, since people have Steam lists with hundreds of unplayed games and there is an abundance of Free2Play. The main problem is not having a saleable product; it is managing to get it in front of enough eyes, with every marketplace already saturated and ads being useless unless you have a very high budget.
I'm a bit worried that it will devalue indie productions even further because of an over-abundance of AI-generated content, further elevating big-budget industry productions that can afford the really competent artists you can't even begin to replace.
I'd draw some parallels to the rise of advertising as well. For most people, the culture industry has entered a "post-scarcity" phase, and the consumption of mass art is now gated more by eyeballs and less by actual production. The only (reasonable) way to increase eyeballs is through advertisement: paying third parties to force consumers to see (part of) your art.
Yep. I have been gaming on native elf/linux (no msft-grade proton plz) for a decade, and when we talk about time, it is a bit more complex: depending on the type of game, it also depends on your state of mind (tired, motivated, etc.). Games are "demanding"; they are not passively watched content. I cannot play a game of dota2 if I am too tired (the adrenaline won't compensate enough), and you will need a break from it once in a while, if you don't burn out for good.
Namely, it is more than time alone; it is "the right time", and there is much, much less of it.
It's not a law, but there is also Jevons paradox, in which more efficient use of a resource causes an overall increase in total consumption of that resource.
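A toy arithmetic sketch of Jevons paradox (the numbers here are made up purely for illustration):

```python
# Jevons paradox, toy numbers: efficiency doubles, but cheaper output
# stimulates so much extra demand that total resource use still rises.
efficiency_old, efficiency_new = 1.0, 2.0   # output per unit of resource
demand_old, demand_new = 100.0, 250.0       # demand grows as cost falls

resource_old = demand_old / efficiency_old  # resource consumed before
resource_new = demand_new / efficiency_new  # resource consumed after
print(resource_new > resource_old)          # more resource used overall
```

Whether the paradox applies depends entirely on how elastic demand is; the claim in the thread is that demand for art is elastic enough.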
Yeah, people always underestimate/forget the ability of artists to use everything for art in novel ways; I see no reason AI is any exception. I'm sure it's already happening and I just haven't been exposed to it yet.
>I’m convinced AI art is going to be a boon for artists.
Have you talked to any artists? Because in my experience they universally hate AI-generated art. Not just a little. I think I've seen more artists who were open to NFTs at this point than to AI. Art-sharing websites are getting filled to the brim with posts from new accounts sharing AI art that sometimes very directly takes clearly identifiable features and styles from other artists.
AI-generated art is heralding a possible dark age for visual arts. There are many lost arts already; if AI trivializes the cost and effort, why bother honing the craft? Then we find ourselves in a creative void, as the only art generated will be based on art that has come before.
Disclaimer, I’m a layperson when it comes to art. I’ve taken one art history class so I could probably talk about various art styles, but I’m no expert.
In my experience of visiting art museums, some of the classical and renaissance art is incredible. There is clearly some profound idea behind the piece, and the skill is the vehicle of that idea. A huge amount of art, though, is simply commissioned portraits of some wealthy businessman or forgettable Duke. The purpose of these paintings is simply to represent reality. There is no idea behind it.
For the artists painting them, the camera must have been devastating. It meant that years of training in studying how to properly draw an eye and how to get the colors inside a shadow right were at risk of being entirely devalued. But the camera also laid bare the fact that art is more than just skill. I imagine a lot of artists who were coasting by on skill alone hated the camera.
We’re at that point now with digital art. The cost of skill in actually drawing a thing is now zero. Anyone can do it. The question is, do they have anything to say?
Professional artist here, and I can tell you there are as many different opinions as there are people, including those who couldn't care less. But if you look at the hot take mic drop platforms, you will of course see either hype or outrage with little nuance.
It's happening very quickly and the outcomes are difficult to predict, but it's clear it's a big deal. We either fear the unknown or are excited by it. I feel most of the controversy is in fact the robots-will-make-our-life-comfortable / robots-will-take-our-jobs discussion. The real problem is elsewhere, not the tech itself.
Photography is of course a good comparison, but don't forget 3D CGI. I can place things exactly how I want them and have the computer do ALL the drawing for me? Outrageous!
It certainly removed a lot of need for menial manual work (and even filming), but has also opened the opportunity for creating entirely new things that before would just not be practical, or enjoyable to make. And there's no doubt now that a lot of skill is required to make things look good.
It can seem like it's easy to get great looking stuff with AI but it's not. What's easy is to churn out tons of samey derivative crap that will age very badly as soon as we get used to it, like with early 3d.
> There are many lost arts already, if AI trivializes the costs and efforts why bother honing it.
Photography can capture reality far better than any artist could achieve even with a lifetime of practice, and today it can do so with nearly zero cost--take out your phone, hold down the camera button, snap a picture. These advances hardly put an end to drawing and painting as skills.
It turned out that there's value in art beyond verisimilitude, and if anything the advent of photography encouraged a resurgence of exploration into what art really is: a movement away from pure verisimilitude in search of feeling.
This is what I expect to happen with AI art today. Yes, the AI models can produce art that looks awesome and wonderful now, but it's frankly already starting to wear thin. Art exists to fulfill humanity's need to experience something new, and AI alone simply cannot keep up with the meta game.
There will need to be human artists guiding the AI to produce something valuable, and there will continue to be human artists working by hand to create the novel works that the AI could never have rendered.
I am an (amateur, non professional) artist and I am a big supporter of AI generated art tools.
The main issue is that anyone can write a simple prompt and get 85% of the way to an image that previously required a decade of experience to create.
But that last 15% is where real artists differentiate themselves. It’s the last 15% that can’t be faked if you don’t have a sense of artistic direction and discretion.
I strongly disagree with “why bother honing it” as well. If anything, being able to get to the “honing” stage so quickly is what makes these tools so powerful. I’m sure if Gordon Ramsay had to butcher his own chicken, grind his own grain for flour, and churn his cream into butter, he wouldn’t spend as much time on the finishing and plating of a chicken sandwich.
I have talked to and worked with a few artists, trying to capture their style using generative AI models.
The limited subset of artists I have worked with (traditional modern / contemporary painters) have been very inspired by the creative possibilities of these models, and one in particular sees it as a great opportunity to scale her work to new types of media such as video.
It would have been practically impossible for one of the artists I am working with to draw each frame by hand but now she can generate music videos in her style by tweaking and working with Stable Diffusion and a set of reference images.
Have you looked at the resulting art works? There's an exhibit called "Unsupervised" currently running at the New York Museum of Modern Art [0]. I saw it last week: a fascinating collage of shifting, waterlike images with delightful trompe l'oeil effects. If this is what AI-generated art can look like, I'm all for it.
>AI generated art is beckoning a possible dark age for visual arts.
I don't think the invention of the camera displaced many artists, but I'm quite sure they were just as angry, probably saying how it doesn't have a soul and such. But in the end it improved human life to unimaginable levels.
I think the same, if not more, can and will be achieved with AI-generated art. And I don't think it will displace many artists' jobs.
Again, the coming of AI art feels like the coming of the camera must have felt.
I'm curious how many people have actually tried these networks out. I've dropped a bunch of demos below so you can actually try them and I encourage everyone to do so before they form strong opinions here.
As someone who works in generative modeling, I think it is important to note that we hand-select results and do not show the average ones. There's been such a hype machine that if we showed our average results we'd get killed in review (publishing in a hyped space unfortunately requires a lot of salesmanship). Not a thing I'm particularly happy about, but I think it's important to note that if you're just looking at the images on Twitter, blogs, and other social media, you're seeing the top 1% of results: images that take quite a bit of time and effort to produce. Likely not as long as an actual artist would take, but still, a lot of skill is involved.
I do agree that it'll be a big boon for artists. I've even been using these models to get better at drawing. The results are bad even with prompt engineering, but hey, my human mind can fill in the gaps, and it is good for expanding creativity. We'll get better at these models for sure, but I'm not quite convinced they'll kill a lot of art jobs. Then again, maybe I don't know which jobs those are. I've even seen artists do much better with these models than I do, even though I understand the code and math going on. There are also cool things like walking through the latent space, though you won't be able to do that with the Hugging Face code. That is art you won't be able to do elsewhere. It is entirely new.
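Latent-space walks like the ones mentioned are usually done by interpolating between two noise latents and decoding each intermediate point into an image. A minimal sketch of the interpolation step itself, assuming roughly Gaussian latents as in diffusion models (the 512-dimensional vectors are stand-ins for real model latents):

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherical interpolation between two latent vectors.

    Diffusion latents are roughly Gaussian, so slerp keeps intermediate
    points at a plausible norm, unlike straight linear interpolation."""
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        return (1 - t) * v0 + t * v1  # vectors nearly parallel: fall back to lerp
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

rng = np.random.default_rng(0)
a = rng.standard_normal(512)  # stand-in for the first latent
b = rng.standard_normal(512)  # stand-in for the second latent
frames = [slerp(t, a, b) for t in np.linspace(0.0, 1.0, 8)]
```

Feeding each of these `frames` into the model's decoder in sequence is what produces the morphing-animation effect.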
I tried Ghibli Diffusion to get some decent cartoon-styled results with SD, but it's really bad. Results were all hallucinations of the training data: no interesting new combinations, no novel prompts that would work. I stopped after a few dozen attempts.
I'd still like to get some good cartoony results with SD, but it's clearly not possible yet.
> but the act of training a neural network on datasets like this has already been decided to be legal
?
Has it?
Look, you can argue about whether it’s morally right or not to use models that are fine-tuned explicitly to copy someone else's style, trained on their art without their consent, to generate images very similar to the training images.
You can argue about whether that technically counts as copying, or whether lossy compression is copying.
…but legal and moral are different things, and right now, as far as I’m aware:
- it’s only legal because there are no laws specifically making it illegal currently.
- there are active cases (e.g. Copilot) challenging this to set a precedent.
- it’s sufficiently ambiguous, having a model that anyone can type “a naked picture of a 12 year old” into and get exactly that as output, that Stability has nerfed their most recent model release.
- there is a reasonably obvious similarity to other fields where a thing is itself not illegal or bad, but it can enable people to do illegal or bad things, and therefore access, ownership and usage of said things (e.g. handguns) is heavily legislated.
I think “this has already been decided to be legal” is a blatantly false assumption.
Yeah, we'll see how the courts come down on this one. But if we follow exactly the same process as you describe using a human instead of a computer, it seems like that would be fine. Artists copy each other all the time.
Also, this looks to me like a situation where generating art is suddenly so cheap and easy it doesn't matter what the courts decide. People can and will ignore the law, because it is trivial to generate new pictures and it is cost ineffective to enforce any restrictions. We're going to see this tech take off. Who knew that art would be the next thing software took out?
This is basically my take - I'm mostly confused by the people up in arms about this (except that it likely makes their work more of a commodity, so I get that fear).
People look at art and make art in that style all the time, and now a machine exists that can do that. Why is that unethical? Because they didn't consent for the machine to look at it/learn from it, but they did for humans? I don't think this argument will be able to hold back the wave of change that's coming from this new capability. They'd be better off long-term learning how to use it.
Nobody creates purely original things in a vacuum, machines won't either.
It’s really not that simple. If an artist picks 12 unusual colors to paint a sunset and you copy those exact same colors to also paint a sunset in his style, then no, that’s not OK.
People on HN have very strong opinions about this stuff without looking at any of the relevant case law.
> This is case with anything…anything not illegal is legal, at least in the US…
I think you're dancing around with words here. Let me be super specific:
There's no law specifically saying I can't use a jelly-coated toaster to beat someone to death... but there are existing laws that cover "beating someone to death".
If you beat someone to death with a jelly-coated toaster, you might argue, for some obscure reason, that your actions are not covered by the existing legal framework around beating people to death, but most likely you will not be protected by the claim that you "didn't know it was illegal".
...because the courts have not ruled that beating people to death with jelly coated toasters is legal.
ie.
a) There is no specific legal precedent or law around something but other laws related to it exist
and
b) Something has been determined to be legal by some precedent / law / whatever
Are not equivalent.
Regardless of what people want to believe, or have opinions one way or another, the assertion, made in the original article, that (b) was true, is not true.
Only (a) is true, and that is, legally, a much weaker statement.
What I find weird is that humans do the same and no one mentions that. I have two professional artist friends (they have lived off it for decades): one makes Giger art (paintbrush, exactly the same style but no copies; original, but if you see it you are going to say "Giger") and the other does Vermeer, same thing. There is no discussion of whether that's moral or legal, so what's the difference? The Giger one bought a farm of GPUs a few months ago and is adding training to models for animations. He loves it.
Same with Copilot: people copy shit from GitHub and SO all the time without mentioning copyrights, and a lot of code people cough up is just ‘stolen’ from someone without remembering who or where it came from. What’s the difference?
I'd say there's a good chance both of those artists are going to grow "beyond" the artist they are emulating (and by "beyond" here I don't mean better than but rather grow into something unique).
Lots of bands (nearly all?) learn their craft by playing covers. At some point the artists in the group start to find their voice.
In a lot of ways, whether it has or not is moot in the medium-term.
Court cases / legal clarification may outlaw Stable Diffusion or DALL-E for having been trained without the consent of the original artist on their copyrighted artworks. But if this technology proves valuable and viable, the likes of Disney, WPP, and Omnicom Group will pay a few dozen artists to create enough work to seed an engine that can generate 50 million wholly-owned lookalikes.
Copyright protection will ultimately shield the megacorps from competition by the common folk, not artists from competition by corporations.
> it’s only legal because there are no laws specifically making it illegal currently.
No, it's only legal because nobody has litigated a case all the way to the Supreme Court. We all thought that APIs weren't copyrightable, and then, suddenly they were (or worse, were "assumed to be" but not explicitly ruled upon).
Although not being explicitly illegal does make something legal, the phrasing "has been decided to be legal" suggests a debate, a conscious decision, and perhaps an overturned law. That is not the case as far as I know. It is legal (because that is the default), but no decision has been made.
AFAIK, from the discussion here, training the network has been established as fair use in the US. But that doesn't apply to using the network's results in any way.
As far as I know that hasn’t been established yet but is presumably what would be decided (especially in an academic environment). But yes the output is a totally different question.
> ... it’s only legal because there are no laws specifically making it illegal currently.
One might argue that any unlicensed use of copyrighted material is misuse (unless the courts have ruled it is fair use.)
Only the copyright holder may bring action against a potential misuse. Typically, individuals tend not to have deep enough pockets to hire the lawyers to bring the action. And during discovery it may be found that the model was trained on works-for-hire made by humans replicating the original style. Or appeals may continue on at more legal cost, without guarantee of recompense.
Most will decide it’s just not worth it. Until a company similar to Getty buys up all the copyrighted material and, in addition to sensible suits, brings spurious legal action against individuals and turns the tables.
Yes. When artists say things like "this really feels like it should not be fair use" we are dismissed with "well it's legal!". Changing what "fair use" covers is a possibility.
Unfortunately art is a hard business to make a lot of money in and the vast majority of the people unhappy about this do not have the money to mount any kind of legal challenge to the corporations developing these databases, so we are basically fucked.
Which is in some ways nothing new, the average Internet user has no concept of creator's rights and will blissfully assume that if something's not behind a paywall, it's free for any use. It feels even shittier and worse when this is being done on an industrial scale by these AIs, though.
Even if artists could sue, that assumes there is someone making enough money to be worth suing.
Open source models will likely be dominant (this was how Stable Diffusion got popular, it's not clear their new neutered 'safe' model will be as popular or a successful business).
This is the new reality even if the bigco gets sued out of business. Artists need to accept that reality eventually. Just like all "panics".
Fortunately, AI generators are not a direct replacement for artists; I've seen them used as part of the design process, but that still takes talent. Maybe it will get better, but it's not a one-stop shop for business use cases. More likely it will be AI+artist, not AI replacing artist.
In the EU, training a model on copyrighted data does not infringe copyright. This is explicit, codified law as of the latest EU Copyright Directive[0]. In the US, AI researchers are more or less hoping that Authors Guild v. Google forms controlling precedent against lawsuits against the AI companies. While this is not done-and-dusted case law, I can see the logic. The model is capable of, and is intended to be capable of, creating novel art.
In neither case is it legal to use outputs of an AI that infringe an already-existing copyrighted work. This means anyone using it to generate production-ready artwork is exposing themselves to potential legal liability if their AI winds up regurgitating training set data. This is what I worry about way more than just "are the model weights infringing copyright".
As for morality:
- Absolutely none of the current generative art systems are trained from scratch on ethically-sourced datasets. The AI companies just assume that because they need ungodly amounts of training set data, that they are morally entitled to get it, because they spend more time staring into the eyes of a basilisk[1] than worrying about the world they already live in.
- Multiple companies getting into generative art have gone above and beyond in giving human artists the middle finger. DeviantArt decided to blow all their good-will from defending against NFT nonsense by making their AI training program opt-out, which is NOT HOW CONSENT WORKS. Mimic[2] and Dreambooth are basically artistic impersonation tools that have specific moral implications beyond the general practice of AI art.
Whether or not this becomes actual law is... well, I'll put it to you this way. Stability AI's Dance Diffusion is actually trained on an ethically-sourced, public-domain dataset. Why? Because they're afraid of being sued by the RIAA. Despite the name, copyright maximalists are less about maximizing the rights of artists and more about building moats around large publishers. And generative art does not threaten[3] those publishers, so it will not be banned.
[0] Yes, the same one that mandated upload filters on video sites.
[1] Roko's Basilisk posits the idea of a superintelligent AI - one that can eat everyone's brains and emulate them perfectly - constructing a perfect hell for people who didn't build it as a way to threaten those people in the past to build it.
Longtermists can burn in computer-generated hell. Time discounts exist for a reason.
> Section 23 (Adaptations and rearrangements): "(1) Adaptations or other rearrangements of a work, especially a melody, may only be published or used with the consent of the author. If the newly created work is at a sufficient distance from the work used, it does not constitute an adaptation or rearrangement within the meaning of sentence 1."
This amounts to the same as what TFA says about derivative works in general. If it's too close to the original, it may be infringing copyright. Gauging the boundary between "too close" and "not too close" is the bread and butter of copyright courts.
> Section 44b (Text and data mining): "(1) Text and data mining is the automated analysis of one or more digital or digitized works in order to obtain information, in particular about patterns, trends and correlations.
(2) Duplications of legally accessible works for text and data mining are permitted. The copies are to be deleted when they are no longer required for text and data mining.
(3) Uses according to paragraph 2 sentence 1 are only permitted if the right holder has not reserved them. A reservation of use for works accessible online is only effective if it is in machine-readable form."
Again, IANAL, but in my opinion this covers neural networks. "Obtaining information about patterns, trends and correlations" is as close to a literal description of the purpose and function of artificial neural networks as you're going to get in legalese. Paragraph 2 just means that you have to delete your data-mined stash if you ever deem your model complete (but it should not be too hard to argue that ML training is always an ongoing process and thus the mined data will always be required). Regarding Paragraph 3, if I'm not mistaken, the "machine-readable form" part is very, very specifically aimed at robots.txt. That is, if you allow your art to appear in search results (which most artists want), then it's also fair game for data mining.
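To illustrate that reading: under it, an artist's site could reserve data-mining use by disallowing known dataset crawlers in robots.txt while staying visible to search engines. A hypothetical example (CCBot is the user-agent token Common Crawl documents; treat the exact names as illustrative):

```
# Illustrative robots.txt: reserve works from data mining
# while leaving them crawlable by ordinary search engines.
User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

The catch, of course, is that this only works for crawlers that identify themselves and honor the file.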
Once again, IANAL, and this is only German law. But I think it's very easy to make a case here that this existing code of law covers AI training as well, as long as measures are taken to ensure that source images cannot be reproduced with any sort of fidelity (and obviously that's a big "if").
IAAL. Regardless of how the Copilot case develops at the trial-court level, anyone who thinks the Roberts Court is ultimately going to kneecap this technology needs to reboot.
Yeah, I think people also need to give up on saying "well, precedent says" with the current SCOTUS roster. They don't actually care about precedent when it doesn't square with their agenda. It's hard to believe that the mightiest lawyers in the land think precedent is unimportant, but we've already seen it a few times, and this SCOTUS is just getting started.
There is a lot of case law on what a derivative work is. A model that can output an image identical to one in its training set doesn’t mean the trained model itself is infringing or a derivative work.
> txt2img prompt crafting is a bit of an art in its own right, as getting the model to spit out what you want isn’t always trivial.
I don't find the output images from a lot of "AI" art artistically interesting just because an "AI" made them. The output might be novel, or even kind of surprising, but it's still just the output of code. Someone might say the code of an "AI" art machine could qualify as art. I often find the txt2img prompts artistic in a way that the output images lack, because they are representative of human imagination, often like surreal poetry, or like musique concrète, cutting and pasting segments around on a tape. This is not to deny that the output of "AI" art is impressive. It is, frankly. But it is impressive as machine, computational output rather than as human artistic merit.
It comes down to a simple thing: if you showed me two images and you told me a human made one, and "AI" made the other, I will always find the former artistic in a way that I would never find the latter.
This is also not to touch on the licensing issues that arise from these tools. That's its own knot to untangle.
Art Turing test: If you can't distinguish between human-made art and AI-made art, then there's nothing backing up your emotional bias.
It doesn't matter anymore that you emotionally value, or want to emotionally value, human-made art more than AI-made art, if you can't distinguish the two except based on other people's representations of how it was made.
Unless you watched a (non-augmented) human make a piece of art, you have no assurance that a human didn't touch up an AI generated art piece or simply curate a collection of fully AI-generated art.
ETA: I'm not trying to put down recognizable human-created art that's already been created, especially historically notable works. I'm not sure what all those replies are trying to get at. Of course there can continue to be an art-collecting market for human-made works made before the AI-art era.
I think OP understands there may not be a physical difference, but the provenance / "Colour of the Bits" is important to them; and that's a valid position to take (not the only valid position to take).
For example, a woolen blanket made by my mother is much, much dearer to me than an equivalent woolen blanket made in a factory.
I see it very likely that certified-provenance human-art may command high prices in the future; but also for physical part, it's pretty trivial and doesn't have to be so posh and fancy - anything from street art to art galleries and art collectives/workshops.
If a copy of a Van Gogh is next to the original, and you can't tell which is the original, do they have equal artistic value?
If an artist who studied Van Gogh intensely made an original piece in Van Gogh's style, does it have equal artistic merit as one of Van Gogh's originals?
AI is just going to be another aspect of provenance, and if it's found out an art piece had AI used to make it, it won't be forgotten or ignored.
I can’t tell if someone is selling me something made locally or just passing off something made in China as local, but I still try to support local because I believe the underlying knock-on effects are greater.
For me personally, it's more about appreciating the act and capabilities possible by a human being.
I find it similar to watching chess. AI/Chess engines will always be better than humans, but it's exciting to watch a human play to see what they can do.
That is not how art works at all, though. It's very difficult to distinguish a real Van Gogh from a fake Van Gogh. The real one will still sell for millions and the fake will not.
There's a spectrum of human agency in art. Where will you draw the line? Where does Warhol fit? Pollock? Bridget Riley, Vasarely or even Malevich?
Whatever line you draw will leave several canonical figures on the wrong side of it. Maybe that's OK and you'll draw your line in the sand. Just don't pretend it's a non-controversial consensus.
I agree. Also, it's not that stable diffusion spontaneously generates art: someone has thought of a prompt, written it down, iterated, and finally decided that a given image was good enough to publish. Is stable diffusion an "artist" or is it a tool?
I find "what is art" discussions impossible to resolve and untangle from prejudices and biases. The least problematic answer I've found comes from John Carey, paraphrasing: "art is whatever someone has decided it to be". In other words, if I decide something is art, then it is.
The problem shifts slightly to a more interesting way to pose the question: why does some art have more value than other art (or even, "does some art have more value than other art")? Equally difficult to resolve, but more prone to highlighting the prejudices and biases mentioned above.
As a historical note, this argument is over 50 years old. In 1965, an engineer at Bell Labs generated pseudo-random artworks inspired by famous artist Piet Mondrian. He found that a) people couldn't tell if the artwork was generated by Mondrian or the computer, and b) people generally liked the computer's art better.
For "art as art" purposes, where we take some painting, say it's worth a million dollars, and put it in a museum, I don't think AI can replace humans. But a lot of art is much more utilitarian: art in games, art in movies and TV, in picture books, etc. In those cases I care that it looks how it is meant to and costs as little as possible to make.
I feel like the majority of "art" we consume falls in the second category.
I have a project that involves generating and publishing an image to an audience every day. It's not high-value art, but the process does often involve me thinking of a concept I want to communicate to my audience and working with the AI to produce something that communicates that concept.
When I started doing this, I was uncomfortable thinking of myself as the author of the image in any way. I'm okay with it now. There's some skill I am expressing, even if not much, and I am connecting with an audience.
This argument could have been made 150 years ago about photography. For the sake of argument: can a selfie be a form of art?
To take a selfie, human intervention is minimal: someone decides they want to take a selfie; takes dozens of them, pretty much at random; chooses one they like; publishes it.
Similarities with the stable diffusion process are striking. And yet, could you consider a selfie "artistic"? Are all selfies artistic? What if Annie Leibovitz or Marina Abramovic take a selfie?
What if Marina Abramovic generates a million images of a rabbit with stable diffusion and papers a whole warehouse with them? Is this art?
It's considered art for the purposes of e.g. copyright law. Recently there was a dispute about whether a selfie taken by a monkey belonged to the owner of the camera, or the monkey. (A judge opined that only humans can have copyrights, and PETA eventually settled out of court.)
If Marina Abramovic were to take a bunch of selfies, or generate a bunch of AI images, and exhibit them, that might be considered high-value art. A famous artist can literally tape a banana to a wall and it'd be considered high art.
Sure, and if your 4 year old son draws 2 sloppy stick figures representing him holding your hand, that's going to be valuable to you in a way a stranger's precocious 10-year-old child's drawing isn't (no matter how good it is).
But does that make it more artistic than something generated by AI which was coaxed out by a human applying their expertise?
A couple of years ago I wrote the script for a graphic novel. I would have liked to illustrate it myself (I went to art school after all), but I was too busy running a startup at the time, so I pitched it as a collaboration to about 30 different artists that I thought could do a great job with it. I could not get a single person to bite.
In my pitch I proposed generous royalties and cash compensation to the artists. All of them essentially said that they weren't taking commissions. I'm sure this is in part due to the fact that I have never been involved in the graphic novel world, so they'd be taking a chance on someone new. Still, it seemed there was no amount I could pay to get someone to work with me.
Now I am reviving this graphic novel idea and still looking for someone I can pay to help me work on the project. However, the more synthography advances as a technology the more it seems like I should explore it as a path for this project. It would help me develop the book much faster than I can on my own. At minimum it could help me get the story board in place.
Maybe some day synthography can help artists scale up their output, in the same way Michelangelo developed a workshop of apprentices. I would be more than happy to pay an artist's AI apprentice if I can't work directly with the human.
I think you're wildly underestimating the amount of effort and the very specific skills that go into making comics. It's either a labor of immense love for the art form, or getting paid a large amount of money up front to work with an established writer. Nobody is looking for an idea guy.
Hi, I run http://synapticpaint.com/ and using AI image generation for graphic novels/comics is one of the directions I'm exploring. If you're interested in collaborating to make your graphic novel a reality (I'll provide the tooling in return for product feedback), please email me at the email address in my profile! (I poked around on your site but couldn't find an email address.) Thanks!
If you don't mind answering, I'm really curious how much cash you were offering up front and what the page count you were commissioning was. Been talking to some art consultancies recently and I'm curious to how your experience compared to mine.
I've had similar experiences trying to commission works. It turns out artists are difficult to work with almost by definition, and those who actually act like they're providing a service you've agreed to pay money for are few and far between.
The major problem is that the AI companies are redirecting profit from artists to themselves. The creative industry will remain, but artists won't receive even one penny: companies can just 1) grab their work with absolutely zero payment, 2) fine-tune a new model, and 3) profit from the style. Artist as a job will cease to exist very soon, as artists are becoming free suppliers for AI companies. It's like Uber with remote drivers controlling the cars, except all the drivers work for free because the company claims the cars are "self-driving". How this is acceptable and legal is beyond my understanding.
There is a way we can pit some corporations against the others to help with this though: Train models exclusively on content by megacorps like Disney, then claim it's fair use.
Those guys lobbied to get copyright extended for ages for their own profit; for once they could help protect the ordinary artist.
Similar to Moore's law, "transformer" deep neural nets have been found to follow scaling laws[0]. This means that the faster your GPUs are and the more VRAM they have, the better a model you can train "for free".
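The scaling-law claim can be sketched numerically. Here's a minimal illustration using a Chinchilla-style parametric form; the constants are roughly in the ballpark of published fits, but treat them as placeholders, not exact values:

```python
# Illustrative Chinchilla-style scaling law:
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N = parameter count, D = training tokens.
# Constants approximate published fits; they are illustrative only.

def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.69, A: float = 406.4, B: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted pretraining loss as a smooth function of model and data size."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up the model (same data) lowers the predicted loss:
small = predicted_loss(1e9, 1e11)   # 1B params, 100B tokens
large = predicted_loss(1e10, 1e11)  # 10B params, 100B tokens
print(small > large)  # True: loss falls predictably with scale
```

The point of such laws is that the improvement from more compute is smooth and predictable, which is what makes "just buy bigger GPUs" a viable strategy.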
Training models from scratch only works with massive (labeled) datasets covering a massive data distribution. With language models, the datasets being used are quickly approaching "all known written text" sizes. Consider training a model from scratch on Microsoft's internal code: you'd capture not only its precious intellectual property, but also its technical debt. And code at Microsoft is not going to come close to covering the broad range of styles a coder could possibly use. Without enough data the model may well diverge, as it needs to see a given "example usage" in multiple different contexts before it can learn it.
My current understanding is that deep NNs are quite good at modeling an underlying distribution of data without needing any priors hard-coded about that dataset. But! They need to see a whole lot more of it than an adult human would - several orders of magnitude more - and they need accurate labels about 75-80% of the time.
He actually did disclose this [1], at least to the art fair administration:
> Olga Robak, a spokeswoman for the Colorado Department of Agriculture, which oversees the state fair, said Mr. Allen had adequately disclosed Midjourney’s involvement when submitting his piece; the category’s rules allow any “artistic practice that uses digital technology as part of the creative or presentation process.” The two category judges did not know that Midjourney was an A.I. program, she said, but both subsequently told her that they would have awarded Mr. Allen the top prize even if they had.
> Stable Diffusion? Well, that’s free-ish. Problem is, you’ll probably want to run it locally, which requires a really, really beefy graphics card. I was struggling to run it on a Vega56 - a GPU that goes for ~$150 used now - so I went out and got a RTX3090 for about $1,000. If you’re already a gamer with a GPU with 8Gb+ of VRAM you’re probably good, but for most people this is a bit absurd.
Nit: 2 weeks ago there was an article about getting SD running on an iPhone. The app isn't very time- or battery-efficient, but it just barely works on my (previous-gen) device: https://news.ycombinator.com/item?id=33539192
I get results in 2 minutes on my iPhone SE. At least, when it doesn't stop execution part way through. When that happens, I have to change the seed, but it happens often enough that it sometimes needs a second change of seed to actually complete.
Still, 2 minutes on my phone. I remember Bryce taking half an hour to render lower resolution images back in the late 90s.
Running on iPhone-grade hardware is what they refer to as "struggling". You need a lot of iterations in any workflow, be it style transfer or text-to-image, so you want as fast generation and as many CUDA cores as possible, and up to 24GB VRAM for higher resolutions. More importantly, most of the power of this stuff is not just in inference, but in finetuning the model on your custom data, which requires 12GB VRAM for SD 1.4 at the very least, and ultimately multiple server-grade GPUs. And even more importantly, you need full control over your own box, not an iPhone app - it's still a very experimental field which is only going to get more complex and harder to master.
That said, you can just rent a GPU instead of buying it right away. It's not very expensive, and can be cheaper overall depending on your usage. It's also the only sane way to experiment with training as you need a lot of attempts.
This stuff can really devour any hardware you can possibly throw at it.
> you can just rent a GPU instead of buying it right away
I agree, this is an underrated option and I'm not sure why it doesn't get more attention. I've been using Stable Diffusion on a cloud machine, and, say, $20 will get you very far - probably thousands of images generated and a few sessions of fine-tuning. It's way more affordable than its apparent popularity would suggest.
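For a sense of scale, here's a back-of-the-envelope check of that claim. The rental rate and per-image time below are assumptions for illustration, not quotes from any provider:

```python
# Back-of-the-envelope: how far does $20 of rented GPU time go?
# All numbers below are assumptions; actual rates and speeds vary widely.
rate_per_hour = 0.50      # USD/hour, a budget cloud GPU rental
seconds_per_image = 10    # one 512x512 SD sample at ~50 steps
budget = 20.0             # USD

hours = budget / rate_per_hour
images = hours * 3600 / seconds_per_image
print(f"{hours:.0f} GPU-hours, ~{images:.0f} images")  # 40 GPU-hours, ~14400 images
```

Even if the per-image time is several times worse than assumed here, "thousands of images for $20" holds up.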
When I was in school studying computer science, as we learned about code reuse and libraries and whatnot, one student expressed a concern that all the software the world ever needed would be written, and we would be out of a job. The professor smiled at the concern, and said that it seemed to turn out in practice, no matter how many problems were solved in reusable ways, that there were always new things to do.
This development seems parallel to me: AI does not seem to be inventing, even in the limited sense of closely customizing for a purpose. It is remixing the well established. If machines can help with remixing and reapplying the old, that's all to the good. Humans still own the new.
Suddenly, he's got a cheap option for getting some basic art for a simple turn-based game, card game or similar.
If a game of his takes off? He'll have to hire artists to keep up with the increased workload.
AI will move barriers to entry lower, not higher.
I don't know of such a law, and it's simply false here. People have two finite budgets they can allocate to entertainment: time and money. In many ways money is no longer the limiting factor, since people have Steam lists with hundreds of unplayed games, plus the abundance of free-to-play. The main challenge is not having a saleable product; it is managing to get it in front of enough eyes, with every marketplace already saturated and ads useless unless you have a very high budget.
I'm a bit worried that it will devalue indie productions even further because of an over-abundance of AI-generated content, further elevating big-industry productions that can afford really competent artists whom you can't even begin to replace.
Namely, it is more than time alone, it is "that right time" and there is much, much less of it.
In any case, it is a bizarre notion with perhaps limited applicability in the real world.
“conversely, as the price of a good decreases (↓), quantity demanded will increase (↑)”
https://en.wikipedia.org/wiki/Law_of_demand
https://en.wikipedia.org/wiki/Jevons_paradox
In this case you can think of Artists as the "resource" being used.
Have you talked to any artists? Because in my experience they universally hate AI-generated art. Not just a little. I think I've seen more artists who were open to NFTs at this point than to AI. Art-sharing websites are getting filled to the brim with posts from new accounts sharing AI art that sometimes very directly takes clearly identifiable features and styles from other artists.
AI-generated art is heralding a possible dark age for the visual arts. There are many lost arts already; if AI trivializes the cost and effort, why bother honing the craft? Then we find ourselves in a creative void, as the only art generated will be based on the art that came before.
In my experience of visiting art museums, some of the classical and renaissance art is incredible. There is clearly some profound idea behind the piece, and the skill is the vehicle of that idea. A huge amount of art, though, is simply commissioned portraits of some wealthy businessman or forgettable Duke. The purpose of these paintings is simply to represent reality. There is no idea behind it.
For the artists painting them, the camera must have been devastating. It meant that years of training in studying how to properly draw an eye and how to get the colors inside a shadow right were at risk of being entirely devalued. But the camera also laid bare the fact that art is more than just skill. I imagine a lot of artists who were coasting by on skill alone hated the camera.
We’re at that point now with digital art. The cost of skill in actually drawing a thing is now zero. Anyone can do it. The question is, do they have anything to say?
It's happening very quickly and the outcomes are difficult to predict, but it's clear it's a big deal. We either fear the unknown or are excited by it. I feel most of the controversy is in fact the robots-will-make-our-life-comfortable / robots-will-take-our-jobs discussion. The real problem is elsewhere, not the tech itself.
Photography is of course a good comparison, but don't forget 3D CGI. I can place things exactly how I want them and have the computer do ALL the drawing for me? Outrageous! It certainly removed a lot of need for menial manual work (and even filming), but it has also opened the opportunity for creating entirely new things that before would just not be practical, or enjoyable, to make. And there's no doubt now that a lot of skill is required to make things look good. It can seem like it's easy to get great-looking stuff with AI, but it's not. What's easy is to churn out tons of samey derivative crap that will age very badly as soon as we get used to it, like with early 3D.
Photography can capture reality far better than any artist could achieve even with a lifetime of practice, and today it can do so with nearly zero cost--take out your phone, hold down the camera button, snap a picture. These advances hardly put an end to drawing and painting as skills.
It turned out that there's value in art beyond verisimilitude, and if anything the advent of photography encouraged a resurgence of exploration into what art really is: a movement away from pure verisimilitude in search of feeling.
This is what I expect to happen with AI art today. Yes, the AI models can produce art that looks awesome and wonderful now, but it's frankly already starting to wear thin. Art exists to fulfill humanity's need to experience something new, and AI alone simply cannot keep up with the meta game.
There will need to be human artists guiding the AI to produce something valuable, and there will continue to be human artists working by hand to create the novel works that the AI could never have rendered.
The main issue is that anyone can write a simple prompt and get 85% of the way to an image that previously required a decade of experience to create.
But that last 15% is where real artists differentiate themselves. It’s the last 15% that can’t be faked if you don’t have a sense of artistic direction and discretion.
I strongly disagree with "why bother honing it" as well. If anything, being able to get to the "honing" stage so quickly is what makes these tools so powerful. I'm sure if Gordon Ramsay had to butcher his own chicken, grind his own grain for flour, and churn his cream into butter, he wouldn't spend as much time on the finishing and plating of a chicken sandwich.
With the limited subset of artists I have worked with (traditional modern / contemporary painters), they have been very inspired by the creative possibilities of these models, and in one case see it as a great opportunity to scale their work to new types of media such as video.
It would have been practically impossible for one of the artists I am working with to draw each frame by hand but now she can generate music videos in her style by tweaking and working with Stable Diffusion and a set of reference images.
[0] https://www.moma.org/calendar/exhibitions/5535
I don't think the invention of the camera displaced many artists, but I'm quite sure that they were just as angry, probably saying how it doesn't have a soul and such. But in the end it improved human life to unimaginable levels.
I think the same, if not more, can or will be achieved with AI-generated art. And I don't think it'll displace many artists' jobs.
Again, the coming of AI art feels like the coming of camera must have felt.
As someone that works in generative modeling I think it is important to note that we are hand selecting results and not showing the average ones. There's been such a hype machine that if we show our average results that we'll get killed in review (publishing in a hyped space unfortunately requires a lot of salesmanship). Not a thing I'm particularly happy about, but I think this is important to note because if you're just looking at the images on Twitter, Blogs, and other social media then you're seeing the top 1% results. Images that take quite a bit of time and effort to produce. Likely not as long as an actual artist to make them, but still a lot of skill is involved.
I do agree that I think it'll be a big boon for artists. I've even been using these to get better at drawing. The results are bad even with prompt engineering, but hey, my human mind can fill in the gaps and it is good for expanding creativity. We'll get better at these models for sure but I'm not quite convinced it'll kill a lot of art jobs. But maybe I don't know what those jobs are that are being killed. But then again, I've even seen artists do much better at these models than me even though I understand the code and math going on. There's also cool things like walking through the latent spaces, but you won't be able to do that with the hugging face code. But that is art that you won't be able to do elsewhere. It is entirely new.
https://huggingface.co/spaces/akhaliq/Ghibli-Diffusion
https://huggingface.co/lambdalabs/sd-image-variations-diffus...
https://huggingface.co/spaces/shi-labs/Versatile-Diffusion
https://huggingface.co/spaces/akhaliq/openjourney
https://huggingface.co/spaces/anzorq/finetuned_diffusion
https://huggingface.co/lambdalabs/sd-pokemon-diffusers
https://huggingface.co/spaces/hakurei/waifu-diffusion-demo
I'd still like to get some good cartoony results with SD, but it's clearly not possible yet.
?
Has it?
Look, you can argue about whether it’s morally right or not to use models that are fine tuned explicitly to copy the style of someone else, trained on their art without their consent, to make a model that can generate images very similar to the training images.
You can argue about whether technically that's copying, or whether lossy compression is copying.
…but legal and moral are different things, and right now, as far as I’m aware:
- it’s only legal because there are no laws specifically making it illegal currently.
- there are active (eg. Copilot) cases challenging this to set a precedent.
- it's sufficiently ambiguous, having a model that anyone can type "a naked picture of a 12 year old" into and get exactly that as output, that Stability has nerfed their most recent model release.
- there is a reasonably obvious similarity to other fields where a thing is itself not illegal or bad, but it can enable people to do illegal or bad things, and therefore access, ownership and usage of said things (eg. Hand guns) is heavily legislated.
I think “this has already been decided to be legal” is a blatantly false assumption.
Also, this looks to me like a situation where generating art is suddenly so cheap and easy it doesn't matter what the courts decide. People can and will ignore the law, because it is trivial to generate new pictures and it is cost ineffective to enforce any restrictions. We're going to see this tech take off. Who knew that art would be the next thing software took out?
People look at art and make art in that style all the time, now a machine exists that can do that. Why is that unethical? Because they didn't consent for the machine to look at it/learn from it, but they did for humans? I don't think this argument will be able to hold the wave of change that's coming from this new capability. They'd be better off long term learning how to use it.
Nobody creates purely original things in a vacuum, machines won't either.
People on HN have very strong opinions about this stuff without looking at any of the relevant case law.
This is the case with anything… anything not illegal is legal, at least in the US…
I think you're dancing around with words here. Let me be super specific:
There's no law specifically saying I can't use a jelly-coated toaster to beat someone to death... but there are existing laws that cover 'beating someone to death'.
If you beat someone to death with a jelly-coated toaster, you might argue for some obscure reason that your actions are not covered by the existing legal framework around beating people to death, but most likely you will not be protected by the claim you 'didn't know it was illegal' to do that.
...because the courts have not ruled that beating people to death with jelly-coated toasters is legal.
ie.
a) There is no specific legal precedent or law around something but other laws related to it exist
and
b) Something has been determined to be legal by some precedent / law / whatever
Are not equivalent.
Regardless of what people want to believe, or have opinions one way or another, the assertion, made in the original article, that (b) was true, is not true.
Only (a) is true, and that is, legally, a much weaker statement.
Same with copilot; people copy shit from GitHub and SO all the time without mentioning copyrights and a lot of code people cough up is just ‘stolen’ from someone without remembering who/where it was; what’s the difference?
Lots of bands (nearly all?) learn their craft by playing covers. At some point the artists in the group start to find their voice.
I suspect AI never will.
Court cases / legal clarification may outlaw Stable Diffusion or DALL-E for having been trained without the consent of the original artist on their copyrighted artworks. But if this technology proves valuable and viable, the likes of Disney, WPP, and Omnicom Group will pay a few dozen artists to create enough work to seed an engine that can generate 50 million wholly-owned lookalikes.
Copyright protection will ultimately shield the megacorps from competition by the common folk, not artists from competition by corporations.
No, it's only legal because nobody has litigated a case all the way to the Supreme Court. We all thought that APIs weren't copyrightable, and then, suddenly they were (or worse, were "assumed to be" but not explicitly ruled upon).
One might argue that any unlicensed use of copyrighted material is misuse (unless the courts have ruled it is fair use.)
Only the copyright holder may bring action against a potential misuse. Typically, individuals tend not to have deep enough pockets to hire the lawyers to bring the action. And during discovery it may be found that the model was trained on works-for-hire made by humans replicating the original style. Or appeals may continue on at more legal cost, without guarantee of recompense.
Most will decide it's just not worth it. Until a company similar to Getty buys up all the copyrighted material and, in addition to sensible suits, brings spurious legal action against individuals and turns the tables.
Unfortunately art is a hard business to make a lot of money in and the vast majority of the people unhappy about this do not have the money to mount any kind of legal challenge to the corporations developing these databases, so we are basically fucked.
Which is in some ways nothing new, the average Internet user has no concept of creator's rights and will blissfully assume that if something's not behind a paywall, it's free for any use. It feels even shittier and worse when this is being done on an industrial scale by these AIs, though.
Open source models will likely be dominant (this was how Stable Diffusion got popular, it's not clear their new neutered 'safe' model will be as popular or a successful business).
This is the new reality even if the bigco gets sued out of business. Artists need to accept that reality eventually. Just like all "panics".
Fortunately AI generators are not a direct replacement for artists, I've seen it used as part of the design process, but that still takes talent. Maybe it will get better but it's not a one-stop shop for business use-cases. More likely it will be AI+artist, not AI replacing artist.
In neither case is it legal to use outputs of an AI that infringe an already-existing copyrighted work. This means anyone using it to generate production-ready artwork is exposing themselves to potential legal liability if their AI winds up regurgitating training set data. This is what I worry about way more than just "are the model weights infringing copyright".
As for morality:
- Absolutely none of the current generative art systems are trained from scratch on ethically-sourced datasets. The AI companies just assume that because they need ungodly amounts of training set data, that they are morally entitled to get it, because they spend more time staring into the eyes of a basilisk[1] than worrying about the world they already live in.
- Multiple companies getting into generative art have gone above and beyond in giving human artists the middle finger. DeviantArt decided to blow all their good-will from defending against NFT nonsense by making their AI training program opt-out, which is NOT HOW CONSENT WORKS. Mimic[2] and Dreambooth are basically artistic impersonation tools that have specific moral implications beyond the general practice of AI art.
Whether or not this becomes actual law is... well, I'll put it to you this way. Stability AI's Dance Diffusion is actually trained on an ethically-sourced, public-domain dataset. Why? Because they're afraid of being sued by the RIAA. Despite the name, copyright maximalists are less about maximizing the rights of artists and more about building moats around large publishers. And generative art does not threaten[3] those publishers, so it will not be banned.
[0] Yes, the same one that mandated upload filters on video sites.
[1] Roko's Basilisk posits the idea of a superintelligent AI - one that can eat everyone's brains and emulate them perfectly - constructing a perfect hell for people who didn't build it as a way to threaten those people in the past to build it.
Longtermists can burn in computer-generated hell. Time discounts exist for a reason.
[2] https://illustmimic.com/en/
[3] Specifically: even in a world where AI completely outperforms humans on all artistic endeavors, publishers will just buy the AI companies.
I have investigated this a few months ago while researching a radio show segment on AI art. In German law, the relevant statutes appear to be section 23.1 and section 44b of the copyright code. The original text is at https://www.gesetze-im-internet.de/urhg/__23.html https://www.gesetze-im-internet.de/urhg/__44b.html and below is a Google Translate result that I have proof-read.
> Section 23 (Adaptations and rearrangements): "(1) Adaptations or other rearrangements of a work, especially a melody, may only be published or used with the consent of the author. If the newly created work is at a sufficient distance from the work used, it does not constitute an adaptation or rearrangement within the meaning of sentence 1."
This amounts to the same as what TFA says about derivative works in general. If it's too close to the original, it may be infringing copyright. Gauging the boundary between "too close" and "not too close" is the bread and butter of copyright courts.
> Section 44b (Text and data mining): "(1) Text and data mining is the automated analysis of one or more digital or digitized works in order to obtain information, in particular about patterns, trends and correlations. (2) Duplications of legally accessible works for text and data mining are permitted. The copies are to be deleted when they are no longer required for text and data mining. (3) Uses according to paragraph 2 sentence 1 are only permitted if the right holder has not reserved them. A reservation of use for works accessible online is only effective if it is in machine-readable form."
Again, IANAL, but in my opinion this covers neural networks. "Obtaining information about patterns, trends and correlations" is as close to a literal description of the purpose and function of artificial neural networks as you're going to get in legalese. Paragraph 2 just means that you have to delete your data-mined stash if you ever deem your model complete (but it should not be too hard to argue that ML training is always an ongoing process and thus the mined data will always be required). Regarding Paragraph 3, if I'm not mistaken, the "machine-readable form" part of paragraph 3 is very very very specifically aimed at robots.txt. That is, if you allow your art to appear in search results (which most artists want), then it's also fair game for data mining.
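If robots.txt is indeed the "machine-readable form" the statute has in mind, an artist could, hypothetically, reserve data-mining rights while staying visible in search results with something like the sketch below. The crawler names are real examples of training-data crawlers; which bots actually honor such rules is a separate question:

```
# robots.txt - reserve data-mining use while staying in search indexes
# (illustrative sketch; bot compliance is voluntary)

User-agent: GPTBot        # OpenAI's training-data crawler
Disallow: /

User-agent: CCBot         # Common Crawl, a common source of training data
Disallow: /

User-agent: *             # everyone else (e.g. search engines) may crawl
Allow: /
```

Whether a per-bot Disallow counts as a valid §44b(3) reservation, rather than a blanket opt-out, is exactly the kind of detail a court would have to settle.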
Once again, IANAL, and this is only German law. But I think it's very easy to make a case here that this existing code of law covers AI training as well, as long as measures are taken to ensure that source images cannot be reproduced with any sort of fidelity (and obviously that's a big "if").
I had to say it.
What does this mean?
> txt2img prompt crafting is a bit of an art in its own right, as getting the model to spit out what you want isn’t always trivial.
I don't find the output images from a lot of "AI" art to be artistically interesting because an "AI" made them. The output might be novel, or even kind of surprising, but it's still just the output of code. Someone might say the code of an "AI" art machine could qualify as art. I often find the txt2img prompts artistic in a way that the output images lack, because they are representative of human imagination, often like surreal poetry, or like musique concrète, cutting and pasting segments around on a tape. This is not to deny that the output of "AI" art is impressive. It is, frankly. But as a matter of machine, computational output rather than human artistic merit.
It comes down to a simple thing: if you showed me two images and told me a human made one and "AI" made the other, I would always find the former artistic in a way that I would never find the latter.
This is also not to touch on the licensing issues that arise from these tools. That's its own knot to untangle.
It doesn't matter anymore that you emotionally value, or want to emotionally value, human-made art more than AI-made art, if you can't distinguish the two except based on other people's representations of how it was made.
Unless you watched a (non-augmented) human make a piece of art, you have no assurance that a human didn't touch up an AI generated art piece or simply curate a collection of fully AI-generated art.
ETA: I'm not trying to put down recognizable human-created art that's already been created, especially historically notable works. I'm not sure what all those replies are trying to get at. Of course there can continue to be an art-collecting market for human-made works made before the AI-art era.
For example, a woolen blanket made by my mother is much, much dearer to me than an equivalent woolen blanket made in a factory.
I see it as very likely that certified-provenance human art may command high prices in the future; but for physical art, establishing provenance is pretty trivial and doesn't have to be posh and fancy - anything from street art to art galleries and art collectives/workshops.
If an artist who studied Van Gogh intensely made an original piece in Van Gogh's style, does it have equal artistic merit as one of Van Gogh's originals?
AI is just going to be another aspect of provenance, and if it's found out an art piece had AI used to make it, it won't be forgotten or ignored.
The same holds for AI art.
I find it similar to watching chess. AI/Chess engines will always be better than humans, but it's exciting to watch a human play to see what they can do.
Whatever line you draw will leave several canonical figures on the wrong side of it. Maybe that's OK and you'll draw your line in the sand. Just don't pretend it's a non-controversial consensus.
I find "what is art" discussions impossible to resolve and untangle from prejudices and biases. The least problematic answer I've found comes from John Carey, paraphrasing: "art is whatever someone has decided it to be". In other words, if I decide something is art, then it is.
The problem shifts slightly to a more interesting question: why does some art have more value than other art (or even: does some art have more value than other art)? Equally difficult to resolve, but more prone to highlighting the prejudices and biases mentioned above.
As a historical note, this argument is over 50 years old. In 1965, an engineer at Bell Labs generated pseudo-random artworks inspired by the famous artist Piet Mondrian. He found that a) people couldn't tell whether an artwork was made by Mondrian or by the computer, and b) people generally liked the computer's art better.
https://www.historyofinformation.com/detail.php?entryid=4437
I feel like the majority of "art" we consume falls in the second category.
Would you say the same of general intelligence? IE, general intelligence (artificial or otherwise) requires an agent?
When I started doing this, I was uncomfortable thinking of myself as the author of the image in any way. I'm okay with it now. There's some skill I am expressing, even if not much, and I am connecting with an audience.
To take a selfie, human intervention is minimal: someone decides they want a selfie; takes dozens of them, pretty much at random; chooses one they like; and publishes it.
Similarities with the stable diffusion process are striking. And yet, could you consider a selfie "artistic"? Are all selfies artistic? What if Annie Leibovitz or Marina Abramovic take a selfie?
What if Marina Abramovic generates a million images of a rabbit with stable diffusion and papers a whole warehouse with them? Is this art?
Edit: syntax.
https://www.npr.org/sections/thetwo-way/2017/09/12/550417823...
But most selfies probably aren't high value art.
If Marina Abramovic were to take a bunch of selfies, or generate a bunch of AI images, and exhibit them, that might be considered high value art. A famous artist can literally tape a banana to a wall and it'd be considered high art:
https://www.vogue.com/article/the-120000-art-basel-banana-ex...
But does that make it more artistic than something generated by AI which was coaxed out by a human applying their expertise?
In my pitch I proposed generous royalties and cash compensation to the artists. All of them essentially said that they weren't taking commissions. I'm sure this is in part due to the fact that I have never been involved in the graphic novel world, so they'd be taking a chance on someone new. Still, it seemed there was no amount I could pay to get someone to work with me.
Now I am reviving this graphic novel idea and still looking for someone I can pay to help me work on the project. However, the more synthography advances as a technology the more it seems like I should explore it as a path for this project. It would help me develop the book much faster than I can on my own. At minimum it could help me get the story board in place.
Maybe some day synthography can help artists scale up their output, in the same way Michelangelo developed a workshop of apprentices. I would be more than happy to pay an artist's AI apprentice if I can't work directly with the human.
There is a way we can pit some corporations against the others to help with this though: Train models exclusively on content by megacorps like Disney, then claim it's fair use.
Those guys lobbied to get copyright extended for ages for own profit; for once they could help protect the ordinary artist.
Training models from scratch only works with massive (labeled) datasets covering a massive data distribution. With language models, the datasets being used are quickly approaching "all known written text" sizes. Training a model from scratch on Microsoft's internal code would bake in not only its precious intellectual property, but also its technical debt. And code at Microsoft is not going to come close to covering the broad range of styles a coder could possibly use. The model may well diverge without enough data, since it needs to see a given "example usage" in multiple different contexts before it can learn it.
My current understanding is that deep NNs are quite good at modeling the underlying distribution of a dataset without needing any hard-coded priors about it. But they need to see a whole lot more of it than an adult human would - several orders of magnitude more - and they need accurate labels about 75-80% of the time.
[0] https://www.lesswrong.com/tag/scaling-laws
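To make the data-hunger point concrete: the scaling-law literature linked above fits test loss to a power law in dataset size. A rough sketch of the commonly cited form (the symbols D_c and alpha_D are fitted constants that vary by model family and paper; the exponent shown is the one reported for transformer language models, quoted here from memory):

```latex
% Loss falls as a power law in dataset size D (Kaplan et al. form);
% D_c and \alpha_D are empirically fitted constants.
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.095
```

A small exponent like this is exactly why "several orders of magnitude more data" buys only modest loss improvements: halving the loss requires multiplying the dataset by a huge factor.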
He actually did disclose this [1], at least to the art fair administration:
> Olga Robak, a spokeswoman for the Colorado Department of Agriculture, which oversees the state fair, said Mr. Allen had adequately disclosed Midjourney’s involvement when submitting his piece; the category’s rules allow any “artistic practice that uses digital technology as part of the creative or presentation process.” The two category judges did not know that Midjourney was an A.I. program, she said, but both subsequently told her that they would have awarded Mr. Allen the top prize even if they had.
[1] https://www.nytimes.com/2022/09/02/technology/ai-artificial-...
Nit: 2 weeks ago there was an article about getting SD running on an iPhone. The app isn't very time- or battery-efficient, but it just barely works on my (previous-gen) device: https://news.ycombinator.com/item?id=33539192
Still, 2 minutes on my phone. I remember Bryce taking half an hour to render lower resolution images back in the late 90s.
That said, you can just rent a GPU instead of buying it right away. It's not very expensive, and can be cheaper overall depending on your usage. It's also the only sane way to experiment with training as you need a lot of attempts.
This stuff can really devour any hardware you can possibly throw at it.
I agree, this is an underrated option and I'm not sure why it doesn't get more attention. I've been using stable diffusion on a cloud machine, and, say, $20 will get you very far - probably thousands of images generated and a few sessions of fine tuning. It's way more affordable than its apparent popularity would suggest.
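The "thousands of images for $20" claim is easy to sanity-check with back-of-the-envelope arithmetic. The rental rate and per-image time below are assumptions for illustration, not quotes from any provider:

```python
# Rough cost-per-image estimate for cloud Stable Diffusion.
# Assumed numbers: a mid-range cloud GPU at ~$0.50/hour and
# ~10 seconds to generate one 512x512 image.

GPU_COST_PER_HOUR = 0.50   # USD/hour, assumed rental rate
SECONDS_PER_IMAGE = 10     # assumed generation time per image

def images_per_dollar(cost_per_hour: float, secs_per_image: float) -> float:
    """How many images one dollar of GPU time buys."""
    images_per_hour = 3600 / secs_per_image
    return images_per_hour / cost_per_hour

n = images_per_dollar(GPU_COST_PER_HOUR, SECONDS_PER_IMAGE)
print(f"~{n:.0f} images per dollar, ~{20 * n:.0f} images for $20")
```

Under these assumptions $20 buys on the order of ten thousand images, which is consistent with the comment above; even pessimistic rates leave you comfortably in the "thousands" range.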
This development seems parallel to me: AI does not seem to be inventing, even in the limited sense of closely customizing for a purpose. It is remixing the well established. If machines can help with remixing and reapplying the old, that's all to the good. Humans still own the new.