easyThrowaway · 3 years ago
Another opinion popular with no one: AI will have on artists the same impact that Spotify had on the music industry, that is, it will kill any revenue flow for anyone outside of the publishers and big artists/players.

Spotify basically killed any money coming from physical distribution. Worse than piracy, which was inevitable at the time too, but with piracy at least you didn't have to pay your lawyers to renegotiate with your label on top of NOT getting any money.

Adobe, OpenAI, whatever: they want artists to draw for them for peanuts to train their models, sign a waiver saying "I'm ok not getting any money from any AI art made from this", and then resell the output for $$$ on something like Splice[1], while at the same time overtraining such models in ways that make it extremely obvious which artist's work went into them in the first place.

At the end of the day the model itself is going to be basically irrelevant, while knowing whose works were actually used to train it will be the truly differentiating feature.

But you know, "the AI did this picture, so we don't have to pay you."

[1] https://splice.com/features/sounds

vouaobrasil · 3 years ago
I agree completely, and I have been constantly speaking about how AI will be a wealth concentrator, replacing a mass of jobs more diverse than previously seen. Unlike previous machines, which could each take over one or two kinds of job, when humans get REALLY efficient at training AI it will replace hundreds en masse.

AI will also have an additional effect: it will be isolating in the sense that the need for other humans will decrease.

These two points alone, strengthened by many others, have led me to conclude that the world is MUCH better off with AI and that tech companies are ruining the world with their abominations.

tivert · 3 years ago
> These two points alone, strengthened by many others, have led me to conclude that the world is MUCH better off with AI and that tech companies are ruining the world with their abominations.

Did you mean "the world is MUCH better off without AI"?

What you wrote doesn't make much sense within the context of your comment, but I have to ask because there are some software engineers who find abominations appealing for some reason, or who just lack the ability to tell the difference between desirable technology and a technological abomination. I think a big component of the latter is many software engineers' overconfidence in their own abilities, which makes them easy marks, and the willingness of many kinds of hype men to exploit that and con them with propaganda.

hnhg · 3 years ago
Another side effect: the wealth will be concentrated in rich tax-avoiding corporations and elites, meaning that the tax burden for society will fall even harder on the remaining middle and working classes, who will have to pay for the upkeep of everything.
blibble · 3 years ago
> Unlike previous machines which can take 1-2 jobs, when humans get REALLY efficient at training AI, it will replace hundreds en masse.

more like hundreds of millions

> AI will also have an additional effect: it will be isolating in the sense that the need for other humans will decrease.

unless there's a complete restructuring of our society then a repeat of the late 18th century seems to be the likely outcome

with their stake in society gone: the peasant class get fed up of eating dirt and storm the bastille

(I really, really hope the AI revolution turns out to be just hype)

badpun · 3 years ago
> Unlike previous machines which can take 1-2 jobs

There are many machines replacing hundreds or even thousands of people: farm equipment, trains, tunnel boring machines, etc.

HellDunkel · 3 years ago
Should it say "without AI"? It makes no sense as written..
wwweston · 3 years ago
> Spotify basically killed any money coming from the physical distribution - Worse than piracy, which was inevitable too at the time, but at least you didn't have to pay your lawyers to renegotiate with your label on top of NOT getting any money.

It’s even worse than you say — it was murder on digital retail too, right at the time when it was on track to compete with or exceed old physical sales.

Spotify adopted the economics of piracy and stamped them with the false veneer of legitimacy.

amadeuspagel · 3 years ago
Fundamentally, neither Spotify nor piracy matters. People enjoy making music. Today, there are more people able to make and publish music than ever, but the day still only has 24 hours; you can't listen to more music than before. Unlimited supply, limited demand.
rightbyte · 3 years ago
> > Spotify basically killed any money coming from the physical distribution - Worse than piracy, which was inevitable too at the time

> Spotify adopted the economics of piracy and stamped them with the false veneer of legitimacy.

As a side note, in the beginning Spotify used pirated music off The Pirate Bay without asking for permission from the copyright holders.

radley · 3 years ago
Everyone overlooks the fact that it will still take someone (e.g. a graphic artist) to produce great AI imagery.

First, AI generated art is random and disposable. Yes, you'll get a great image that you can use once, but then what? You can't build a campaign on it.

Second, AI generated art can't be copyrighted, so knockoff competitors are free to use your AI-generated marketing images.

At the very least, you can seed the AI with a paid graphic artist's work (seed-based AI images can be copyrighted). But that artist will do it better than your unpaid intern.

karaterobot · 3 years ago
Mmm, I don't know about this. At the very least AI lowers the bar for how talented a graphic artist needs to be to produce professional work, which means it'll be easier to undercut them, which means it'll get much harder to make a living as a graphic designer. It amounts to the same thing as killing off the profession, as seen from the perspective of someone in the profession as opposed to someone without skin in the game. It's like saying push-button elevators didn't hurt the profession of elevator operator, because somebody's still got to push those buttons.
michaelmrose · 3 years ago
> Second, AI generated art can't be copyrighted, so knockoff competitors are free to use your AI-generated marketing images.

No. First off, trademarks exist. Second, what the courts found is that work done solely by the machine couldn't be treated as a work for hire, copyrighted by the machine and assigned to the operator. There is no reason to believe that such work couldn't be treated as directly copyrighted by the human operator who has creative input, nor is the matter of the images used to train the model truly settled.

>First, AI generated art is random and disposable. Yes, you'll get a great image that you can use once, but then what? You can't build a campaign on it.

You can already get variations on a theme and text-driven modification, e.g. "make the blank a blank" or "make the blank blanker".

ranguna · 3 years ago
> Yes, you'll get a great image that you can use once, but then what? You can't build a campaign on it.

Check out ComfyUI; it has an incredible amount of composability that allows you to generate new images based on others. Like image-to-image, but on steroids.

For example, you can generate a character sheet and use it to generate the same characters on different poses using controlnet. Or you can have a base image for an object and use that to generate the same object from different angles and/or different colours etc.
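For readers unfamiliar with ComfyUI: its workflows are node graphs that can be exported as JSON and driven via its API. Below is a rough sketch of what a minimal img2img graph looks like in that API format; the node class names are real ComfyUI node types, but the wiring, filenames, prompts, and sampler settings here are illustrative (decode/save nodes omitted):

```json
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "some_model.safetensors"}},
  "2": {"class_type": "LoadImage",
        "inputs": {"image": "character_sheet.png"}},
  "3": {"class_type": "VAEEncode",
        "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
  "4": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "same character, three-quarter view", "clip": ["1", 1]}},
  "5": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "blurry, extra limbs", "clip": ["1", 1]}},
  "6": {"class_type": "KSampler",
        "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                   "latent_image": ["3", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                   "sampler_name": "euler", "scheduler": "normal", "denoise": 0.55}}
}
```

The `denoise` value below 1.0 is what makes this img2img rather than generation from scratch; a ControlNet node would slot in between the text conditioning and the sampler for the pose-control workflow described above.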

charlieyu1 · 3 years ago
I agree with you, but the main problem is that illustrators are under-appreciated. We are in a world where managers with no technical knowledge have too much power and are stealing paychecks.
prox · 3 years ago
Also, people cannot judge great art or imagery unless they have had the training. The average person? Nope. You can tell what you LIKE, but that's not the same.
The_Blade · 3 years ago
sewing machine
soligern · 3 years ago
AI generated art may be disposable, but it certainly is very, very good. Midjourney makes plenty of impeccable art and photorealistic images that have no flaws. Also, even if there are flaws, a week with some YouTube videos can teach anyone how to fix them; you don't need someone with five years of deep experience.
d1sxeyes · 3 years ago
Recorded music was going this way whether it was Spotify or someone else that drove the final nail into the coffin.

I remember when I was a child, on a Sunday afternoon, my dad would put on an album and listen to it. Just listen. Very, very few people do that now.

Now we have a lot of demand for “incidental music”. Something you listen to while you do something else. Driving, reading, surfing the net, coding, cleaning…

There was a fundamental shift in how people consumed music that started around the time music became portable. Spotify won the race, but if it hadn’t been Spotify, it would have been someone else.

btilly · 3 years ago
I'm curious. When do you think music became portable?

The transistor radio was invented in the 1950s, and it quickly became a source of background music as life went on.

Also incidental music is not a new thing. Tavern musicians as background music have been around for centuries. It is hard to prove, but likely for thousands of years.

throwaway290 · 3 years ago
No no. Incidental music and "music you listen to while doing something" are not the same.

Listening to incidental music all the time devalues music. And we do it not because we wanted it, but because Spotify, Apple Music, etc. promote it. Until then, "just play random stuff that this ML thinks is similar" was not a thing. But subscriptions make them more money than just letting you buy albums and stream what you bought. I wish more artists hadn't signed up for this, but unfortunately the big labels did.

But you can listen to non incidental music that you have specifically chosen while doing something. Even your dad could be doing something while listening to music (thinking).

_glass · 3 years ago
A positive effect for performers is that people still want to go to concerts, but fewer and fewer people know how to play an instrument. The market is really much better now than it was even 10 years ago.
SnowdustDev · 3 years ago
> Spotify basically killed any money coming from the physical distribution - Worse than piracy

Any sources for this?

I'm of the impression physical distribution is on the rise compared to the earlier days of digital music. This has nothing to do with Spotify, and everything to do with the digitization of music itself.

Anecdotally many people I know now purchase merchandise and media as a way to support an artist they like, rather than listen to the music they make in a physical format.

notefaker · 3 years ago
This is factually incorrect. If you own your master recordings, you stand to make $3,500 to $5,500 per million streams on Spotify. Apple Music and Tidal pay even better. This is why Taylor Swift is re-recording her entire Big Machine Records catalogue. While Spotify did shift consumers away from buying singles and albums as individual items, they also opened a new revenue source for independent artists.
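The arithmetic behind those figures can be sketched in Python; the per-million rates are the numbers quoted above, not official Spotify payouts, and the function name is just for illustration:

```python
# Revenue in dollars for a given stream count at a quoted rate per million streams.
def streaming_revenue(streams: int, rate_per_million: float) -> float:
    return streams * rate_per_million / 1_000_000

low = streaming_revenue(1_000_000, 3_500)    # 3500.0 dollars per million streams
high = streaming_revenue(1_000_000, 5_500)   # 5500.0 dollars per million streams
per_stream_low = streaming_revenue(1, 3_500)  # about a third of a cent per stream
print(low, high, per_stream_low)
```

Put another way, each stream is worth roughly a third to a half of a cent at those rates, which is why volume matters so much.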
easyThrowaway · 3 years ago
Can you point me to a current-day independent artist who hasn't been signed to a label and is pulling this amount of money just from streaming?

If you're already big enough that, say, XL Recordings can ask you to make a record without taking rights to the master, I wouldn't count you as a good example of an "indie artist".

deskamess · 3 years ago
> "the AI did this picture, so we don't have to pay you."

If the court rulings hold and AI works cannot be copyrighted, then we end users don't have to pay for them either... but that seems like a race to the bottom. Like the end of a craft. Why would anyone create art if it has no/minimal downstream value?

Artists need to band together in some sort of union, refuse to do art with that AI clause, or perhaps only do art with a no-AI-use clause. And have an allowed-AI clause that is prohibitively expensive (like in the multiple hundreds of millions per piece). That way "accidents that happen" have a prescribed recovery amount, plus other requirements like pulling the generated artwork. "Hey, we understand it may have been accidental, but here is the bill."

staunton · 3 years ago
It's not the end of a craft; it just means that the prestige of "made by a human" will increase even more and be pushed by companies as a means of making money through copyright. That means that the few artists at the top will be rich while the niche between "art" and "craft" disappears. Professions involving visual art will become like the music business.
t0bia_s · 3 years ago
Imagine if Spotify, rather than paying musicians, just invested in and published AI-generated music. Sounds like a more profitable business to me.

I'm not saying that I agree with this approach.

dsign · 3 years ago
I honestly wonder if people would consume music they know is AI generated. And by "honestly", I mean "I don't really know, but I want to know."

I've been watching videos of Guy Michelmore on YouTube. Not because I will ever write any orchestral music, but because I like his energy and envy his shed. Would I bother if Guy Michelmore were an AI?

yellow_postit · 3 years ago
And absent some major technology changes, Spotify in your example has no way to do credit assignment back to the training set for any attempt at royalties, should they be so inclined.
mr_toad · 3 years ago
Without copyright protection anyone could copy their entire library and set up a rival streaming service. It certainly wouldn’t be worth much investing in the AI part of the business.
easyThrowaway · 3 years ago
Also, if you're wondering "Well, I could get better terms for my art": like I said, when Spotify arrived and you were signed to a label, you HAD to sign the part that said "Yes, you can put my music online on Spotify and I will get paid peanuts", or else, unless you were Madonna or Taylor Swift.

Or, sure, you can also terminate your record deal. Hope you have 500 grand lying around just for that.

Frankly I don't see it ending much better for visual artists.

munk-a · 3 years ago
There was another path here - collective bargaining. When small individuals are bullied by large corporations it's because those corporations want something from the small individuals... they certainly don't care about one or two small artists walking away from the platform - but if artists can organize and bargain as a group they can ensure a fair outcome.

I think the modern world has become too complacent in terms of labor organization - the time of plenty left a lot of people content to take whatever was given to them because there was such a glut of excess that it was freely shared. That sharing is coming to an end and we're returning to a time when we need to demand fair and equitable treatment.

biogene · 3 years ago
Whenever there are implications for people's livelihoods, it's always a serious matter, but I hope people are able to transition to other roles.

I think GenAI will commoditize the mundane and "typical", and heavily push people toward creating something extraordinarily unique. I think the same pressure exists even without AI, when as a creator you have to stand out amongst the sea of people vying for attention.

I believe GenAI can be useful in this way, too. For example, if I'm an artist looking for inspiration, I can have a GenAI tool create some "random" works to draw inspiration from.

Vt71fcAqt7 · 3 years ago
The music industry has always had a long tail. It's very much a go-big-or-go-home industry. Do you have any data on revenue change for small artists before and after Spotify?
easyThrowaway · 3 years ago
Sorry, no hard data. Mostly the perception of the industry at the time. Lots of tales of people quitting, moving, or going on "indefinite hiatus".

TBH "fly or die" was way more common on the US side of the industry. And even in the USA, from the late '90s to the end of the 2010s, it was somewhat doable, if you were skilled enough, to make a living solo (we're talking 60-80K/year max) as a "jobber" opening for bigger acts at local venues.

Like, the entire NYC indie scene got its start from this premise. If you get a chance, have a look at "Meet Me in the Bathroom"[1], a documentary specifically about this timeframe.

[1]https://www.youtube.com/watch?v=n71c1Szjv08&themeRefresh=1

wslh · 3 years ago
The Spotify example is similar to the Google impact: the last mile is the search engine UI that controls your access to content. Spotify is another such UI, as are streaming services generally.

It seems like a natural iteration in the ordering of complex systems. Beyond legal regulation, it would be great to start thinking about new solutions, if any exist.

pud · 3 years ago
> “Spotify killed revenue flow for anyone outside of the publishers and big artists/ players. Spotify basically killed any money coming from the physical distribution”

To the contrary, physical distribution was only available to big artists/players. There was mostly no way for independent artists to get into the record stores.

Streaming (via distributors like DistroKid) made it possible for millions of independent artists to make money from their music.

Source: DistroKid founder here. Hi.

gamblor956 · 3 years ago
> AI will have on artists the same impact that Spotify had on the music industry that is, it will kill any revenue flow for anyone outside of the publishers and big artists/players.

> then resell the output for $$$ on something like Splice

This is silly. The Copyright Office and the courts have repeatedly stated that AI-generated media is not subject to copyright protection, so there are no licensing revenue opportunities for the big publishers/artists/whatever. This means: AI-generated content is not protected by copyright, so anyone can use a piece of AI-generated art however they want without a license, and unless the law changes, AI has no value to the content industries.

EDIT: Also, the Copyright Office has noted that the use of AI-generated content in a work means the entire work will be presumed AI-generated, except for the portions the content owner can demonstrate were generated by humans. The backend costs of maintaining AI-supplemented works will be almost as expensive and burdensome as the costs associated with patents.

Also, I think people on HN have a very glorified view of how much money musicians make from streaming or CD/album sales: basically zilch, unless they're popular enough to be on repeat on the radio. Most musicians made their money from performing: generally a little from ticket sales or venue incentives (like a % of booze sales), but the real money for performers was from sales of band merch, which is why it gets pushed so heavily.

> At the end of the day the model itself is going to be basically irrelevant, while knowing whose works were actually used to train it being the truly differentiating feature.

Yes, by lawyers, when they sue the owners of the AI model for copyright infringement, because this would not be a use protected by the fair use doctrine. This will actually make human-generated works more valuable, because every work used to generate an AI work would then be worth at least $75,000, even if its market value would be significantly less (or even commercially worthless) today.

Due to the costs associated with licensing of human works, if AI-content becomes a thing, it will probably be more expensive than hiring a human to do the same thing, because the model will have to account for the cost of paying a license fee for every work that was incorporated into a specific output.
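As a back-of-the-envelope illustration of that last claim (all numbers here are invented, not a real pricing model): even a trivial per-work fee, multiplied across the works that contribute to a single output, adds up fast.

```python
# Hypothetical per-output licensing cost, computed in integer cents to avoid
# float rounding. Both input figures are invented for illustration.
def per_output_cost_cents(works_incorporated: int, fee_cents_per_work: int) -> int:
    return works_incorporated * fee_cents_per_work

# One cent per contributing work, across 100,000 contributing works:
cost_cents = per_output_cost_cents(100_000, 1)
print(cost_cents / 100)  # 1000.0 dollars per generated image
```

At that made-up rate, a single generated image would already cost more than many commissioned illustrations, which is the comment's point about licensing costs.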

adamc · 3 years ago
Spotify has been a disaster, but unless the artists walk away (very hard to do), I don't see our political system as caring enough to do anything about it.
franl · 3 years ago
> Another opinion popular with no one: AI will have on artists the same impact that Spotify had on the music industry that is, it will kill any revenue flow for anyone outside of the publishers and big artists/players.

Maybe I’m misunderstanding you, but how much money do you think the 7500 creators on Spotify making $100k+ [1] would be making without Spotify or other streaming platforms? My guess is closer to zero than 100k.

Also, 0.09 percent of 8 million creators making $100k+ [1] sounds horrible, but my guess is that should be taken with a grain of salt. How many of those 8 million registered but uploaded nothing? How many uploaded once or twice? How many uploaded and did ZERO promo of themselves? How many are just plain terrible musicians?

A number of years ago when I stumbled on him, Russ was pulling in a few hundred thousand per year from streaming. Looks like he’s making 100k per week as of a couple of years ago [2]. Yes, he’s probably an outlier. But he works his butt off on his craft, handles production and writing himself, and markets himself well.

Headlines like “Big tech and AI destroying the indie music industry” get more clicks and attention than “Streaming platforms provide income where once there was none” so shrug.

[1] https://www.digitalmusicnews.com/2021/02/24/spotify-artist-e... [2] https://twitter.com/russdiemon/status/1325853093074923520
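The 0.09 percent figure above checks out against the comment's own numbers (7,500 earners out of the quoted 8 million creators):

```python
# Share of creators earning $100k+, using the figures quoted in the comment.
creators_total = 8_000_000
creators_100k = 7_500
share = creators_100k / creators_total
print(round(share * 100, 2))  # 0.09 (percent), matching the quoted figure
```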

franl · 3 years ago
Found the video where Russ says he was making around $100k per month before any label involvement: https://youtu.be/OebNTkTfzHU. I know this is the land of "that's just survivorship bias!", and I certainly agree that luck and timing play a massive role in billionaire-level startup success, but this guy in particular is a few orders of magnitude of success below that (even if he's still an outlier). I'm sure he benefited from luck and timing too, but he was also methodical about creating music non-stop, getting better at production, rapping, and writing, and marketing himself.

My point being: show me someone who has worked as hard and as smart as he has, who picked a niche of music with large audiences (aka high Total Addressable Market), and who released as much content as he did, and I will show you someone having non-trivial streaming success. Again, maybe not $1M+ annually, but something material beyond just scraping by. That doesn't mean Big Tech is absolved of sin in how it distributes profits or exerts monopoly control or whatever, but I think we often overlook the opportunities these networks have provided for people who would otherwise live in obscurity with no audience whatsoever.
bandrami · 3 years ago
One thing that will really matter is that the output of AI cannot be copyrighted. If producers really go all-in on generation we're going to rapidly see a situation where huge amounts of material will enter the public domain all at once, and we don't really have a precedent for what happens then.
karaterobot · 3 years ago
No offense, I don't think this is an unpopular opinion. The comparison to Spotify is apt though.
Andaith · 3 years ago
A fun solution to this would be to remove copyright protections for anything generated by an AI. Now nobody profits!
soligern · 3 years ago
A human looking at someone's artwork is "training a model". It's bullshit and anti-progressive to say that someone or something creating derivative works is stealing.
Retric · 3 years ago
The second half of "derivative works" is creating an imitation of the original, not just looking at it. And this isn't some grey area.

Even just training the model requires someone to copy the original work from somewhere and store it into a database to use to train the model. If they don’t have permission to make that copy then it’s commercial copyright infringement independent of anything done by the model after that point.

Thus the companies themselves are frequently breaking the law even if nobody ever uses these systems.

leeoniya · 3 years ago
not stealing the work, just stealing the revenue...for very little investment.

> A human looking at someone's artwork is "training a model"

sure, except that model often takes months or years to train (wall clock years, not 1000-core cpu-years). and the end result is not a human that can stamp out new/competing artwork every 100ms.

for any kind of creative/performance/art work, these are watershed times. us coders are not super far behind.

orbital-decay · 3 years ago
Posts like this nearly always assume that text-to-image and "prompt engineering" are being used, usually due to a lack of experience with these models. That is categorically not the way to do it outside of having fun. The way it's done for predictability and control looks much more like "draw the rest of the owl, in a manner similar to my other hand-drawn owl", combined with photobashing and manual fixing/compositing. It's a hybrid area similar to 3D CGI that requires both artistic and technical skills if you want to create something non-boring.

This has nothing to do with the model's poor understanding of natural language, and will not change until we have something that could reasonably pass for AGI, and likely not even then. Your text prompts simply don't have enough semantic capacity.

jefftk · 3 years ago
You might be interested in the "Commercial illustrators will keep their jobs, but will mostly need to learn to use AI as a part of their workflow to maintain a higher pace of work" section of the article, which gets into this more.
orbital-decay · 3 years ago
You're right! I've stumbled upon the prompt engineering part and rolled my eyes, which was clearly too soon.
makeitdouble · 3 years ago
The more plausible evolution is that the people drawing the base concept are not "commercial illustrators", nor do they have art training.

If a magazine editor with run-of-the-mill drawing skill can feed the prompt a sketch with stick figures and object outlines, and get back a good-enough rendition with improved composition, the job of the illustrator will become a side job of that editor.

I'm partial to the argument that being able to fix the generated image in post is a valuable skill, but on that part we already have decades of progress and people are usually more comfortable with editing tools than drawing tools.

danenania · 3 years ago
"This has nothing to do with the model's poor understanding of natural language, and will not change until we have something that could reasonably pass for AGI, and likely not even then. Your text prompts simply don't have enough semantic capacity."

I don't think it's going to take AGI to get to this point. It's 'just' going to take a top-tier model adding robust multi-modal input imho. A detailed prompt plus a bunch of examples of the style you're looking for seems like it would be enough.

That's not to say it isn't really hard, but it doesn't seem like it requires fundamental innovations to do this. The building blocks that are needed already exist.

jwells89 · 3 years ago
The biggest problem I see with LLM-generated imagery is a near total inability to get details right, which makes perfect sense when one considers how they work.

LLMs pick out patterns in the data they're trained on and then regurgitate them. This works great for broad strokes, because those have relatively little variance between training pieces and have distinct visual signatures that act as anchors.

Details, on the other hand, differ dramatically between pieces and have no such consistent visual anchor. Take limbs, which are notoriously problematic for LLMs: there are so many different ways that arms, legs, and especially hands and fingers can look across their innumerable possible articulations, positions relative to the rest of the body, clothing, objects obscuring them, and so on. The LLM, not actually understanding the subject matter, is predictably terrible at drawing the connections between all of these disparate states and struggles to render them without human guidance.

You see this effect in other fine details, too. Jewelry, chain-link fences, fishing nets, chainmail, lace, etc are all near-guaranteed disasters for these things.

orbital-decay · 3 years ago
There are two problems with this: a) natural language is inherently poor at giving artistic direction compared to higher-order means like sketching and references, even with a human on the other end of the wire, and b) to create something conceptually appealing/novel, the model has to have a much better conceptualizing ability than is currently possible with the best LLMs, and those already need some mighty hardware to run. Besides, tweaking the prompt will probably never be stable, partly for the reasons outlined in the OP; although you could optimize for that, I guess.

That said, better understanding is always welcome. DeepFloyd IF tried pairing a full-fledged transformer with a diffusion part (albeit with only 11B parameters). It improved the understanding of complex prompts like "koi fish doing a handstand on a skateboard", but it also pushed the hardware requirements way up and didn't solve the fundamental issues above.

florbo · 3 years ago
The post actually goes into a bit of detail on that process.
yieldcrv · 3 years ago
This was essentially my post

It's like you take your AI to school, or do a Matrix-style data upload into your AI so it's up to speed on a new concept.

Professionals will learn how to do that, and the market will cater to people who want to do that.

norswap · 3 years ago
> Your text prompts simply don't have enough semantic capacity.

Mostly, current tools are abysmal at maxing out that semantic capacity. Midjourney is great at generating things that look good, but terrible at piecing scenes together.

Recent example I tried: a robot playing Magic: The Gathering, seated across from a human.

Even getting the human into the picture is a challenge, but then the model doesn't know enough about MTG (it correctly pattern-matches to "board game" or "card game", but no further).

Some generated pictures are much better than others, and it would be great to take, e.g., the table setup from one picture and the robot from another, but doing so is not really possible atm ("blend" doesn't work for that).

I have no doubt this will improve, but I'm wondering if there's something underlying this that could be a more general limitation? Maybe simply not enough data (a Google image search for Magic: The Gathering is also pretty disappointing).

Another example: a glowing blue <company logo> carved into a stone monolith. It sometimes (rarely) got the logo to be carved, but it was never glowing blue (usually the whole monolith, or part of it, glowed instead).

orbital-decay · 3 years ago
Sorry, I noticed your post too late; not sure you'll read this.

The reason this happens is that the models are far too small (parameter-count-wise) and the prompt-understanding part is simple: usually it's either CLIP or, in the best case, a small and dumb transformer. (But regardless of current capabilities, text is just not a great tool for expressing artistic intent.)

Generally, what you want can be done by giving the model higher-order hints like sketches and pose skeletons; see the controlnets for Stable Diffusion, for example. The overall idea is to use a custom model, created specifically to guide the diffusion model, based on the non-textual input. The problem is that Midjourney can't do this; you have to use SD.

Another thing is photobashing/compositing. Avoid fitting the entire composition into one generation; it will make the model lose track of your scene. Using multiple passes helps a lot. It's best to inpaint objects, or img2img them based on non-textual guidance, to add objects and details in a specific spot.

Check my other comment for an example of a complex scene workflow. https://news.ycombinator.com/item?id=37140233

thelazyone · 3 years ago
Well put. Big fan of the "Commercial illustrators will keep their jobs, but will mostly need to learn to use AI as a part of their workflow to maintain a higher pace of work" part.

I'm a sometimes-illustrator (though my style is pretty far from what generative AI is doing), and I recently published a 1.1 of a game manual which uses Midjourney images. I'm currently investing in a "proper" illustrator because the MDJ images lack character, but it's also true that a few months from now this might change: I'll stick with the illustrator to get more consistency in the images, but the AI could probably do a fancier job by then.

Besides, the "things will change in 2 months" point is a good one, but it's been made for a year and a half now and things haven't really changed yet. Sure, the quality of the produced images has improved, but not in a qualitative way.

Side note: the civitai link leads to https://sambleckley.com/writing/civitai.com/images which is a dead link.

rcarr · 3 years ago
> I'm a sometimes-illustrator (but my style is pretty far from what Generative AI is doing)

Why not train your own personal AI on your artwork? Corridor Digital did this in their latest attempt to automate animation: they hired an illustrator to create an animation style for them, then trained the AI on those drawings.

Link: https://youtu.be/FQ6z90MuURM?t=329
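Personal-style training is usually done with DreamBooth or LoRA fine-tuning rather than training from scratch. The core trick in LoRA is that each pretrained weight matrix stays frozen and only a low-rank update is learned, which is why a few dozen images can be enough; a minimal numpy sketch of the update itself (shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4                 # rank << d => few trainable params
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, init 0

# Adapted layer: y = (W + B @ A) x. At init B = 0, so the model behaves
# exactly like the pretrained one; training moves only A and B.
x = rng.standard_normal(d_in)
y = (W + B @ A) @ x

trainable = A.size + B.size   # 4*64 + 64*4 = 512 parameters
full = W.size                 # 64*64 = 4096 parameters
```

Because only the small A and B matrices are trained, a personal style fits in megabytes and trains on a consumer GPU, which is what makes "train it on your own drawings" practical at all.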

woolion · 3 years ago
I've actually done it [0]. I'd like to have an AI assistant whose results I could use directly, but the results were really terrible, mostly laughably so. I think my style was too far from what the models handled correctly at the time, and given that, I didn't have enough training images, although I also tried a model that was better at handling stylised 2D. I'd like it to work, but I don't think it's viable for most people.

[0] https://woolion.art/2022/11/16/SDDB.html

toasted-subs · 3 years ago
Seems kind of shady imo. I know business is business, but that seems a bit too mean for my tastes.
thelazyone · 3 years ago
That's an interesting take! Currently I see two reasons why I wouldn't do that:

1 - Since I'm either working for game companies or on my own project (https://fsd-wargame.com/), using AI-generated things is kinda damaging in terms of marketing. You never know when some uproar could arise against a project/game based solely on more or less petty outcries against AI. I generally sympathize with artists, but sometimes it's just whiny.

2 - My illustrations are line art and cartography (https://www.artstation.com/thelazyone), which are not the easiest for AI to handle. I'm sure that with enough effort there's gonna be a good model, but I haven't seen any so far.

atleastoptimal · 3 years ago
The question is: since commercial illustrators can be more efficient using AI, will the total number of jobs in the space shrink, or will expectations for commercial illustration rise, increasing the workload and keeping the number of jobs the same?
satvikpendem · 3 years ago
In all of human history, work has always increased. This is akin to Parkinson's Law, where work expands to fill the time (and now resources) available.
canvascritic · 3 years ago
My partner and I run a handful of small internet side businesses. One of our content-driven D2C businesses relied heavily on bespoke illustrations for our display ad creatives. We found that our CTR was decent, pretty average, but the CPC was killing us and ROAS really sucked.

Several months ago we decided to A/B test SD against our usual illustrators. In our case the results were pretty dramatic: CTR shot up by almost 20% and CVR showed a consistent uptick. I don't agree with the blog post's claim that AI-generated images work best in businesses where the content doesn't actually matter; this particular venture is a fantastic counterexample. The AI-generated images seemed to resonate more with our target audience, since we could achieve much more granular personalization at lower cost than before. Not only did it reduce CPA significantly, but the tight control we had over creative variations meant we could optimize in real time based on audience segmentation.
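A relative CTR lift like this is only meaningful if the sample sizes support it; a quick two-proportion z-test (pure Python, with illustrative numbers since the actual impression and click counts aren't given) is enough to sanity-check an A/B result:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic and two-sided p-value for CTR(A) vs CTR(B),
    using the pooled-proportion normal approximation."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 10,000 impressions per arm, 2.0% vs 2.4% CTR
# (a ~20% relative lift, like the one described above).
z, p = two_proportion_z(200, 10_000, 240, 10_000)
```

With these made-up volumes the lift sits right around the conventional significance threshold, which is exactly why running the raw counts matters before declaring a winner.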

Not to mention that our time-to-market for launching new campaigns was cut in half. No more back-and-forths over design nuances, missed deadlines, or creative blocks.

And I do feel a bit mixed about the diminishing role of human touch in creative processes. But from a purely growth-hacking POV, this was a gamechanger, and we have the numbers to prove it.

Overall I think this is a net win, especially because I don't think this needs to be the end of the road for human illustrators; it will force them to adapt and bring more sensitivity to their clients' needs. It makes no sense for even a content business to face so much friction in procuring creatives, and this forces more consideration of our needs.

Anywho there's efficiency, and then there's soul. Hats off to the robots for (mostly) nailing the former, and sometimes surprising with the latter.

thwarted · 3 years ago
> no more back-and-forths over design nuances, missed deadlines, or creative blocks.

This evoked, for me, the "can I get the icon in cornflower blue" scene in Fight Club.

How much of this reduction in back-and-forth is down to the immediate, interactive response (dealing with fewer humans), and how much to a level of trust in, and delegation to, the machine? "A machine generated this icon based on my description; there's no need for me to question its choice of colors." It's really the classic problem of treating machines as infallible and more expert than humans.

It's probably some of both.

chefandy · 3 years ago
I think your usage of "matter" and theirs is different. It's furniture. Furniture "matters" in a restaurant, and having the wrong furniture can hurt your business, but compared to the food, it's essentially inconsequential.

There's a spectrum of how much furniture matters in any given place, ranging from short-stay waiting areas to architects' offices, and commercial art is no different. If that image were truly inconsequential, you wouldn't need one there. Non-informational graphics on most non-professionally designed PowerPoint decks likely matter less. I'd say there's about a zero percent chance of a two-page spread opening a feature article in a magazine being AI-generated unless it's an article about AI-generated images, and even then, it probably took professionals longer to massage it into shape than all the rest combined. Specificity and per-pixel control are just so important in professional graphics workflows, and despite what a huge stack of people who aren't professional designers will tell you, these tools are simply wrong for the job. They're fundamentally the wrong interface. Maybe Adobe, or another player who knows what the industry needs, will nail it, but it won't look like Midjourney, that's for sure.

satvikpendem · 3 years ago
What is your type of business and what kinds of images did you generate? Curious as I was thinking of doing something similar for mine.
thebooktocome · 3 years ago
> Overall I think this is a net win, especially because I don't think this needs to be the end of the road for human illustrators, but this will force them to adapt and bring more sensitivity to the needs of their clients.

The advantages of AI that you crow over simply can’t be met by any human professional artist. A human can’t do hundreds of revisions profitably. There’s increased “sensitivity” and then there’s needing to read the client’s mind.

If you think this isn’t a death knell for human illustrators in this particular market, you’re deluding yourself.

jononor · 3 years ago
A professional artist that is proficient in the latest generative image models can increase their ability to attend to client needs.
arvidkahl · 3 years ago
I've tried letting AI write my articles. It was horrible. I tried ignoring AI-powered tools (such as grammar checkers, summarizers, rewriters, speech-to-text apps), and the writing process felt sluggish.

The middle ground is what works best for me. I use generative AI exclusively mid-process, but neither for input (ideas) nor output (actual drafts).

Here's how I write:

- I source my ideas from contemplation or conversations on social media. Topics discussed there have at least some pre-validated relevance.

- I sit down for ten minutes and dictate my thoughts into a tool like AudioPen (no affiliation, just a fan), which summarizes my 10 minutes in 5 or 6 paragraphs. THIS is the AI step. The tool suggests a few paragraph structures that I cycle through until I find a good one.

- From there, I write my draft, following that outline. No more AI tools here other than grammar checking at the end.

AI is a great writing partner. It's a horrible writer.

earthboundkid · 3 years ago
That’s been my experience so far too. It’s good at suggesting ways to structure things and pointing out things you may have missed, but it is a terrible writer ATM.
satvikpendem · 3 years ago
> Commercial illustrators will keep their jobs, but will mostly need to learn to use AI as a part of their workflow to maintain a higher pace of work.

This is exactly what I've found to be the case too. People outside of this AI media generation community still think it's entering some text and getting some output. In reality, there are entire workflows constructed to get the exact type of image one wants.

Look at: https://old.reddit.com/r/StableDiffusion/comments/14ye2eg/co...

The second image is the output image, but the first is even more interesting. It is a node-based interface, more commonly seen in game development tools like Unreal Engine, which has a similar interface [0]. It is akin to hooking up APIs together to get the resultant image. I see the future of image generation as more akin to backend programming than to actually drawing anything, which is to be expected: the actual drawing part is getting automated while the creativity now rests in the workflow itself (at least until we automate the workflow part too, but that's a long way off, since computers can't read minds to know what the user wants).

[0] https://docs.unrealengine.com/5.2/en-US/nodes-in-unreal-engi...
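The "hooking up APIs" framing can be made concrete with a toy node graph: each node is a function, each edge names which node's output feeds which input, and execution is just a walk in dependency order (a pure Python sketch; real ComfyUI-style graphs pass latents and models between nodes instead of strings):

```python
def run_graph(nodes, order):
    """nodes: name -> (fn, [input node names]); order: a topological
    ordering of the names. Each node's output is stored and passed to
    its consumers, like wiring sockets in a node-graph editor."""
    results = {}
    for name in order:
        fn, inputs = nodes[name]
        results[name] = fn(*(results[i] for i in inputs))
    return results

# Toy pipeline: prompt -> "generate" -> "upscale".
graph = {
    "prompt":   (lambda: "a stone monolith", []),
    "generate": (lambda p: f"image({p})", ["prompt"]),
    "upscale":  (lambda img: f"upscaled[{img}]", ["generate"]),
}
out = run_graph(graph, ["prompt", "generate", "upscale"])
```

Swapping a node or rerouting an edge changes the whole pipeline without touching the other stages, which is exactly what makes these workflows feel like backend programming.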

01100011 · 3 years ago
The post seems very biased towards the now. Stable Diffusion et al. are very successful with a certain technique, but it is foolish to think that this method will simply be improved indefinitely.

Generative "AI" will take many forms. Ultimately it will likely remove much of the "technique" element to creation, depriving artists and content owners of income and relevance.

Will this happen overnight? No. I suspect over the next, say, decade, AI will be a beneficial tool more than a threat.

At some point, I expect generative AI to become multi-sensory (sight, sound, touch). Such systems will work from physical models of subjects/environments to produce novel and accurate representations based on rich descriptions and deep contextual awareness of culture. These systems will not think in pixels but in objects and relationships, which are then simulated, rendered and filtered to match the desires of the users.

I do applaud the efforts of the writers and actors to protect themselves from competition, but I believe it will ultimately be in vain. It will be interesting to watch the legal developments in this space. It may be necessary for future generative systems to provide an audit trail showing how they gained their understanding of the world, to prove no unauthorized training was performed. This merely raises the bar slightly and does not prevent future generative systems from deriving the important relationships by other means, such as being given "clean room", high-level descriptions (perhaps by other automated processes).

For example, while it may be illegal to train an AI to reproduce Harrison Ford using his copyrighted works or even images captured in a public space, I can reduce Harrison Ford to a set of characteristics which can be passed to a generative system to produce something indistinguishable from the real Harrison Ford. If I can document this procedure, I see few ways for the legal system to prevent it, but then again I am no expert in this area.

For what it's worth, I'm not a fan of current "AI". I have found LLMs to be particularly unreliable and mostly useless. I also find most "AI" generated art to be either boring, inaccurate, or in some way not compelling. That said, I think the trend is becoming clearer.

joshstrange · 3 years ago
> Finally, an opinion popular with no one: Commercial illustrators will keep their jobs, but will mostly need to learn to use AI as a part of their workflow to maintain a higher pace of work.

> This doesn’t mean illustrators will stop drawing and become prompt engineers. That will waste an immense amount of training and gain very little. Instead, I foresee illustrators concentrating even more on capturing the core features of an image, letting generative AI fill in details, and then correcting those details as necessary.

I'm not sure why they think this is an opinion popular with no one. This seems like the logical path forward. In the same way, Copilot isn't going to replace me, but it makes certain boilerplate much less painful and avoids the "blank page"/"writer's block" feeling that can sometimes happen when I go to write a function. It's just nicer to start from something and then modify it until I have what I need (even if I end up replacing 80-99% of it).

In the same way, I imagine it would be nice for an artist to see a couple of examples of what their line drawing could become, which could spark some creativity, and then they can do what they want.

simonw · 3 years ago
Right, that opinion is popular with me: I love the idea that commercial illustrators can add generative AI to their toolbox. Those are the illustrators I most want to work with: people who can produce the best possible images using the whole suite of tools available to them.