bredren · 3 months ago
This article seems to be three to six months past due. As in the insights are late.

>One animator who asked to remain anonymous described a costume designer generating concept images with AI, then hiring an illustrator to redraw them — cleaning the fingerprints, so to speak. “They’ll functionally launder the AI-generated content through an artist,” the animator said.

This seems obvious to me.

I’ve drawn birthday cards for kids where I first use gen AI to establish concepts based on the person’s interests and age.

I’ll get several takes quickly but my reproduction is still an original and appreciated work.

If the source of the idea cheapens the work I put into it with pencils and time, I’m not sure what to say.

> “If you’re a storyboard artist,” one studio executive said, “you’re out of business. That’s over. Because the director can say to AI, ‘Here’s the script. Storyboard this for me. Now change the angle and give me another storyboard.’ Within an hour, you’ve got 12 different versions of it.” He added, however, if that same artist became proficient at prompting generative-AI tools, “he’s got a big job.”

This sounds eerily similar to the messaging around SWE.

I do not see a way past this: one must rise past prompting and into orchestration.

pseudalopex · 3 months ago
A prediction does not make a report late. And many things obvious to some people are not obvious to a general audience.

> If the source of the idea cheapens the work I put into it with pencils and time, I’m not sure what to say.

This is a straw man. The idea is part of the product.

> This sounds eerily similar to the messaging around SWE.

Yes. This similarity included how the executive's imagination and the individual contributor's experience differed.

bredren · 3 months ago
> This is a straw man. The idea is part of the product.

It is part of the product, but I'm not sure I agree it's a straw man. (Side note: I just realized the "…absolutely no one…" meme labels this fallacy!)

In music, few artists can count themselves as capable songwriters, producers, instrumentalists, vocalists, recording artists, and live performers all at once.

Yet the front page of Rolling Stone is generally reserved for those on the tail end of the full stack of talent described above.

To continue the parallel, in SWE engineers are generally compensated better than non-technical product folks.

jjulius · 3 months ago
>... my reproduction is still an original...

I don't know how accurate that is.

simonw · 3 months ago
The Lord of the Rings: The Return of the King back in 2003 used early AI VFX software MASSIVE to animate thousands of soldiers in battle: https://en.wikipedia.org/wiki/MASSIVE_(software) - I don't think that was controversial at the time.

According to that Wikipedia page MASSIVE was used for Avengers: Endgame, so it's had about a 20 year run at this point.

101008 · 3 months ago
The problem is not AI per se (which is only a mix of algorithms). The problem is that this new wave of AI is trained on proprietary content, and the owners/creators didn't allow it in the first place.

If this AI worked without training, no one would say anything.

brookst · 3 months ago
> If this AI worked without training, no one would say anything.

I don’t believe that for one second.

People are rightfully scared of professional and economic disruption. OMG training is just a convenient bit of rhetoric to establish the moral high ground. If and when AIs appear that are entirely trained on public domain and synthetic data, there will be some other moral argument.

CuriouslyC · 3 months ago
People would still be griping about how it devalues the hard work artists have put in, "isn't real art" and all the other things. The only difference is the public at large would be telling them to put a sock in it, rather than having some sympathy because of deceptive articles about how big tech is stealing from hardworking artists.
unstablediffusi · 3 months ago
their consent was not required. https://en.wikipedia.org/wiki/Transformative_use

petabytes of training data are transformed into mere gigabytes of model weights. no existing copyright laws are violated. until new laws declare that permission is required, this is a non-argument.

>If this AI worked without training, no one would say anything.

adobe firefly was trained on licensed content, and rest assured, the anti-AI zealots don't give it a pass.

the copyright is just one of the many angles they use to decry the thing that threatens their jobs.

bobxmax · 3 months ago
I don't know how they verify it, but the article claims the model mentioned ("Moonvalley") was trained entirely on clean/licensed data.
sho_hn · 3 months ago
I'd say the comparison points at misunderstanding the current controversy, though I realize you are doing so deliberately to ask "Is it really that different if you think about it?"

But I'll bite. MASSIVE is a crowd simulation solution; the assets that go into the sim are still artist-created. Even in 2003, people were already used to this sort of division of labor. What the new AI tools do is shift the boundary between what artists provide (input parameters and assets) and what the computer does, massively and as a big step change. It's the magnitude of that step change causing the upset.

But there's also another reason that artists are upset, which I think is the one that most tech people don't really understand. Of course industrial-scale art does lean on priors (sample and texture banks, stock images, etc.), but by and large operations still have a sort of point of pride in re-doing things from scratch where possible for a given production rather than re-using existing elements, also because it's understood that the work has so many variables it will come out a little different and add unique flavor to the end product. Artists see generative AI as regurgitation machines, interrupting that ethic of "this was custom-made anew for this work".

This is typically not an idea that software engineers share much. We are comfortable and even advised to re-use existing code as is. At most we consider "I rewrote this myself though I didn't need to" a valuable learning exercise, but not good professional practice (cf. ridicule for NIHS).

This is one of the largest differences between the engineering method and the artist's method. If an artist says "we went out there and recorded all this foley again by hand ourselves for this movie", it's considered better art for it. If a programmer says "I rolled my own crypto for my password manager SaaS", it's considered incredibly poor judgment.

It's a little like convincing someone that a lab-grown gemstone is identical to one dug up at the molecular level, even: Yes, but the particular atoms, functionally identical or not, have a different history to them. To some that matters, and to artists the particulars of the act of creation matters a lot.

I don't think the genie can be put back in the bottle, and most likely we'll all just get used to things, but I think capturing this moment and what it did to communities and trades purely as a form of historical record is somehow valuable. I hope the future history books do the artists' lament justice, because there is certainly something happening to the human condition here.

simonw · 3 months ago
I really like your comparison there between reused footage and reused code, where rolling your own password crypto is seen as a mistake.

There's plenty of reuse culture in movies and entertainment too - the Wilhelm scream, sampling in music - but it's all very carefully licensed and the financial patterns for that are well understood.

bobxmax · 3 months ago
This is just shifting the goalposts though. I remember people making similar arguments in the early days of Photoshop, digital cameras (and what constitutes a "real" photographer), CGI, etc.

I agree the magnitude of the step change is upsetting, though.

mwkaufma · 3 months ago
This is AI in the gamedev sense, not the present-hype sense.
est31 · 3 months ago
Tech companies love to show off that they are using AI, how they are embracing it, etc. Among engineers, there is also a growing community of folks who embrace tools like Cursor, ChatGPT, Gemini, v0, etc.

When it comes to artists, I have less insight but what I see is that they are extremely critical of it and don't like it at all.

It's interesting to see that gap in reactions to AI between artists and tech companies.

taylorius · 3 months ago
Tech people like it because it isn't good enough to completely replace them yet. The sophisticated, coherent architecture of a well designed system is (for now) still beyond the LLMs, so for tech people, it's still just a wonderful tool. But give it another year, and the worm will turn.
dingnuts · 3 months ago
lumberjacks didn't go away when chainsaws were invented; demand for wood rose to meet the falling cost of wood and lumberjacks kept cutting down trees. don't see why it won't be any different for programmers.
ordinaryradical · 3 months ago
I’m an artist and also work in tech. Enjoy using AI for work, no interest in using it for my art.

Using AI for art is an idiotic proposition for me. If I was going to use AI to write my novel, I would literally be robbing myself of the pleasure of making it myself. If you don’t enjoy perfecting the sentence, maybe don’t be a writer?

That’s why there’s a disconnect. I make art for personal fulfillment and the joy of the creative act. To offload that to something that helps me do it “faster” has exactly zero appeal.

dmarcos · 3 months ago
AI will probably enable new workflows and forms of expression. "Old" ways will still likely be around in some form. Photography didn't kill portrait painting or movie theaters.
danielbln · 3 months ago
But using AI for art doesn't have to mean "AI does all the art from start to finish", does it? Somehow the discourse about AI art is always weirdly maximalist.
username223 · 3 months ago
> If I was going to use AI to write my novel, I would literally be robbing myself of the pleasure of making it myself.

The same would be true if I were going to use AI to read it. If we just wanted to trade CliffsNotes around, why bother with novels at all?

Cyber-Leo-Tolstoy types a three-page summary of "War and Peace" into ChatGPT and tells it to generate an 800-page novel. Millions of TikTok-addled students ask ChatGPT to summarize the 800-page novel into three pages (or a five-paragraph essay). What is the point of any of this?

antithesizer · 3 months ago
Or between artists in private and artists in public
mattl · 3 months ago
And then there are the rest of us, indie developers who are building for and want to keep building things for the artists.

We don’t want any of this and are working to build around it.

It’s being really pushed by a lot of the same people who were pushing Web3 and NFTs and blockchain grifts.

jmugan · 3 months ago
I would imagine that if it is shameful among the established players to use AI, what will happen is that entirely new players will come in. For me, it's the story that matters, and if they can tell a better story with AI, then many people will naturally flock to them.
AndrewKemendo · 3 months ago
I would absolutely love being able to create the movies I've always wanted to be made and have them be plausibly good.

I wonder who is making the OSS version of these tools, so you can specify all the hundreds of parts needed to compose a decent framework.

bredren · 3 months ago
The action I see is happening in ComfyUI workflows. That software is progressing rapidly and adapts to whatever SOTA models are available.

Heavy emphasis is on making cutting edge models work with limited local compute.

bobxmax · 3 months ago
Do you know any resources for getting started with ComfyUI? Last time I looked into these tools ages ago it was a complex mess
bobxmax · 3 months ago
Wasn't Stability working on open-sourced models? I wonder what happened to them; I remember some issues with their founder.
throw_m239339 · 3 months ago
IMHO, before the end of the decade you will absolutely be able to generate entire long-form movies just by writing a paragraph in a prompt. And people will not be able to tell the difference.

Hollywood might save money in the short run, but they are doomed to irrelevance in the long run, because you'll have access to the exact same tools as they do.

Is it good or bad? I don't know, it just is...

jplusequalt · 3 months ago
>Is it good or bad?

It's bad. Look at what social media and cellphones have done to society and human attention spans.

There will be a lot of bad shit that will come out of this that won't truly be appreciated until it's already too late to reverse course.

throwaway743 · 3 months ago
Hiding it? Are they supposed to slap on a disclaimer? Feels like one could safely assume they've been using it.

Anyways, AI generated media is gonna lead to hyper-personalized, on-demand, generated media for people to consume. Sure, hollywood will still be around, but once consumer computing power and the models catch up, there are gonna be a ton of people choosing their own worlds than the ones curated by an industry.

aerostable_slug · 3 months ago
I think the really interesting part will be the illusion of choice presented by these systems. You'll think you've got the reins, but really it's a choose-your-own-adventure book that's effectively constraining you to the experience They want you to have.

The only way out of this will be HN types who roll their own, and those will probably suck in comparison to the commercial systems filled with product placement and mindblowing amounts of information harvesting.

yahoozoo · 3 months ago
They have to hide it because of the unions.
sherdil2022 · 3 months ago
Reminds me of: "NO CGI" is really just INVISIBLE CGI

https://m.youtube.com/watch?v=7ttG90raCNo