ink404 · 2 years ago
542458 · 2 years ago
This seems to introduce levels of artifacts that many artists would find unacceptable: https://twitter.com/sini4ka111/status/1748378223291912567

The rumblings I'm hearing are that this a) barely works with last-gen training processes, b) does not work at all with more modern training processes (GPT-4V, LLaVA, even BLIP2 labelling [1]), and c) would not be especially challenging to mitigate even if it becomes more effective and popular. The authors' previous work, Glaze, also does not seem to be very effective despite dramatic proclamations to the contrary, so I think this might be a case of overhyping an academically interesting but real-world-impractical result.

[1]: Courtesy of /u/b3sn0w on Reddit: https://imgur.com/cI7RLAq https://imgur.com/eqe3Dyn https://imgur.com/1BMASL4

kmeisthax · 2 years ago
The screenshots you sent in [1] are inference, not training. You need to get a Nightshaded image into the training set of an image generator in order for this to have any effect. When you give an image to GPT-4V, Stable Diffusion img2img, or anything else, you're not training the AI - the model is completely frozen and does not change at all[0].

I don't know if anyone else is still scraping new images into the generators. I've heard somewhere that OpenAI stopped scraping around 2021 because they're worried about training on the output of their own models[1]. Adobe Firefly claims to have been trained on Adobe Stock images, but we don't know if Adobe has any particular cutoffs of their own[2].

If you want an image that screws up inference - i.e. one that GPT-4V or Stable Diffusion will choke on - you want an adversarial image. I don't know if you can adversarially train on a model you don't have weights for, though I've heard you can generalize adversarial training against multiple independent models to really screw shit up[3].

[0] All the learning capability of text generators comes from the fact that they have a context window; but that only provides a short-term memory of 2048 tokens. They have no other memory capability.

[1] The scenario of what happens when you do this is fancifully called Habsburg AI. The model learns from its own biases, reinforcing them into stronger biases while forgetting everything else.

[2] It'd be particularly ironic if the only thing Nightshade harms is the one AI generator that tried to be even slightly ethical.

[3] At the extremes, these adversarial images fool humans. The study that did this intentionally showed the images only for a short period of time, the idea being that short exposures are akin to a feed-forward neural network with no recurrent computation pathways. If you look at them longer, it's obvious that it's a picture of one thing edited to look like another.
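For intuition, the core trick behind adversarial images can be sketched in a few lines of numpy. This is a toy logistic "model" made up for illustration, using the fast-gradient-sign idea (FGSM); the cross-model ensemble attacks mentioned above are far more involved:

```python
import numpy as np

# Toy illustration of the fast-gradient-sign method (FGSM): nudge an
# input along the sign of the loss gradient to flip a model's output.
# The "model" here is a made-up frozen logistic classifier, not a real
# image network.
rng = np.random.default_rng(0)
w = rng.normal(size=16)  # frozen weights of the toy model
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """p(class 1 | x) under the frozen toy model."""
    return sigmoid(w @ x + b)

# An input the model confidently assigns to class 1.
x = w / np.linalg.norm(w)

# Gradient of the loss for true label 1 w.r.t. the input is (p - 1) * w;
# FGSM pushes the input one L-infinity step along its sign.
grad = (predict(x) - 1.0) * w
eps = 1.0  # exaggerated for this 16-dim toy; real attacks use tiny eps
x_adv = x + eps * np.sign(grad)

print(predict(x))      # confidently above 0.5 ("class 1")
print(predict(x_adv))  # pushed below 0.5 by the perturbation
```

Attacking a model you don't have weights for (the black-box case raised above) is the hard part; this sketch assumes full gradient access.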

scheeseman486 · 2 years ago
Hey, you know what might not be AI-generated post-2021? Almost everything run through Nightshade. So if it gets defeated, which is pretty likely, artists will have effectively tagged their own work for inclusion.
KTibow · 2 years ago
Correct me if I'm wrong, but I understand image generators as relying on auto-labeled images to understand what means what, and the point of this attack is to make the auto-labelers mislabel the image; but as the top-level comment said, it's seemingly not tricking newer auto-labelers.
webmaven · 2 years ago
Even if no new images are being scraped to train the foundation text-to-image models, you can be certain that there is a small horde of folk still scraping to create datasets for training fine-tuned models, LoRAs, Textual Inversions, and all the new hotness training methods still being created each day.
GaggiX · 2 years ago
If it doesn't work during inference, I really doubt it will have any intended effect during training; there is simply too much signal. The added adversarial noise works on the frozen, small proxy model they used (the CLIP image encoder, I think) but not on a larger model trained on a different dataset. If there is any effect during training, it will probably just be the model learning that it can't take shortcuts (the artifacts working on the proxy model showcase gaps in its visual knowledge).

Generative models like text-to-image have an encoder part (explicit or not) that extracts the semantics from the noised image. If the auto-labelers can correctly label the samples, then an encoder trained on both genuine and adversarial images will learn not to take the same shortcuts the proxy model took, making the model more robust. I cannot see an argument for why this should be a negative thing for the model.

ptdn · 2 years ago
The context windows of LLMs are now significantly larger than 2048 tokens, and there are clever ways to auto-populate the context window to remind the model of things.
jerbear4328 · 2 years ago
[3] sounds really interesting - do you have a link?
brucethemoose2 · 2 years ago
Yeah. At worst a simple img2img diffusion step would mitigate this, but just eyeballing the examples, traditional denoisers would probably do the job?

Denoising is probably a good preprocessing step anyway.
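As a toy illustration of why denoising-as-preprocessing blunts pixel-level perturbations, here is a plain-numpy sketch with a 3x3 box blur standing in for a real denoiser (non-local means or an img2img diffusion pass would be much stronger); the "clean" gradient image and noise level are made up for the example:

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter with edge padding; img is a 2D float array."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))   # smooth gradient
noisy = clean + rng.normal(scale=0.1, size=clean.shape)  # added "poison"

denoised = box_blur(noisy)
# Averaging neighbors suppresses high-frequency perturbations while
# barely touching the smooth content, so the denoised error is smaller.
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
```

The same logic is why carefully tuned perturbations tend to be fragile: anything that has to live in the high-frequency band is exactly what generic denoisers remove first.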

achileas · 2 years ago
It’s a common preprocessing step, and I believe that’s how Glaze (this lab’s previous work) was defeated.
pimlottc · 2 years ago
I can’t really see any difference in those images on the Twitter example when viewing it on mobile
vhcr · 2 years ago
The animation when you change images makes it harder to see the difference. I opened the three images each in its own tab, and the differences are more apparent when you flip between them instantly.
fenomas · 2 years ago
At full size it's super obvious - I made a side-by-side:

https://i.imgur.com/I6EQ05g.png

josefx · 2 years ago
Something similar to JPEG artifacts on any surface with a normally smooth color gradient, in some cases rather significant.
0xcde4c3db · 2 years ago
I didn't see it immediately either, but there's a ton of added noise. The most noticeable bit for me was near the standing person's bent elbow, but there's a lot more that becomes obvious when flipping back and forth between browser tabs instead of swiping on Twitter.
Keyframe · 2 years ago
Look at the green drapes to the right, or any large uniformly colored space. It looks similar to bad JPEG artifacts.
pxc · 2 years ago
I don't have great vision, but I can't see it either. They're indistinguishable to me (likewise on mobile).
jquery · 2 years ago
It's really noticeable on desktop, like compressing an 800kb jpeg to 50kb. Maybe on mobile you won't notice, but on desktop the image looks blown out.
milsorgen · 2 years ago
It took me a minute too, but you can see some blocky artifacting by the elbow and in a few spots elsewhere, like the curtain upper left.
charcircuit · 2 years ago
The gradient on the bat has blocks in it instead of being smooth.
gedy · 2 years ago
Maybe it's more about "protecting" images that artists want to publicly share to advertise work, but it's not appropriate for final digital media, etc.
sesm · 2 years ago
In short, anti-AI watermark.
kjs3 · 2 years ago
Seems obvious that the people stealing would be adjusting their process to negate these kinds of countermeasures all the time. I don't see this as an arms race the artists are going to win. Not like the LLM folks can consider actually paying their way...the business plan pretty much has "...by stealing everything we can get our hands on..." in the executive summary.
h0p3 · 2 years ago
Sir /u/b3nsn0w is courteous, `/nod`.
GaryNumanVevo · 2 years ago
The artifacts are a non-issue. Images with Nightshade are intended to be silently scrapped, avoiding human filtering.
minimaxir · 2 years ago
The artifacts are very much an issue for artists who don't want their images damaged in exchange for the mere possibility that AI won't be trained on them.

It's a bad tradeoff.

the8472 · 2 years ago
do you mean scrapped or scraped?
soulofmischief · 2 years ago
> The artifacts are a non-issue.

According to which authority?

gfodor · 2 years ago
Huge market for snake oil here. There is no way that such tools will ever win, given the requirements the art remain viewable to human perception. So even if you made something that worked (which this sounds like it doesn’t), from first principles it will be worked around immediately.

The only real way for artists or anyone really to try to hold back models from training on human outputs is through the law, ie, leveraging state backed violence to deter the things they don’t want. This too won’t be a perfect solution, if anything it will just put more incentives for people to develop decentralized training networks that “launder” the copyright violations that would allow for prosecutions.

All in all it’s a losing battle at a minimum and a stupid battle at worst. We know these models can be created easily and so they will, eventually, since you can’t prevent a computer from observing images you want humans to be able to observe freely.

AJ007 · 2 years ago
The level of the claims, accompanied by enthusiastic reception from a technically illiterate audience, makes it look, smell, and sound like snake oil without much deep investigation.

There is another alternative to the law. Provide your art for private viewing only, and ensure your in person audience does not bring recording devices with them. That may sound absurd, but it's a common practice during activities like having sex.

Gormo · 2 years ago
That doesn't sound like a viable business model. There seems to be a non-trivial bootstrap problem involved -- how do you become well-known enough to attract audiences to private venues in sufficient volume to make a living? -- and would in no way diminish demand for AI-generated artwork which would still continue to draw attention away from you.
wraptile · 2 years ago
The thing is people want the benefits of having their stuff public but not bear the costs. Scraping has been mostly a solved problem especially when it comes to broad crawling. Put it under a login, there, no more AI "stealing" your work.
Art9681 · 2 years ago
This would just create a new market for art paparazzi who would find any and all means to infiltrate such private viewings with futuristic miniature cameras and other sensors, selling the results for a premium. Less than 24 hours later, the files end up on hundreds or thousands of centralized and decentralized servers.

I'm not defending it. Just acknowledging the reality. The next TMZ for private art gatherings is percolating in someone's garage at the moment.

gfodor · 2 years ago
True I can imagine that kind of thing becoming popular.
thfuran · 2 years ago
>There is no way that such tools will ever win, given the requirements the art remain viewable to human perception

On the other hand, the adversarial environment might push models towards a representation more aligned with human perception, which is neat.

aqfamnzc · 2 years ago
Reubend · 2 years ago
> Huge market for snake oil here.

This tool is free, and as far as I can tell it runs locally. If you're not selling anything, and there's no profit motive, then I don't think you can reasonably call it "snake oil".

At worst, it's a waste of time. But nobody's being deceived into purchasing it.

autoexec · 2 years ago
If there is a danger from "snake oil" of this type, it'd be from the other side: artists being intentionally tricked into believing that tools like this mean AI isn't, or won't be, a threat to their copyrights, in order to get them to stop opposing it so strongly, when in fact the tool does nothing to prevent their copyrights from being violated.

I don't think that's the intention of Nightshade, but I wouldn't put it past someone to try it.

Biganon · 2 years ago
There's an academic paper being published.

Snake oil for the sake of getting published is a very real problem.

golol · 2 years ago
Religion is also deceptive snake oil, even if it does not involve profit-driven motivations.
spaceman_2020 · 2 years ago
This is the hard reality. There is no putting this genie back in the bottle.

The only way to be an artist now is to have a unique style of your own, and to never make it online.

hutzlibu · 2 years ago
"and to never make it online."

So then, of course, you also cannot sell your work, as buyers might put it online. And you cannot show your art to big crowds, as some will take pictures and put them online. So... you can become a literal underground artist, where only some may see your work. I think only some will like that.

But I actually disagree; there are plenty of ways to be an artist now, though most should probably think about including AI as a tool if they still want to make money. But with the exception of some superstars, most artists are famously low on money, and AI did not introduce this. (All the professional artists I know, those who went to art school, do not make their income with their art.)

Deleted Comment

jedberg · 2 years ago
Everything old is new again. It's the same thing with any DRM that happens on the client side. As long as it's viewable by humans, someone will figure out a way to feed that into a machine.
honkycat · 2 years ago
"A law, ie, leveraging state backed violence to deter the things they don’t want."

We all know what a law is; you don't need to clarify. It makes your prose less readable.

gfodor · 2 years ago
Other people pointed out they appreciated this prose. It’s easy to forget what exactly people are asking for when they talk about regulating the training of machine learning models.
jMyles · 2 years ago
> leveraging state backed violence to deter the things they don’t want

I just want to say: I really appreciate the stark terms in which you've put this.

The thing that has come to be called "intellectual property" is actually just a threat of violence against people who arrange bytes in a way that challenges power structures.

mihaaly · 2 years ago
I heard that flooding the net with AI-generated art would do much, much more harm to generative AI than this, whatever this is. Yes, this must be some snake oil salesman; those who take it seriously should turn AI's own weapon against AI.
vmirnv · 2 years ago
I'm thinking — is it possible to create something on a global level similar to what they did in Snapchat: some sort of image flickering that would be difficult to parse, but still acceptable for humans?
nihilius · 2 years ago
Sorry, I do not use Snapchat, and googling "Snapchat image flickering" did not turn up a good result. Could you elaborate a bit more or provide me with a link where this is described? Thank you very much. :)
int_19h · 2 years ago
If humans can process it, you can train a model to do the same.
elzbardico · 2 years ago
You don’t need it to be visible. You only need it to be scraped to poison the models. I think that’s the idea.
AlfeG · 2 years ago
My guess is that at some point in time you will not be able to use any generated image or video commercially, because of 100% copyright claims for using parts of copyrighted images. Like YouTube these days, when some random beep matches someone's music...
abrarsami · 2 years ago
It should be like that. I agree
minimaxir · 2 years ago
A few months ago I made a proof-of-concept on how finetuning Stable Diffusion XL on known bad/incoherent images can actually allow it to output "better" images if those images are used as a negative prompt, i.e. specifying a high-dimensional area of the latent space that model generation should stay away from: https://news.ycombinator.com/item?id=37211519

There's a nonzero chance that encouraging the creation of a large dataset of known tampered data can ironically improve generative AI art models by allowing the model to recognize tampered data and allow the training process to work around it.
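For readers unfamiliar with how a negative prompt acts mechanically: it slots into the classifier-free-guidance step, steering the denoiser's prediction away from the unwanted conditioning. A toy numpy sketch, with made-up vectors standing in for a real denoiser's noise predictions and the scale value mirroring a common default:

```python
import numpy as np

# Classifier-free guidance with a negative prompt: the final noise
# prediction is extrapolated from the negative-prompt prediction
# toward the positive-prompt prediction. With no negative prompt,
# pred_neg would be the unconditional prediction; a "known tampered"
# embedding would enter through pred_neg.
def guided_prediction(pred_neg, pred_pos, scale=7.5):
    return pred_neg + scale * (pred_pos - pred_neg)

rng = np.random.default_rng(2)
pred_pos = rng.normal(size=4)        # conditioned on the prompt
bad = rng.normal(size=4)             # direction of the unwanted look
pred_neg = pred_pos + 0.1 * bad      # prediction pulled toward it

out = guided_prediction(pred_neg, pred_pos)

# Algebraically, out = pred_pos - (scale - 1) * 0.1 * bad: guidance
# overshoots past pred_pos, directly away from the negative direction.
print(np.allclose(out, pred_pos - 0.65 * bad))
```

This is roughly what the `negative_prompt` argument of Stable Diffusion pipelines does under the hood, which is why finetuning on known-bad images and then putting them behind a negative prompt can steer generation away from that whole region.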

smrtinsert · 2 years ago
Great LoRA post, thanks for sharing this again! Not sure how I missed it, as I'm especially interested in SD content.
eigenvalue · 2 years ago
This seems like a pretty pointless "arms race" or "cat and mouse game". People who want to train generative image models and who don't care about what artists think about it at all can just do some basic post-processing on the images that is just enough to destroy the very carefully tuned changes this Nightshade algorithm makes. Something like resampling it to slightly lower resolution and then using another super-resolution model on it to upsample it again would probably be able to destroy these subtle tweaks without making a big difference to a human observer.

In the future, my guess is that courts will generally be on the side of artists because of societal pressures, and artists will be able to challenge any image they find and have it sent to yet another ML model that can quickly adjudicate whether the generated image is "too similar" to the artist's style (which would also need to be dissimilar enough from everyone else's style to give a reasonable legal claim in the first place).

Or maybe artists will just give up on trying to monetize the images themselves and focus only on creating physical artifacts, similar to how independent musicians make most of their money nowadays from touring and selling merchandise at shows (plus Patreon). Who knows? It's hard to predict the future when there are such huge fundamental changes that happen so quickly!

johnnyanmac · 2 years ago
>Or maybe artists will just give up on trying to monetize the images themselves and focus only on creating physical artifacts, similar to how independent musicians make most of their money nowadays from touring and selling merchandise at shows (plus Patreon).

As is, art already isn't a sustainable career for most people who can't get a job in industry. The most common monetization is either commissions or hiding extra content behind a pay wall.

To be honest, I can see more proverbial "furry artists" sprouting up in a cynical timeline. I imagine, like with every other big tech, that the 18+ side of this will be clamped down on hard by the various powers that be. Which means NSFW stuff will be shielded a bit from the advancement, and you'll either need to find underground training models or go back to an artist.

Gigachad · 2 years ago
>need to find underground training models

It's not particularly that hard. The furry nsfw models are already the most well developed and available models you can get right now. And they are spitting out stuff that is almost indistinguishable from regular art.

raincole · 2 years ago
> This seems like a pretty pointless "arms race" or "cat and mouse game".

If there is any "point" to this, it's that it's going to push the AI models to become better at capturing how humans see things.

jMyles · 2 years ago
> musicians make most of their money nowadays from touring and selling merchandise at shows

Be reminded that this is - and has always been - the mainstream model of the lineages of what have come to be called "traditional" and "Americana" and "Appalachian" music.

The Grateful Dead implemented this model with great finesse, sometimes going out of their way to eschew intellectual property claims over their work, in the belief that such claims only hindered their success (and of course, they eventually formalized this advocacy and named it "The Electronic Frontier Foundation" - it's no coincidence that EFF sprung from deadhead culture).

mihaaly · 2 years ago
It is a funny appearance (weird viewpoint) that artists are furious about losing their monopoly on stealing and cloning components from other artists and recomposing them into a similar but new thing.

And that OpenArt, on the analogy of OpenSource, is a non-existent thing. (I know, I know, different things; source code is not for the generic audience and can be hidden at will, unlike art. Just having some generative thoughts artefact here ;) )

hackernewds · 2 years ago
The point is you could circumvent one Nightshade, but as long as the cat-and-mouse game continues there can be more.
marcinzm · 2 years ago
This feels like it'll actually help make AI models better versus worse once they train on these images. Artists are basically, for free, creating training data that conveys what types of noise does not change the intended meaning of the image to the artist themselves.
r3trohack3r · 2 years ago
The number of people who are going to be able to produce high fidelity art with off the shelf tools in the near future is unbelievable.

It’s pretty exciting.

Being able to find a mix of styles you like and apply them to new subjects to make your own unique, personalized, artwork sounds like a wickedly cool power to give to billions of people.

kredd · 2 years ago
In terms of art, population tends to put value not on the result, but origin and process. People will just look down on any art that’s AI generated in a couple of years when it becomes ubiquitous.
petesergeant · 2 years ago
> population tends to put value not on the result, but origin and process

I think population tends to value "looks pretty", and it's other artists, connoisseurs, and art critics who value origin and process. Exit Through the Gift Shop sums this up nicely

Aerroon · 2 years ago
I disagree. I definitely value modern digital art more than most historical art, because it just looks better. If AI art looks better (and in some cases it does) then I'll prefer that.
redwall_hp · 2 years ago
This is already the case. Art is a process, a form of human expression, not an end result.

I'm sure OpenAI's models can shit out an approximation of a new Terry Pratchett or Douglas Adams novel, but nobody with any level of literary appreciation would give a damn unless fraud was committed to trick readers into buying it. It's not the author's work, and there's no human message behind it.

Theodores · 2 years ago
https://en.wikipedia.org/wiki/Labor_theory_of_value

According to Marx, value is only created with human labour. This is not just a Marxist theory, it is an observation.

There may be lots of over-priced junk that makes you want to question this idea. But let's not nit-pick on that.

In two years time people will not see any value in AI art, quite correctly because there is not much human labour in creating it.

MacsHeadroom · 2 years ago
Nope, but I already look down on artists who refuse to integrate generative AI into their processes.
falcolas · 2 years ago
> Being able to find a mix of styles you like and apply them to new subjects to make your own unique, personalized, artwork sounds like a wickedly cool power to give to billions of people.

And in the process, they will obviate the need for Nightshade and similar tools.

AI models ingesting AI generated content does the work of destroying the models all by itself. Have a look at "Model Collapse" in relation to generative AI.

23B1 · 2 years ago
It'll be about as wickedly cool as the ability to get on the internet, e.g. commoditized, transactional, and boring.
sebzim4500 · 2 years ago
I know this is an unpopular thing to say these days, but I still think the internet is amazing.

I have more access to information now than the most powerful people in the world did 40 years ago. I can learn about quantum field theory, about which pop star is allegedly fucking which other pop star, etc.

If I don't care about the law I can read any of 25 million books or 100 million scientific papers all available on Anna's Archive for free in seconds.

__loam · 2 years ago
And we only had to alienate millions of people from their labor to do it.
r3trohack3r · 2 years ago
Absolutely agree we should allow people to accumulate equity through effective allocation of their labor.

And I also agree that we shouldn’t build systems that alienate people from that accumulated equity.

DennisAleynikov · 2 years ago
Yeah, sadly those millions of people don’t matter in the grand scheme of things and were never going to profit off their work long term
mensetmanusman · 2 years ago
Is this utilitarianism?
BeFlatXIII · 2 years ago
Worth it.
password54321 · 2 years ago
Not really. There is a reason why we find a realistic painting more fascinating than a photo, and why some still practice it. The effort put in by another artist does affect our enjoyment.
wruza · 2 years ago
For me it doesn’t. I’m generating images, realistic, 2.5d, 2d and I like them as much. I don’t feel (or miss) what you described. Or what any other arts guy describes, for that matter. Arts people are different, because they were trained to feel something a normal person wouldn’t. And that’s okay, a normal person without training wouldn’t see how much beauty and effort there is in an algorithm or a legal contract as well.
dartharva · 2 years ago
The word "we" is doing a lot of heavy lifting here. A large majority of consumers can't even tell apart AI-generated from handmade, let alone care who or what made the thing.
chris-orgmenta · 2 years ago
I want progressive fees on copyright/IP/patent usage, and worldwide gov cooperation/legislation (and perhaps even worldwide ability to use works without obtaining initial permission, although let's not go into that outlandish stuff)

I want a scaling license fee to apply (e.g. % pegged to revenue. This still has an indirect problem with different industries having different profit margins, but still seems the fairest).

And I want the world (or EU, then others to follow suit) to slowly reduce copyright to 0 years* after artists death if owned by a person, and 20-30 years max if owned by a corporation.

And I want the penalties for not declaring usage** / not paying fees, to be incredibly high for corporations... 50% gross (harder) / net (easier) profit margin for the year? Something that isn't a slap on the wrist and can't be wriggled out of quite so easily, and is actually an incentive not to steal in the first place.)

[*]or whatever society deems appropriate.

[**]Until auto-detection (for better or worse) gets good enough.

IMO that would allow personal use, encourages new entrants to market, encourages innovation, incentivises better behaviour from OpenAI et al.

Dylan16807 · 2 years ago
> And I want the world (or EU, then others to follow suit) to slowly reduce copyright to 0 years* after artists death if owned by a person, and 20-30 years max if owned by a corporation.

Why death at all?

It's icky to trigger soon after death, it's bad to have copyright vary so much based on author age, and it's bad for many works to still have huge copyright lengths.

It's perfectly fine to let copyright expire during the author's life. 20-30 years for everything.

wraptile · 2 years ago
Extremely naive to think that any of this could be enforced to any adequate level. Copyright is fundamentally broken and putting some plasters on it is not going to do much especially when these plasters are several decades too late.

Deleted Comment