Diffusion models learn to denoise random noise into images that resemble their training data. But what if the training data itself is noisy? Then the model will learn to reproduce that noise in its outputs.
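A toy sketch of why this happens (my own illustration, not anyone's actual training code): the denoiser's regression target is the conditional mean of the training data given the diffused sample, so whatever variance the training set carries, including watermark noise, shows up in what the model learns to generate. Here the "images" are just scalars, and the noise schedule value is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": scalars drawn from the true clean distribution N(0, 1).
clean = rng.normal(0.0, 1.0, size=10_000)

# Corrupted training set: the same data plus watermark-style noise.
watermark_noise = rng.normal(0.0, 0.5, size=10_000)
noisy = clean + watermark_noise

# Forward diffusion step q(x_t | x_0): mix data with Gaussian noise.
alpha_bar = 0.5  # hypothetical cumulative noise level at some timestep t

def forward_diffuse(x0):
    eps = rng.normal(0.0, 1.0, size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

x_t = forward_diffuse(noisy)

# The denoiser is trained to recover x_0 from x_t, so in expectation it
# reproduces the distribution of x_0. If x_0 already contains watermark
# noise, the model's "clean" outputs inherit that extra variance.
print(np.var(clean))  # close to 1.0
print(np.var(noisy))  # close to 1.25: this is what the model would learn
```

The point is that the model can't tell watermark noise from signal: both are part of `x_0`, the thing it is rewarded for reconstructing.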
There's a reason dataset curators try to remove noisy, blurry, or low-quality images: simply put, what you put in is what you get out. While I don't agree with intentionally degrading image quality (it ruins them for humans as well as for AI), I don't see why this wouldn't work.
This kind of watermark makes images worse for everyone, not just for the AI. Do you really want to see this on every website you visit?
Regulation is constructive, deregulation is destructive.
Got anything else?