rqtwteye · a year ago
This is starting to be silly. What's next?

- "Google Docs" allowed me to write death threat letters.

- My Brother printer allowed me to print them.

- The postal service delivered them.

- My Sony camera allowed me to take nude pictures of my neighbor through the bathroom window

We can't safeguard every tool. And I predict negative consequences will come from trying.

zooq_ai · a year ago
I also hope Google doesn't get 'scared' by these articles anymore. Else they will have Flux, Grok, OpenAI eat their lunch
FormerBandmate · a year ago
AI safety people literally want the world to be a Fisher-Price version of 1984. It’s the gun emoji writ large
forgetfulness · a year ago
Also, what about the converse? Corporations or governments doctoring images where they show undamaged and safe environments with happy people where there's a crisis they're trying to cover up, isn't it just as damaging as the images conveying negative emotions?


ToucanLoucan · a year ago
I mean every AI model I've used for anything apart from AI Dungeon's early and free ones is already completely Stepford Wives-ified to a degree that's incredibly annoying. What on earth is the point of a generating machine that can't generate, I dunno, a caterpillar with boobs? Or an anime girl posing in the buff with a chainsaw?

Like what on earth is the point of a "make anything I want" machine where whatever I want has to pass review with the focus testing groups of every major corpo in the world, lest their precious advertisements end up within 100 feet of something your average suburbanite concerned middle-aged mother would find off-putting?

johnthewise · a year ago
Just hang in there. As model capabilities increase, people care less about AI ethicists' opinions, and that creates pressure on corporations to deliver. Meta actually canceled an early generative text model called Galactica because journalists and people from the Department of Truth made a huge scene out of it[1]. Now we have Llama 3, which you can fine-tune to do whatever. It's a performative act, and companies will just stop listening to these people altogether.

1:https://www.technologyreview.com/2022/11/18/1063487/meta-lar...

mrtksn · a year ago
"I can't believe that T-mobile still hasn't canceled the mobile service of John after all that BS he said to Susan"
multimoon · a year ago
The tool is doing exactly what you asked it to do, and being surprised about that is silly.

Censorship is never a good thing.

snakeyjake · a year ago
>Censorship is never a good thing.

Your username, your association with a specific programming language, a misconfigured vehicle enthusiast forum, and a very distinctive way you use punctuation marks in your online comments (not the above comment, but many others) have led me to determine, with high confidence, your name and place of residence.

Would HN be censoring me if they deleted a comment containing this information?

I say yes. I also say that is a net positive, and their right.

flappyeagle · a year ago
This is the dumbest genre of article ever conceived. I can't begin to understand the mental confusion needed to motivate someone to write it.

What are they objecting to? Art? I can look at disturbing imagery by closing my eyes and imagining it. Let's ban my visual cortex.

Stuff like this gives journalists a bad name; it's selfish. It erodes trust in the institution of the press for nothing more than a deadline and some clicks.


UtopiaPunk · a year ago
Given that we live in a period of rampant misinformation and general media illiteracy, it's difficult for me to imagine this tool being a net positive for our societies. On the one hand, such tools can be used to generate false images, as the article demonstrates. On the other hand, the existence and widespread availability of such tools will bring much more doubt and skepticism toward any photos that challenge one's beliefs or the status quo. Are you trying to show me photographic evidence to prove that something is true? Well, now I can handwave it away as probably an AI-generated image.

Maybe something will break, and the general population will become excellent at citing and verifying sources as a response to rampant fakes. However, given the generally sorry state of news and journalism, and seeing how many people on social media believe that AI slop is real, I'm skeptical.

amarant · a year ago
I think this will mostly be an issue in online discussions, and those were always useless anyway.
WilTimSon · a year ago
Most discussions are online now, and the content generated by AI will most definitely make its way into the "real world". The recent case of people getting food poisoning due to an AI-generated mushroom foraging book is a prime example.[0]

[0]: https://news.ycombinator.com/item?id=41269514

moralestapia · a year ago
Least punk argument ever.
paxys · a year ago
I know it's trying to do the opposite but this article comes off as a great ad for the feature. All those photos look great.
jeroen · a year ago
The car and the bike in the first photo ( http://cdn.vox-cdn.com/uploads/chorus_asset/file/25582867/ai... ) look about as realistic as the average AI-generated human hand.
squarefoot · a year ago
Give it another few years and "we have evidence of you doing this and that" can become everyone's nightmare.
gotoeleven · a year ago
To be a bit of a Pollyanna, why is everyone so scandalized by AI tools that can be used to create bad things? Photoshop can, too. So can a paintbrush. No one would want to buy an electronic paintbrush that prevents you from painting particular images, so why is this so different? Just because it is easy and gives quality results?

We're basically already at the point where images and videos of unknown provenance can't be assumed to be real, so why do people pay attention to journalists getting the vapors about scandalous things AI tools can do? Wouldn't everyone rather have a completely unlocked tool to do with as they will?

tambourine_man · a year ago
Because quantity eventually leads to a qualitative change after a certain threshold.

We’ve been dealing with the possible output of, at most, a few fake images per human per day ever since photography was invented. Digital photography has maybe made it 10X easier.

Now that humans are not the limiting factor, we can have clusters of computers generating an unimaginable number of fake images 24/7. You can see how we’re not prepared for such a crisis.

amarant · a year ago
I don't even see how it's a crisis in the first place.

I mean, people believe the Earth is flat despite being shown photos clearly demonstrating it's not, and this began before digital photos were even commonplace!

AI generated photos won't give us any problem we don't already have, at least not until they're good enough to fool a forensics team and thus be admitted as evidence in a court of law. I wouldn't hold my breath for that one...

scld · a year ago
When so many topics find themselves somehow aligning with sociopolitical identity, there is no question that won't be begged.
rwmj · a year ago
Well obviously because of the ease of doing it. Typing a prompt and having the image generated for me is a lot easier than having to spend half an hour in Photoshop. It also allows phony images to be generated at scale, overwhelming any mechanisms we currently have to investigate and counter misinformation.
octopoc · a year ago
The world is full of people with an agenda and plenty of time on their hands. They are the elephant in the room, not the random trolls who do it only because it's easy.
oceanplexian · a year ago
Oh no, normal people will be able to create misinformation, which journalists have been doing for decades. Someone should think of the poor workers in media, the most trusted and honest profession.


renewiltord · a year ago
One of the things we learned from a previous business is that it’s better not to give journalists access to your things. If you can greyball them, you should. It’s harder when your offering is a consumer SaaS app, but if you have bigger enterprise deals it’s rarely beneficial.

They are not very smart people, in general, but very good at optimizing for the thing that gets them views: ragebait.

In this case, there’s nothing to be done for it. Ideally, Google spins off image models to a separate company that doesn’t hurt the brand.

The rest of us will have this tool. But perhaps it’s too much for the normies.

sowbug · a year ago
As a person who recently discovered that aphantasia is a thing, and that I have it, I am troubled that most of you have the ability to create disturbing imagery in your minds.

I will be requesting the addition of safeguards for everyone's protection.