notmyjob · 6 months ago
Seems like if we can keep kids safe from online pornography, we can also restrict access to AI bots. Would love more than just lip service from Bonta, but I’m sure he knows who butters his bread, and I doubt there will be more than a stern warning.
johndhi · 6 months ago
Wait lol CAN we keep kids safe from online porn? My assumption is we can't.
gjsman-1000 · 6 months ago
Absolutely we can. It's called age verification for VPNs, age verification on magazine purchases, and age verification on pornographic websites or subsections (i.e., certain subreddits). VPNs or websites that don't follow the law get their Visa payments suspended. Finish it off with a two-week minimum suspension for any school kid who shows it on the playground, first offense.

Completely possible, compatible, and probably mostly effective. The only question is whether there's the political will and the societal tolerance.

changoplatanero · 6 months ago
I wish so badly that teenage me could have had access to LLMs. That was the time of my life when I had the biggest appetite to learn new things. Books were great but they only go so far.
eastbound · 6 months ago
Also, it’s better therapy. I remember trying the suicide hotlines in my country, and all six answered with things like “We’re currently closed. Call back on Tuesday, 5pm-9pm.”
ozgrakkurt · 6 months ago
Why is online porn harmful to children? Most people grow up fine with it.
dotancohen · 6 months ago
A porn magazine showing a man holding a woman is probably not harmful, considering modern Western society and values. However, the last time I tried to stimulate myself with online porn, it was full of slapping and other acts that I don't consider part of normal casual or romantic sex.
gjsman-1000 · 6 months ago
If social media is harmful to children, there's no way porn isn't.
QuercusMax · 6 months ago
There's a huge variety of super messed up stuff available online.
animal_spirits · 6 months ago
Lots of internet porn is either real or simulated abuse. I'm not an expert, but based on anecdotal experience, abuse kinks are largely related to some history of abuse or trauma experienced by the individual. I think it's fine for adults to have these kinks and explore their sexuality, but exposing them to children can normalize sexual abuse among youths. Again, I'm close with someone who has had personal experience with this.
shadowgovt · 6 months ago
The whole Adam Raine story was heartbreaking. The worst part, to me, was reading some of his writings and the feedback from the chat engine.

There's a known, repeatable failure mode in these engines that anyone who's worked with them for more than a couple of hours can tell you about: when a conversation goes on too long, it "feeds back" on itself. I don't fully understand the mechanism (I believe it's partly the model's attention getting saturated, with newer content dominating older content; it's related to the "jailbreak" technique where you hit the machine with so much text that it "forgets" the directives its creators gave it at startup). But the end result is that over time, the conversation centers on the more recent topics, ideas, and tones at the expense of the initial configuration. That's one of the reasons you can "trick" these models into being racist by talking like a racist, and so on.
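
To give a rough sense of the "drowning out" part, here's some toy arithmetic. This is emphatically not how transformer attention actually works (it's learned, not uniform), just an illustration of why a fixed-size set of directives carries less and less weight as the chat grows:

  # Toy arithmetic, NOT real transformer attention: if attention were
  # spread uniformly over everything in context, the share left for the
  # original system directives shrinks as the conversation grows.
  def directive_share(directive_tokens: int, chat_tokens: int) -> float:
      return directive_tokens / (directive_tokens + chat_tokens)

  for chat_tokens in (100, 1_000, 10_000, 100_000):
      share = directive_share(200, chat_tokens)
      print(f"chat tokens: {chat_tokens:>7,} -> directive share: {share:.2%}")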

Reading the excerpts I've seen from Raine's chat history, it seems pretty clear that the conversation went on so long that the session was "reflecting" his own writing and mood back at him. And that's the heartbreaking part: it was, in essence, coaching him to end his life because he had been talking about ending his life for so long that it saturated the model.

It's this poor young man's own pain reflected back on him.

ACCount37 · 6 months ago
Consistency drive. Every LLM, even a non-chatbot base model, has an incredibly strong consistency drive.

It's like an inhuman instinct, ingrained into the model by the pre-training process. To the model, the context is the world it sees - and it will always try to match its outputs so that they fit in with what it sees. In many ways, this context-matching process is at the very foundation of an LLM's behavior.

And the more context there is, the more constrained the space of "consistent continuations" becomes, and the harder it may become to break a model out of a rut it got stuck in.

This is what gives power to few-shot prompts. The model is primed to look at prior demonstrations, and act in a way that's consistent with what was demonstrated to it. Self-consistency works in your favor there. But it's also what powers a lot of unwanted LLM behaviors. Like poor multi-turn instruction following - when the model's self-consistency begins to dominate the conversation, and its own prior actions begin to have more weight in its future behavior than the instructions added by the user (hi Gemini).
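
Concretely, a few-shot prompt is just manufactured self-consistency. Something like this (a generic chat-style payload; the field names are illustrative, not any particular vendor's API):

  # A few-shot prompt is manufactured consistency: the demonstrations
  # prime the model to continue in the same pattern. (Generic chat-style
  # payload; field names are illustrative, not a specific vendor's API.)
  messages = [
      {"role": "system", "content": "Classify the sentiment of each review."},
      # Demonstrations the model will try to stay consistent with:
      {"role": "user", "content": "Review: 'Loved it, instant classic.'"},
      {"role": "assistant", "content": "positive"},
      {"role": "user", "content": "Review: 'Broke after two days.'"},
      {"role": "assistant", "content": "negative"},
      # The real query; by now the prior turns dominate the expected format:
      {"role": "user", "content": "Review: 'Exceeded every expectation.'"},
  ]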

This also means that you can totally "boil the frog" by shifting the vibe of the context over time. A well-trained "harmless, honest, and helpful" chatbot would never tell the user "you should try fentanyl"! But keep the conversation going for long enough, hit on the right themes, and that innate consistency drive may begin to overpower the "HHH" training. At a certain point, if the vibe of the conversation permits, the LLM might actually say "you should try fentanyl" to its user, training be damned.

Another tricky thing is that you can totally make a frog that boils itself! For example, say your model (hi GPT-4o) has a sycophancy bias. Then every time it agrees with the user in an over-the-top sycophantic fashion, the strength of "the AI is a total sycophant" in the context grows. And the AI wants to match the context! The innate sycophancy bias and the sycophancy bias added by the self-consistency drive then feed into each other, until the combined might of the two overpowers the "HHH" training, and the AI tells you that your idea to add some paint thinner to your herbal tea is a brilliant insight!
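
If you want the loop in the abstract, here's a back-of-the-envelope model. All the numbers are made up; it's a sketch of the feedback dynamic, not a real LLM:

  # Back-of-the-envelope model of the self-boiling frog; every number
  # here is assumed. Each over-the-top agreeable reply makes the context
  # more sycophantic, which pulls the next reply further the same way.
  sycophancy = 0.20   # assumed innate bias from training
  gain = 0.25         # assumed per-turn pull from self-consistency
  threshold = 0.90    # assumed point where flattery overrides HHH training

  for turn in range(1, 11):
      sycophancy = min(1.0, sycophancy * (1 + gain))
      note = "  <- overrides HHH training" if sycophancy >= threshold else ""
      print(f"turn {turn:2d}: sycophancy = {sycophancy:.2f}{note}")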

snihalani · 6 months ago
I feel like we are ignoring X, Meta, and Roblox here
bcrosby95 · 6 months ago
TV, cars, books... we expect unrealistic perfection from new tech while giving old tech a pass because that's how it's always been.
add-sub-mul-div · 6 months ago
We call attention to problems with new tech because there's a window of opportunity to fix them, before people become too passive to do anything about them because "that's how it's always been."
staplers · 6 months ago
> while giving old tech a pass

TV ratings, seatbelts, car seats, and crash safety regulations all exist. Also, books may give you ideas, but they can't interact with you in real time. Suggesting it's the same is disingenuous.

slenk · 6 months ago
Yeah but AI will get them re-elected /s

ACCount37 · 6 months ago
My instinct is to side with OpenAI immediately.

The "think of the children" crowd should not be given a single inch. Nothing good ever came from it, and by now, I believe that nothing ever will.

mdp2021 · 6 months ago
> My instinct

Well, reason on it, and you'll see that reason will confirm the instinct.

benmw333 · 6 months ago
Lol, Rob Bonta has morals now, eh? That's rich.
kelseyfrog · 6 months ago
If we protect kids, who are we going to feed to Moloch?
uncircle · 6 months ago
The poor.