AI, if unregulated, could be far worse than social media. After his first chat with an AI, my kid said, "he is my new friend." I was alarmed and explained things to him, but what about the parents and guardians who are unaware of how their kids are befriending AI? Part of the problem is also how it is trained to be nice and encouraging. I am sure there are researchers talking about this, but the question is: are the policymakers listening to them?
With the current acceleration of technology this is a repeating pattern: the new thing popular with kids isn't understood by parents until it's too late.
It kind of happened for me with online games. They were a new thing, and no one knew to what degree they could be addictive and life-damaging. As a result I am probably overprotective of my own kids when it comes to anything related to games.
We are already seeing many of the effects of the social media generation and I am not looking forward to what is going to happen to the AI natives whose guardians are ill-prepared to guide them. In the end, society will likely come to grips with it, but the test subjects will pay a heavy price.
To be fair, you said it yourself: the problem is ONLINE games, so why did you generalize that to all video games?
I'm with you that those are addictive in a bad way, I was there too. But single-player games have no incentive to entertain you forever to generate more money.
I have no problems with my kids playing single or local coop games.
“Regulate”? We can’t, and shouldn’t, regulate everything. Policymakers should focus on creating rules that ensure data safety, but if someone over 18 wants to marry a chatbot… well, that’s their (stupid) choice.
Instead of trying to control everything, policymakers should educate people about how these chatbots work and how to keep their data safe. After all, not everyone who played Doom in the ’90s became a real killer, or assaults women because of YouPorn.
Society will adapt to these ridiculous new situations…what truly matters is people’s awareness and understanding.
We can, and we should, regulate some things. AI has, quite suddenly, built up billions of dollars worth of infrastructure and become pervasive in people's daily lives. Part of how society adapts to ridiculous new situations is through regulations.
I'm not proposing anything specifically, but the implication that this field should not be regulated is just foolish.
We also don't want to regulate everything. Have you seen that argument someplace, or even here? Or is it an imaginary one? The topic was regulating AI, and on that I like your thought: humans should be better educated and better informed. Should we, maybe, make a regulation to ensure that?
I understand what you’re saying, but it’s a difficult balance. I'm not saying everything needs to be regulated, and not saying we should go full-blown neoliberal. But think of some of the “social” laws we have today (in the US): no child marriages, no child labor, no smoking before 19, and no drinking before 21. These laws are in place because we understand that those who can exploit will do the exploiting, and those who can be exploited will be exploited. That being said, I don’t agree with any of the age-verification policies for adult material. Honestly, I'm not sure what the happy medium is.
There is a massive difference between a stuffed animal and an LLM. In fact, they have next to nothing in common. And as such, yes any reasonable parent would react differently to a close friendship suddenly formed with any online service.
The Stanford Prison Experiment only had 24 participants and implementation problems that should have concerned anyone with a pulse. But it’s been taught for decades.
A lot of psych research uses small samples. It’s a problem, but funding is limited and so it’s a start. Other researchers can take this and build upon it.
Anecdotally, watching people melt down over the end of ChatGPT 4o indicates this is a bigger problem than 0.1%. And business-wise, it would be odd if OpenAI kept an entire model available to serve that small a population.
The outcry when 4o was discontinued was such that OpenAI kept it available on paying subscriptions. There are at least enough people attached to certain AI voices that it warrants a tech startup spending the resources to keep an old model around. That’s probably not an insignificant population.
I have two grandkids, one's 3 years old and one's 9 months old.
I feel like I'm not really ready for everything that's going to be vying for their attention in the next couple of decades. My daughter and her husband have good practices in place already IMHO but it's going to be a pernicious beast.
It feels like a more evolved version of the relationships some people in Japan consider themselves to have with anime characters or virtual idols, often treating a doll or life-size pillow replica of that character as someone they can interact and spend time with. As with AI, the fact that it is so common suggests it must be filling an unmet need. I guess the key questions are: how do we help those stuck in that situation become unstuck, and how do we help them feel that the unmet need is fulfilled?
"Study finds..." feels clickbaity to me whenever the study is just "we found some randos on social media doing a thing". With little effort a study could find just about any type of person you want on the Internet.
Of course the loneliest 5% are going to do something like this. If it weren't for AI they'd be writing twilight fan-fic and roleplaying it on some chatroom, or giving all their money to a "saudi prince."
Seems like nothing new, just a better or more immersive form of fantasy for those who can't have the life they fantasize about.
I'd argue it'd be psychologically healthier to roleplay in a chatroom with people who are human on the other end (if that could be guaranteed, which it no longer can).
Humans can potentially be much nastier than a chatbot. There are lonely, vulnerable people who can be exploited, but there are also people who get off on manipulating others and convincing them to make profoundly self-destructive and life-altering choices.
How do we know which era of AI we're in?
This is not about regulating everything.
This is about realizing adverse effects and regulating for those.
Just like no one is selling you toxic yogurt.
You have to be careful to not overreact to things.
How do we know these examples aren’t just the 0.1% of the population that is, for all intents and purposes, “out there”?
So much of “news” is just finding these corner cases that evoke emotion, but ultimately have no impact.
But it’s hard to study users having these relationships without studying the users who have these relationships, I reckon.
Perfection is not required.
So what? We don't live in the "should" universe. We live in this one.