wongarsu · 7 months ago
A decade ago, when chat bots were a lot less useful, a common piece of etiquette was that it's fine for a bot to pretend to be a human or God or whatever, but if you directly ask it if it's a bot it has to confirm that. Basically the bot version of that myth about undercover cops.

I don't see a downside in requiring public-facing bots to do that

Not sure if that's what the proposal is about, though; the site is currently down

tenpies · 7 months ago
It bothers me that they didn't consider that this should be bilateral: a bot must confirm that it is a bot, and a human must confirm it is a human.

I wouldn't want humans pretending to be bots, for a variety of reasons.

rapind · 7 months ago
> I wouldn't want humans pretending to be bots, for a variety of reasons.

It would be so embarrassing if your AI Girlfriend / Boyfriend turned out to be real.

comex · 7 months ago
A law like that would probably be unconstitutional if it applied broadly to speech in general. Compare United States v. Alvarez, where the Supreme Court held that the First Amendment gives you the right to lie about having received military medals.

It might work in more limited contexts, like commercial speech.

yjftsjthsd-h · 7 months ago
> I wouldn't want humans pretending to be bots, for a variety of reasons.

I don't have an opinion yet, but I can't think of a specific reason to object to that (other than a default preference for honesty). Could you give an example or two?

hunter2_ · 7 months ago
banku_brougham · 7 months ago
So compel speech from a person? "Congress shall make no law..." Really the most basic civics education would benefit us all, I think there are some youtube videos about this.
EFreethought · 7 months ago
To paraphrase a line in the Bible: Are the bots here for the benefit of man, or is man here for the benefit of the bots?
BobbyTables2 · 7 months ago
I’ve had a number of encounters with ISP tech support where the humans seemed a lot like bots…

Deleted Comment

Galatians4_16 · 7 months ago
Freeze Peach says I can be a bot if I want to.
chrisco255 · 7 months ago
Some humans pull it off very well.
mjbale116 · 7 months ago
> I don't see a downside in requiring public-facing bots to do that

Your statement attempts to give an impression of a middle ground, but what it actually does is delegate the action to the human, who has limited energy and has to make hundreds of other decisions.

Your statement sounds like what a lobbyist might whisper to a regulator in an attempt to effectively neuter the bill.

People not versed in technology do not - and do not have to - know what an LLM is or what it can do.

These matters need to be resolved at the source, and we must not allow hopeful libertarian technologists to DDoS the whole society.

nico · 7 months ago
Archive links on another comment: https://news.ycombinator.com/item?id=42968477
mixxit · 7 months ago
ah the good ol '/finger [Nick|Address]
geor9e · 7 months ago
I remember the opposite. The chat bots popular in 2015 were trying to pass the Turing Test and would deny being a bot. The 2025 chat bots popular now are pretty good about explaining they are a bot. Of course, that's just generalizing the popular ones - anyone can make their own do as they please.

Dead Comment

Dead Comment

skylerwiernik · 7 months ago
This is clearly undisclosed promotion for vetto.app. alexd127's only other account activity is on this thread [https://news.ycombinator.com/item?id=42901553] for the exact same bill.
lupire · 7 months ago
scottbez1 · 7 months ago
Or the official California legislature site: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...
novok · 7 months ago
how is it better?
wilg · 7 months ago
is that against the hn guidelines or something?
PhilippGille · 7 months ago
Probably these guidelines are related:

> Please submit the original source. If a post reports on something found on another site, submit the latter.

> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.

From https://news.ycombinator.com/newsguidelines.html

soheil · 7 months ago
No, but it's still good to know in this case, especially as the website is broken and I had to refresh a few times.

Deleted Comment

cebert · 7 months ago
I wish this legislation would also apply to AI generated emails, sales outreach, and LinkedIn messages.
hedora · 7 months ago
Also, political SMS messages and the messages you get from some other random number acknowledging the "STOP" message you just sent.

(Especially if it were in a machine readable form.)

huevosabio · 7 months ago
For political stuff you need a human to press send, so it's a semi-automated system.

I've volunteered before, and at least in CA, you basically have a GUI that prepopulates messages and guides you through each number one by one.

It's really weird

zoky · 7 months ago
I don’t think those are actually bots, unfortunately. There’s a law preventing automated text messages, but there’s a loophole if the message is sent by an actual human being. So campaigns and PACs just get teams of volunteers to send out messages, likely with some software that lets them send messages to a list of numbers with a single click.
advisedwang · 7 months ago
I think it does. The wording proposed is:

> It shall be unlawful for any person to use a bot to communicate or interact with another person in California online. A person using a bot shall not be liable under this section if the person discloses that it is a bot. A person using a bot shall disclose that it is a bot if asked or prompted by another person.

(see https://legiscan.com/CA/text/AB410/2025 for definitions and source)

email, sales outreach and LinkedIn messages are all communications or interactions.

AznHisoka · 7 months ago
I already assume 99.9% of these are all generated by bots.
romanovcode · 7 months ago
Have you been to LinkedIn recently? It is the "dead internet theory" in practice. Every single post, every single comment. I do not understand what the point of it is.

I miss when it was about job/candidate search and that's it.

seattle_spring · 7 months ago
LinkedIn would be in shambles if this became law.
romanovcode · 7 months ago
It would become a much better website, to be honest, if the only things you could post were recruiting notices and job searches. The slop is just ruining the experience and diluting the purpose of the website.

Deleted Comment

newsclues · 7 months ago
And social media.
cuteboy19 · 7 months ago
it should not apply to non-interacting “bots”
rappatic · 7 months ago
The requirement doesn't kick in until 10 million monthly US users. I don't see why this shouldn't apply to smaller businesses.
godelski · 7 months ago
My understanding is that the requirement is for __platforms__ with 10m+ monthly users. That is, like Twitter but (probably) not Hacker News. And really it's more that these platforms need to provide an interface through which bots can identify themselves, and make a good-faith effort at identifying bots

  > Online platforms with over 10 million monthly U.S. visitors would need to ensure bot operators on their services comply with these expanded disclosure rules.
So even if the bot is from a small business, they still must identify themselves as long as they are on a platform like Twitter, Facebook, Reddit, etc. This feels reasonable, even if we disagree on a threshold. It doesn't make sense to enforce this for small niche forums. This would put undue burden on small players and similarly be a waste of government resources, especially because any potential damage is, by definition, smaller.

Big players love regulation because it's gatekeeping and they can always step over the gate. But the gate keeps out competition. More specifically, it squashes competition before they can even become good competitors. So I think it definitely is a good idea to regulate in this fashion. Remember that the playing field is always unbalanced, big players can win with worse products because they can leverage their weight.

somenameforme · 7 months ago
There's a practical reason with two sides to it. Most small companies simply won't know this rule even exists, if it passes. And as various other jurisdictions pass various other laws relating to AI, this will gradually turn into hundreds of laws, very possibly incompatible, spattered across countless jurisdictions - regularly changing with all sorts of opaque precedent defining what they exactly mean. You'll literally need a regulatory compliance department to keep up to date.

And such departments, staffed with lawyers, tend to be expensive. Have these laws affect small business and you greatly imperil the ability of small companies to even exist, which is one reason big companies in certain industries tend to actively lobby for regulations - a pretext of 'safety' with a reality of anticompetitive behavior. But by the time a company has 10 million regular users, it should be able to comfortably fund a compliance department.

diebeforei485 · 7 months ago
Small companies don't have to implement the ability to stop marketing or political texts if the customer replies STOP. Twilio, Amazon SNS, and other companies further down the stack do it automatically.

I assume foundational models will include it in all text they emit somehow.

Just how Zoom tells everyone "recording in progress" as soon as you press the record button, to ensure compliance. Or indeed Apple's newish call recording feature.

saucymew · 7 months ago
Contrarianly, startups should have a little maneuverability to be naughty. Any slight edge against incumbents is directionally sound policy, imho.
cogman10 · 7 months ago
I think there's an easy middle ground, 10M is huge. 1k would be much more reasonable. That gives startups more than enough runway to be naughty while also making sure they fix things up before becoming a problem.
nine_k · 7 months ago
This maneuverability already exists; see the operations of Uber, WeWork, OpenAI, etc.
advisedwang · 7 months ago
Incorrect. The 10M requirement is part of the definition of an "online platform" [1], which is mentioned in the existing statute only as NOT having an obligation [2] and is not mentioned at all in the proposed law [3] other than a formatting fix.

[1] https://law.justia.com/codes/california/code-bpc/division-7/...

[2] https://law.justia.com/codes/california/code-bpc/division-7/...

[3] https://legiscan.com/CA/text/AB410/2025

nine_k · 7 months ago
Run 10 subsidiaries via a chain of shell companies, each carefully staying under, say, 8M monthly users, all relaying approximately the same messages, both by pure coincidence, and by admittedly blatant imitation of each other!
jabroni_salad · 7 months ago
It's a horse-trading thing. You are less likely to get your bill passed if it will impact small businesses. Think less about SV startups who know what they're doing and more about some indie barber who buys an off-the-shelf scheduling assistant -- should they have to bury themselves in legal code first?

Deleted Comment

giancarlostoro · 7 months ago
For the same reason GDPR should not have applied to smaller businesses: lots of people who had otherwise perfectly fine small sites, useful and reasonably secure, could not afford the overhead due to various factors (being self-bootstrapped, too small a budget, hobbyist projects, etc.), the things that always make the internet great. The fines are in the millions at a MINIMUM; it's ridiculous.

After GDPR became the law in the EU, we saw here on HN numerous announcements of smaller sites / companies just shutting their doors. Meanwhile, bigger sites and companies can afford all the red tape, and they win all these smaller companies customers by default.

evil-olive · 7 months ago
since the Veeto website seems to be struggling, here's the official CA legislature page for the bill: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...

seems fairly narrowly written - it looks like it's removing the requirement that bot usage is illegal only if there's "intent to mislead". it seems like that'd be very difficult to prove and would result in the law not really being enforced. instead there's a much more bright-line rule - it's illegal, unless you disclose that it's a bot, and as long as you do that, you're fine.

once I was able to load the Veeto page, I noticed there's a "chat" tab with "Ask me anything about this bill! I'll use the bill's text to help answer your questions." - so somewhat ironically it seems like the bill would directly affect this Veeto website as well, because they're using a chatbot of some kind.

card_zero · 7 months ago
I enjoyed all the corrections of "Internet Web site" to "internet website".
nico · 7 months ago
Interesting. I’m afraid this won’t really go anywhere, but it’s a good conversation to have

On one hand, judging by the comments, there’s quite a bit of interest in disclosure

On the other hand, corporations and big advertisers (spammers?) might not really want it. Or is there a positive aspect in disclosure for them?

johnnyanmac · 7 months ago
>I’m afraid this won’t really go anywhere

California tends to be pretty good (well, "good" relative to federal and other states) at getting consumer-friendly bills passed. They don't always work, but the intent of many bills feels focused on the people.

>On the other hand, corporations and big advertisers (spammers?) might not really want it.

Of course they don't. I would love one day seeing how many bots there truly are on Reddit and seeing how close to Dead Internet we are.

I don't think HN hits the 10m threshold to require disclosure. But I also doubt many bots are on here.

soheil · 7 months ago
As bots get smarter we need to give them more access, not less. People have been used as useful idiots and puppets for far too long; I don't see why we should make an exception for bots.
Spivak · 7 months ago
If having to disclose to your users/customers that they're interacting with a bot makes them stop interacting then that sucks for you. I work in this space and we proudly advertise when you're talking to a bot and our users actually choose it over the option to connect to a human.

Our staff do better work, but the bot is instant. It seems people would rather go back and forth a few times and be in the driver's seat than wait on a person.

somenameforme · 7 months ago
Because of scale. Fool one person and you're a conman, fool a million and you're a politician. But with software anybody can jump to arbitrarily high scales, limited only by money.
tzury · 7 months ago
Industry will get there pretty soon regardless of that bill or another, since there is a paradigm shift.

The conversation is no longer about scraping bots versus genuine human visitors. Today’s reality involves legitimate users leveraging AI agents to accomplish tasks—like scanning e-commerce sites for the best deals, auto-checking out, or running sophisticated data queries. Traditional red flags (such as numerous rapid requests or odd navigation flows) can easily represent honest customer behavior once enhanced by an AI assistant.

see what I posted a couple of weeks ago:

https://blog.tarab.ai/p/bot-management-reimagined-in-the

kijin · 7 months ago
I think you still care too much about the visitor's identity and agency.

Step back a bit and ask why anyone ever tried to throttle or block bots in the first place. Most of the time, it's because they waste the service operator's resources.

From a service operator's point of view, there is no need to distinguish an AI agent that rapidly requests 1000 pages to find the best deal from a dumb bot that scrapes the same 1000 pages for any other purpose. Even a human with fast fingers can open hundreds of tabs in a minute, with the same impact on your AWS bill. You have every right to kick them all out if they strain your resources and you don't want their business. Whether they carry a token of trust is as irrelevant as whether they are human. The problem has always been about behavior, not agency.
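A minimal sketch of that behavior-based approach: a token-bucket throttle keyed per client, which never asks whether the requester is human, bot, or AI agent, only how fast they're hitting you. The names and limits here are hypothetical, not from any particular product:

```python
import time


class TokenBucket:
    """Throttle by request rate alone -- identity-agnostic."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # burst allowance
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over budget: throttle, regardless of who is asking


# One bucket per client key (IP, session, API token -- whatever you track).
buckets: dict[str, TokenBucket] = {}


def check(client_key: str, rate: float = 5.0, capacity: float = 10.0) -> bool:
    bucket = buckets.setdefault(client_key, TokenBucket(rate, capacity))
    return bucket.allow()
```

The human with a hundred open tabs and the scraper hammering the same endpoint hit the same limit; the polite AI agent pacing its requests sails through.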

doctorpangloss · 7 months ago
Should PUBG mobile players be told they are winning against bots?

Should psychics tell you they cannot really speak for the dead?

spankalee · 7 months ago
> Should psychics tell you they cannot really speak for the dead?

Yes?

II2II · 7 months ago
Is there much of a point in telling psychics that they must tell people they cannot really speak for the dead? Outside of a few outliers, e.g. those who admit that it is for entertainment or those who have psychological problems, those who practice it know it is a scam.

I'm not sure whether those selling AI are in the same boat. On the one hand, the technology does produce results. On the other hand, the product clearly isn't what people think of as intelligence.

tbrownaw · 7 months ago
That really does sound like a useful rule.

/s

johnnyanmac · 7 months ago
>Should PUBG mobile players be told they are winning against bots?

They don't already? Games tend to be one of the better platforms of informing of an AI opponent vs. a Human one.

>Should psychics tell you they cannot really speak for the dead

We have to prove they are robots first.

bhaney · 7 months ago
Yes and yes
doctorpangloss · 7 months ago
I'm mocking the idea of a consistent answer. Of course you can be consistent, but you'll wind up with a bunch of rules that are stupid.

Do we make it illegal to pretend Santa is a real person? Do you see? It's exactly the same thing.

But I don't think that's a good idea, about Santa. Chatbots disclosing they're bots makes sense, but not because it's logical or whatever. And I'm not even sure it makes sense in the context of entertainment, where suspending disbelief is common.