A decade ago, when chat bots were a lot less useful, a common piece of etiquette was that it's fine for a bot to pretend to be a human or God or whatever, but if you directly ask it if it's a bot, it has to confirm that. Basically the bot version of that myth about undercover cops.
I don't see a downside in requiring public-facing bots to do that
Not sure if that's what the proposal is about though, it's currently down
A law like that would probably be unconstitutional if it applied broadly to speech in general. Compare United States v. Alvarez, where the Supreme Court held that the First Amendment gives you the right to lie about having received military medals.
It might work in more limited contexts, like commercial speech.
> I wouldn't want humans pretending to be bots, for a variety of reasons.
I don't have an opinion yet, but I can't think of a specific reason to object to that (other than a default preference for honesty). Could you give an example or two?
So compel speech from a person? "Congress shall make no law..." Really, the most basic civics education would benefit us all; I think there are some YouTube videos about this.
> I don't see a downside in requiring public-facing bots to do that
Your statement attempts to give an impression of a middle ground, but what it actually does is delegate the action to the human, who has limited energy and has to make hundreds of other decisions.
Your statement sounds like what a lobbyist might whisper to a regulator in an attempt to effectively neuter the bill.
People not versed in technology do not - and do not have to - know what an LLM is or what it can do.
These matters need to be resolved at the source, and we must not allow hopeful libertarian technologists to DDoS the whole society.
I remember the opposite. The chat bots popular in 2015 were trying to pass the Turing Test and would deny being a bot. The chat bots popular now in 2025 are pretty good about explaining that they are bots. Of course, that's just generalizing from the popular ones; anyone can make their own do as they please.
This is clearly undisclosed promotion for vetto.app. alexd127's only other account activity is on this thread [https://news.ycombinator.com/item?id=42901553] for the exact same bill.
> Please submit the original source. If a post reports on something found on another site, submit the latter.
> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.
From https://news.ycombinator.com/newsguidelines.html
I don’t think those are actually bots, unfortunately. There’s a law preventing automated text messages, but there’s a loophole if the message is sent by an actual human being. So campaigns and PACs just get teams of volunteers to send out messages, likely with some software that lets them send messages to a list of numbers with a single click.
> It shall be unlawful for any person to use a bot to communicate or interact with another person in California online. A person using a bot shall not be liable under this section if the person discloses that it is a bot. A person using a bot shall disclose that it is a bot if asked or prompted by another person.
(see https://legiscan.com/CA/text/AB410/2025 for definitions and source)
Email, sales outreach, and LinkedIn messages are all communications or interactions.
Have you been to LinkedIn recently? It is the "dead internet theory" in practice. Every single post, every single comment. I do not understand the point of it.
I miss when it was about job/candidate search and that's it.
To be honest, it would become a much better website if the only things you could post were that you are recruiting or that you are searching for a job. The slop is just ruining the experience and diluting the purpose of the website.
My understanding is that the requirement is for __platforms__ with 10m+ monthly users. That is, like Twitter but (probably) not Hacker News. And really it is more that these platforms need to provide an interface in which bots can identify themselves, and make a good-faith effort at identifying bots.
> Online platforms with over 10 million monthly U.S. visitors would need to ensure bot operators on their services comply with these expanded disclosure rules.
So even if the bot is from a small business, it still must identify itself as long as it is on a platform like Twitter, Facebook, Reddit, etc. This feels reasonable, even if we disagree on a threshold. It doesn't make sense to enforce this for small niche forums. That would put undue burden on small players and similarly be a waste of government resources, especially because any potential damage is, by definition, smaller.
Big players love regulation because it's gatekeeping and they can always step over the gate. But the gate keeps out competition. More specifically, it squashes competition before they can even become good competitors. So I think it definitely is a good idea to regulate in this fashion. Remember that the playing field is always unbalanced, big players can win with worse products because they can leverage their weight.
There's a practical reason with two sides to it. Most small companies simply won't know this rule even exists, if it passes. And as various other jurisdictions pass various other laws relating to AI, this will gradually turn into hundreds of laws, very possibly incompatible, spattered across countless jurisdictions - regularly changing with all sorts of opaque precedent defining what they exactly mean. You'll literally need a regulatory compliance department to keep up to date.
And such departments, staffed with lawyers, tend to be expensive. Have these laws affect small business and you greatly imperil the ability of small companies to even exist, which is one reason big companies in certain industries tend to actively lobby for regulations - a pretext of 'safety' with a reality of anticompetitive behavior. But by the time a company has 10 million regular users, it should be able to comfortably fund a compliance department.
Small companies don't have to implement the ability to stop marketing or political texts if the customer replies STOP. Twilio, Amazon SNS, and other companies further down the stack do it automatically.
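To illustrate the parent's point, here is a minimal sketch using Twilio's Python helper library (credentials and phone numbers are made up). Notice there is no STOP-handling logic in the application code; per the comment above, that layer of the stack records the opt-out and blocks future sends on its own.

    from twilio.rest import Client

    # Hypothetical credentials and numbers, for illustration only.
    client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

    message = client.messages.create(
        body="Flash sale this weekend! Reply STOP to opt out.",
        from_="+15005550006",  # your provisioned sending number
        to="+15558675309",
    )
    print(message.sid)
    # If the recipient replies STOP, the platform records the opt-out and
    # rejects further sends to that number -- no application code required.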
I assume foundational models will include it in all text they emit somehow.
Just like how Zoom tells everyone "recording in progress" as soon as you press the record button, to ensure compliance. Or indeed Apple's newish call-recording feature.
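Purely speculative, but a machine-readable disclosure wouldn't need anything exotic. A sketch of one possible shape; every field name here is invented, not taken from the bill or any vendor API:

    import json

    # Hypothetical disclosure envelope attached to every bot reply.
    DISCLOSURE = {
        "is_bot": True,
        "operator": "Example Corp",
        "notice": "You are chatting with an automated system.",
    }

    def wrap_reply(model_text: str) -> str:
        # Stamp each outgoing message so both humans and crawlers can tell.
        return json.dumps({"disclosure": DISCLOSURE, "text": model_text})

    print(wrap_reply("Your order ships Tuesday."))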
I think there's an easy middle ground, 10M is huge. 1k would be much more reasonable. That gives startups more than enough runway to be naughty while also making sure they fix things up before becoming a problem.
Incorrect. The 10M requirement is part of the definition of an "online platform" [1], which is only mentioned in the existing statute as NOT having an obligation [2], and is not mentioned at all in the proposed law [3] other than a formatting fix.
[1] https://law.justia.com/codes/california/code-bpc/division-7/...
[2] https://law.justia.com/codes/california/code-bpc/division-7/...
[3] https://legiscan.com/CA/text/AB410/2025
Run 10 subsidiaries via a chain of shell companies, each carefully staying under, say, 8M monthly users, all relaying approximately the same messages, both by pure coincidence, and by admittedly blatant imitation of each other!
It's a horse-trading thing. You are less likely to get your bill passed if it will impact small businesses. Think less about SV startups who know what they're doing and more about some indie barber who buys an off-the-shelf scheduling assistant -- should they have to bury themselves in legal code first?
For the same reason GDPR should not have applied to smaller businesses: lots of people who had otherwise perfectly fine small sites that were useful and reasonably secure could not afford the overhead due to various factors -- being self-bootstrapped, having too small a budget, being hobbyist projects, etc., the things that always make the internet great. The fines are in the millions at a MINIMUM; it's ridiculous.
After GDPR became the law in the EU, we saw here on HN numerous announcements of smaller sites / companies just shutting their doors. Meanwhile, bigger sites and companies can afford all the red tape, and they win all these smaller companies customers by default.
seems fairly narrowly written - it looks like it's removing the requirement that bot usage is illegal only if there's "intent to mislead". it seems like that'd be very difficult to prove and would result in the law not really being enforced. instead there's a much more bright-line rule - it's illegal, unless you disclose that it's a bot, and as long as you do that, you're fine.
once I was able to load the Veeto page, I noticed there's a "chat" tab with "Ask me anything about this bill! I'll use the bill's text to help answer your questions." - so somewhat ironically it seems like the bill would directly affect this Veeto website as well, because they're using a chatbot of some kind.
On one hand, judging by the comments, there's quite a bit of interest in disclosure.
On the other hand, corporations and big advertisers (spammers?) might not really want it. Or is there a positive aspect in disclosure for them?
California tends to be pretty good (well, "good", relative to federal and other states) at getting consumer-friendly bills passed. They don't always work, but the intent of many bills feels focused on the people.
>On the other hand, corporations and big advertisers (spammers?) might not really want it.
Of course they don't. I would love to see one day how many bots there truly are on Reddit, and how close to the Dead Internet we are.
I don't think HN hits the 10m threshold to require disclosure. But I also doubt many bots are on here.
As bots get smarter we need to give them more access, not less. People have been used as useful idiots and puppets for far too long; I don't see why we should make an exception for bots.
If having to disclose to your users/customers that they're interacting with a bot makes them stop interacting, then that sucks for you. I work in this space and we proudly advertise when you're talking to a bot, and our users actually choose it over the option to connect to a human.
Our staff do better work, but the bot is instant. It seems people would rather go back and forth a few times and be in the driver's seat than wait on a person.
Because of scale. Fool one person and you're a conman, fool a million and you're a politician. But with software anybody can jump to arbitrarily high scales, limited only by money.
Industry will get there pretty soon regardless, with this bill or another, since there is a paradigm shift under way.
The conversation is no longer about scraping bots versus genuine human visitors. Today's reality involves legitimate users leveraging AI agents to accomplish tasks: scanning e-commerce sites for the best deals, auto-checking out, or running sophisticated data queries. Traditional red flags (such as numerous rapid requests or odd navigation flows) can easily represent honest customer behavior once enhanced by an AI assistant.
See what I posted a couple of weeks ago: https://blog.tarab.ai/p/bot-management-reimagined-in-the
I think you still care too much about the visitor's identity and agency.
Step back a bit and ask why anyone ever tried to throttle or block bots in the first place. Most of the time, it's because they waste the service operator's resources.
From a service operator's point of view, there is no need to distinguish an AI agent that rapidly requests 1000 pages to find the best deal from a dumb bot that scrapes the same 1000 pages for any other purpose. Even a human with fast fingers can open hundreds of tabs in a minute, with the same impact on your AWS bill. You have every right to kick them all out if they strain your resources and you don't want their business. Whether they carry a token of trust is as irrelevant as whether they are human. The problem has always been about behavior, not agency.
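A minimal sketch of that behavior-first approach (limits and names invented for illustration): a token bucket keyed by client identifier that meters request rate and never asks whether the caller is human.

    import time
    from collections import defaultdict

    RATE = 5.0    # tokens refilled per second
    BURST = 20.0  # bucket size: bursts are fine, sustained floods are not

    # client_id -> (tokens remaining, time of last refill)
    buckets = defaultdict(lambda: (BURST, time.monotonic()))

    def allow_request(client_id: str) -> bool:
        tokens, last = buckets[client_id]
        now = time.monotonic()
        tokens = min(BURST, tokens + (now - last) * RATE)  # refill
        if tokens >= 1.0:
            buckets[client_id] = (tokens - 1.0, now)
            return True
        buckets[client_id] = (tokens, now)
        return False  # respond with HTTP 429, whoever (or whatever) is asking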
Is there much of a point in telling psychics that they must tell people they cannot really speak for the dead? Outside of a few outliers, e.g. those who admit that it is for entertainment or those who have psychological problems, those who practice it know it is a scam.
I'm not sure whether those selling AI are in the same boat. On the one hand, the technology does produce results. On the other hand, the product clearly isn't what people think of as intelligence.
I'm mocking the idea of a consistent answer. Of course you can be consistent, but you'll wind up with a bunch of rules that are stupid.
Do we make it illegal to pretend Santa is a real person? Do you see? It's exactly the same thing.
But I don't think that's a good idea, about Santa. Chatbots disclosing they're bots makes sense, but not because it's logical or whatever. And I'm not even sure it makes sense in the context of entertainment, where suspending disbelief is common.
I wouldn't want humans pretending to be bots, for a variety of reasons.
It would be so embarrassing if your AI Girlfriend / Boyfriend turned out to be real.
https://legiscan.com/CA/text/AB410/2025
(Especially if it were in a machine readable form.)
I've volunteered before, and at least in CA, you basically have a GUI that prepopulates messages and guides you through each number one by one.
It's really weird.
I don't think HN hits the 10m threshold to require disclosure. But I also doubt many bots are on here.
Should psychics tell you they cannot really speak for the dead?
Yes?
/s
They don't already? Games tend to be one of the better platforms at disclosing an AI opponent vs. a human one.
>Should psychics tell you they cannot really speak for the dead
We have to prove they are robots first.