A friend of mine co-runs a semi-popular, semi-niche news site (for more than a decade now), and complains that traffic has recently risen due to bots masquerading as humans.
How would they know? Well, because Google, in its omniscience, started to downrank them for faking views with bots (which they do not do): it shows the bot percentage in traffic stats, and that percentage skyrocketed relative to non-bot traffic (now less than 50%) as the site started to fall from the front page (feeding the vicious circle). Presumably, Google does not know or care whether it is a bot when it serves ads, but correlates it later with the metrics it has from other sites that use GA or ads.
Or, perhaps, Google spots the same anomalies that my friend (an old-school sysadmin who pays attention to logs) did, such as the increase in traffic along with never-before-seen popularity among iPhone users (who are so tech-savvy that they apparently do not require CSS), or users from Dallas who famously love their QQBrowser. I'm not going to list all the telltale signs, as the crowd here is too hyped on LLMs (which is our going theory so far; it is very timely), but my friend hopes Google learns them quickly.
These newcomers usually fake the UA, use inconspicuous Western IPs (requests from Baidu/Tencent data center ranges do sign themselves as bots in the UA), ignore robots.txt, and load many pages very quickly.
I would assume the increase in bot traffic also applies to feeds, since they are just as useful for LLM training purposes.
My friend does not actually engage in stringent filtering like Rachel does, but I wonder how soon it becomes actually infeasible to operate a website with actual original content (which my friend co-writes) without either that or resorting to Cloudflare or the like for protection because of the domination of these creepy-crawlies.
Edit: Google already downranked them, not threatened to downrank. Also, traffic rose but did not skyrocket, but relative amount of bot traffic skyrocketed. (Presumably without downranking the traffic would actually skyrocket.)
Are you saying that Google down-ranked them in search engine rankings for user behaviour in AdWords? Isn't that an abuse of monopoly? It still surprises me a little bit.
It's not that hard to dominate bots. I do it for fun, I do it for profit. Block datacenters. Run bot motels. Poison them. Lie to them. Make them have really really bad luck. Change the cost equation so that it costs them more than it costs you.
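By way of illustration, here is a minimal sketch of one of those ideas, a "bot motel", using only the Python standard library; the path scheme, delay, and page contents are invented for the example, and a real setup would route only suspected bots here rather than listening on its own port:

# Hypothetical "bot motel": every page is generated on the fly, loads slowly,
# and links only to more junk pages, so a misbehaving crawler wastes its time
# while costing the server almost nothing.
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class BotMotel(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(5)  # slow responses make the crawl expensive for the bot
        links = "".join(
            f'<a href="/trap/{random.randint(0, 10**9)}">more</a> '
            for _ in range(20)
        )
        body = f"<html><body><p>Nothing to see here.</p>{links}</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep our own logs quiet

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), BotMotel).serve_forever()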
You're thinking of it wrong, the seeds of the thinking error are here: "I wonder how soon it becomes actually infeasible to operate a website with actual original content".
Bots want original content, no? So what's the problem with giving it to them? But that's the issue, isn't it? Clearly, contextually, what you should be saying is "I wonder how soon it becomes actually infeasible to operate a website for actual organic users" or something like that. But phrased that way, I'm not sure a CDN helps (I'm not sure they don't suffer false positives which interfere with organic traffic when they intermediate; it's more security theater, because hangings and executions look good: look at the numbers of enemy dead).
Take measures that any damn fool (or at least your desired audience) can recognize.
Reading for comprehension, I think Rachel understands this.
That much is clear, yeah. The VPN they use may not be a service advertised to the public and featured in lists, however.
Some of the new traffic did come directly from Tencent data center IP ranges, and reportedly those bots signed themselves in the UA. I can't say whether they respect robots.txt, because I am told their ranges were banned along with the robots.txt tightening. However, US-IP bots that remain unblocked and fake the UA naturally ignore robots.txt rules.
I'm seeing some address ranges in the US clearly serving what must be VPN traffic from Asia, and I'm also seeing an uptick in TOR traffic looking for feeds as well as WP infra.
At my company we have seen a massive increase in bot traffic since LLMs have become mainstream. Blocking known OpenAI and Anthropic crawlers has decreased traffic somewhat so I agree with your theory.
I don’t think it’s a bot thing. Traffic is down for everyone and especially smaller independent websites. This year has been really rough for some websites.
Feed readers should be sending the If-Modified-Since header and web sites should properly recognize it and send the 304 Unmodified response. This isn’t new tech.
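For what it's worth, a minimal sketch of the client side, assuming the requests library and a placeholder feed URL (ETag/If-None-Match works the same way):

# Conditional fetch: remember Last-Modified, send it back as If-Modified-Since,
# and treat 304 as "nothing new to download or parse".
import requests

FEED_URL = "https://example.com/atom.xml"  # placeholder

def fetch(feed_url, last_modified=None):
    headers = {"If-Modified-Since": last_modified} if last_modified else {}
    resp = requests.get(feed_url, headers=headers, timeout=30)
    if resp.status_code == 304:
        return None, last_modified          # unchanged since last poll
    resp.raise_for_status()
    return resp.content, resp.headers.get("Last-Modified", last_modified)

body, stamp = fetch(FEED_URL)               # first poll: full download
body, stamp = fetch(FEED_URL, stamp)        # later polls: usually just a 304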
The people who already know that a "conditional request" means a request with an If-Modified-Since header aren't the ones who need to learn this information.
If your feed reader is refreshing every 20 minutes for a blog that is updated daily, nearly 99% of the data sent is identical. It looks like Rachel's blog is updated (roughly) weekly, so that jumps to 99.8%. It's not the least efficient thing in the world of computers, but it is definitely incurring unnecessary costs.
I opened the XML file she provides on the blog and it seems very long, but okay. Then I decided it is a good blog to subscribe to, so I went and tried to add it to my self-hosted FreshRSS instance (same IP, obviously) and I couldn't, because I got blocked/rate limited. So yes, it is aggressive, for different reasons.
It should be a timeboxed block if anything. Most RSS users are actual readers and expecting them to spend lots of time figuring out why clicking "refresh" twice on their RSS app got them blocked is totally unreasonable. I've got my feeds set up to refresh every hour. Considering the small number of people still using RSS and how lightweight it is, it's not bad enough to freak out over. At some point all Rachel's complaining and investigating will be more work than her simply interacting directly with the makers of the various readers that cause the most traffic.
There are a lot of very valid use cases where defaulting to deny for an entire 24-hour cycle after a single request is incredibly frustrating for your downstream users (a shared IP at my university means I will never get a non-429 response... and God help me if I'm testing new RSS readers...)
It's her server, so do as you please, I guess. But it's a hilariously hostile response compared to just returning less data.
If there were a widely supported standard for pagination in RSS, then it would make sense to limit the number of posts. As there isn't, sending 500kB seems eminently reasonable, and RSS readers that send conditional requests are fine.
Yes that's right. Most blogs that are popular enough to have this problem send you the last 10 post titles and links or something. THAT is why people refresh every hour, so they don't miss out.
If you understand what rate limiting is, you block them for a period of time. Let's stop being pedantic here.
72 requests per day is nothing, and acting like it's mayhem is a bit silly. And for a lot of people it would just mean getting news more slowly. Sure, OP won't publish that often, but their rate limiting is an edge case and should be treated as such. If readers are blocked until the next day and nothing gets updated, then the only person harmed is OP, for being overly bothered by their HTTP logs.
Sure, it's their server and they can do whatever they want. But all this does is hurt the people trying to reach their blog.
But it's not a "light" protocol when you're serving 36 MB per day when 500 KB would suffice. RSS/Atom is lightweight if clients play by the rules. This could also have been a news website; imagine how much traffic would be dedicated to pointless transfers of unchanged data. Traffic isn't free.
A similar problem arises from the increase in AI scraper activity. Talking to other SREs, the problem seems pretty widespread. AI companies will just hoover up data, but revisit so frequently and aggressively that it's starting to affect the transit fees for popular websites. Frequently the user-agent isn't set to anything unique, or is deliberately hidden, and the traffic originates from AWS, making it hard to target individual bad actors. Fair enough that you're scraping websites, that's part of the game when you're online, but when your industry starts to affect transit fees, then we need to talk compensation.
That's a bit disingenuous. 429s aren't "blocking", they're telling the requester that they've made too many requests and to try again later (with a value in the header). I assume the author configured this because they know how often the site typically changes. That the web server eventually stops responding if the client keeps ignoring those responses isn't that surprising, but I doubt that part was configured directly.
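To make that concrete, a minimal sketch of a client that honours the header, assuming the requests library and a Retry-After value expressed in seconds (the header may also carry an HTTP date, which this sketch does not handle):

# Polling loop that backs off when the server answers 429 Too Many Requests.
import time
import requests

def poll(url):
    resp = requests.get(url, timeout=30)
    if resp.status_code == 429:
        # Retry-After says how long the server wants us to wait before retrying.
        delay = int(resp.headers.get("Retry-After", "3600"))
        time.sleep(delay)
        return None              # skip this cycle and try again later
    resp.raise_for_status()
    return resp.content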
Semantics. 429 is an error code. Rate limiting... blocking... too many requests... ignoring... call it whatever you like, but it amounts to the same thing, namely the server isn't serving the requested content.
Like how "unlimited traffic, but will slow down to 1bps if you use more than 100gb in a month" is technically "unlimited traffic".
But for all intents and purposes, it's limited. And 429s are blocking. They include a hint towards the reason why you are blocked and when the block might expire (Retry-After doesn't promise that you'll be successful if you wait), but besides that, what's the difference compared to 403?
I would say it's disingenuous to claim that sending an HTTP status and body that are not the expected content for a period of time is not blocking the client for that period of time. You can be pedantic and claim "but they can still access the server", but in reality that client is blocked for a period of time.
I would argue that HTTP statuses are a bad design decision, because they are intended to be consumed by apps, but are not app-specific. They are effectively a part of every API automatically, without consideration of whether they are needed.
People often implement error handling using constructs like regexp matching on status codes, while with domain-specified errors it would be obvious what exactly is the range of possible errors.
Moreover, when people do implement domain errors, they just have to write more code to handle two nested levels of branching.
> I would argue that HTTP statuses are a bad design decision, because they are intended to be consumed by apps, but are not app-specific.
Perhaps put the app-specific part in the body of the reply. In the RFC they give a human specific reply to (presumably) be displayed in the browser:
HTTP/1.1 429 Too Many Requests
Content-Type: text/html
Retry-After: 3600
<html>
<head>
<title>Too Many Requests</title>
</head>
<body>
<h1>Too Many Requests</h1>
<p>I only allow 50 requests per hour to this Web site per
logged in user. Try again soon.</p>
</body>
</html>
* https://datatracker.ietf.org/doc/html/rfc6585#section-4
* https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
But if the URL is specific to an API, you can document that you will/may give further debugging details (in text, JSON, XML, whatever).
> because they are intended to be consumed by apps, but are not app-specific
Well, good luck designing any standard app-independent protocol that works and doesn't do that.
And yes, you must handle two nested levels of branching. That's how it works.
The only improvement possible to make it clearer is having codes for API-specific errors... which is roughly what 400 and 500 are, but not exactly. But then, that doesn't gain you much.
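For illustration, a sketch of what those two levels of branching tend to look like in practice; the JSON error envelope and its "code" values are hypothetical, not any particular API:

# First branch on the transport-level HTTP status, then on the
# application-level error code carried in the response body.
import requests

resp = requests.get("https://api.example.com/v1/widgets/42", timeout=30)

if resp.ok:                                    # 2xx: the domain payload itself
    widget = resp.json()
elif resp.status_code == 429:                  # protocol-level concern: back off
    print("rate limited, retry later")
else:                                          # other errors: look inside the body
    err = resp.json().get("code", "unknown")   # hypothetical app-specific code
    if err == "widget_retired":
        print("widget exists but is no longer available")
    else:
        print(f"request failed: HTTP {resp.status_code}, app code {err}")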
A colleague who should’ve known better argued that a 404 response to an API call was confusing because we were, in fact, successfully returning a response to the client. We had a long talk about that afterward.
I like Rachel's writing, but I don't understand this recent crusade against RSS readers. Sure, they should work properly and optimizations can be made to reduce bandwidth and processing power.
But... why not throw a CDN in front of your site and focus your energy somewhere else? I guess every problem has to be solved by someone, but this just seems like a very strange hill to die on.
Because this is how the open web dies - one website at a time. It's already near-dead on the client side - web browsers are not really "user" agents, but agents of oligopolist corporations, that have a stake in abusing you[1].
It's been attempted before with WAP[2], then AMP. But effectively, we're almost there.
[1]: https://www.5snb.club/posts/2023/do-not-stab/
[2]: https://news.ycombinator.com/item?id=42479172
It's an RSS feed. In that case, wait until the specified time and try again, and any missed article will appear then. If it is constantly crashing so articles never get loaded, fix that.
That’s a great point: the client software isn’t listening to the server, so the server software should break the loop by escalating to the human reader. The message response should probably be even more direct with a call to action about their feed reader (naming it, if possible) causing server problems.
My RSS reader YOShInOn subscribes to 110 RSS feeds through Superfeedr which absolves me of the responsibility of being on the other side of Rachel's problem.
With RSS you are always polling too fast or too slow; if you are polling too slow you might even miss items.
When a blog post gets published, Superfeedr hits an AWS Lambda function that stores the entry in SQS so my RSS reader can update itself at its own pace. The only trouble is that Superfeedr costs 10 cents per feed per month, which is a good deal for an active feed such as comments from Hacker News or articles from The Guardian, but is not affordable for subscribing to 2000+ indie blogs, which YOShInOn could handle just fine.
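The push leg of that pipeline is small; a minimal sketch, assuming the webhook is fronted by API Gateway, the queue URL lives in an environment variable, and the Superfeedr payload is forwarded as-is rather than parsed:

# Hypothetical Lambda handler: accept the Superfeedr webhook POST and park the
# payload in SQS so the reader can drain the queue at its own pace.
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["FEED_QUEUE_URL"]    # assumed configuration

def handler(event, context):
    body = event.get("body") or "{}"        # raw JSON pushed by Superfeedr
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
    return {"statusCode": 200, "body": "queued"}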
I might yet write my own RSS head end, but there is something to say for protocols like ActivityPub and AT Protocol.
New administration is going to be monopoly friendly.
I was honestly pleased that Gaetz was nominated for AG solely because he's big on antitrust. Or has been.
Lmao!
https://github.com/hroost/icloud-private-relay-iplist/blob/m...
(There is also a list of ranges on Apple's site, but I forget where…)
Edit: found it https://mask-api.icloud.com/egress-ip-ranges.csv
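For anyone who wants to act on that list, a minimal sketch, assuming the first column of each CSV row is a CIDR range (the other columns are ignored here):

# Check a client IP against the published iCloud Private Relay egress ranges.
import csv
import ipaddress
import urllib.request

EGRESS_CSV = "https://mask-api.icloud.com/egress-ip-ranges.csv"

def load_relay_networks():
    with urllib.request.urlopen(EGRESS_CSV) as resp:
        text = resp.read().decode()
    # Assumption: the first field of every row is a CIDR such as 1.2.3.0/24.
    return [ipaddress.ip_network(row[0], strict=False)
            for row in csv.reader(text.splitlines()) if row]

def is_private_relay(ip, networks):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

networks = load_relay_networks()
print(is_private_relay("203.0.113.7", networks))   # documentation address; expect False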
That seems hilariously aggressive to me, but her server her rules I guess.
So it means 30 months of blog post content in a single request.
Sending 0.5 MB in a single RSS request is more of a crime than those 2 hits in 20 minutes.
Clever.
That’s my kind of humor.
I might be getting old, but 500KB in a single response doesn't feel "light" to me.
500KB is horrible for RSS.
Oh the horror. I would assume the practice is encouraged by "RESTful" people?
And she posts on it a lot because she has a bunch of RSS clients pointed at her writing, because she's rather popular.
And she'd rather people writing this stuff just learn HTTP properly, at least out of professionalism, if not courtesy.
Hey, you might not, I might not, but we all choose our hills to die on.
My personal hill is "It's lollies and biscuits, not candy and cookies".
Yes, it's been invented before, as FeedBurner, which was acquired and abandoned by Google.
This often requires doing lots of tests against the endpoint, which the server prohibits.