dang · a year ago
Recent and related:

AI companies cause most of traffic on forums - https://news.ycombinator.com/item?id=42549624 - Dec 2024 (438 comments)

ericholscher · a year ago
This keeps happening -- we wrote about multiple AI bots that were hammering us over at Read the Docs for >10TB of traffic: https://about.readthedocs.com/blog/2024/07/ai-crawlers-abuse...

They really are trying to burn all their goodwill to the ground with this stuff.

PaulHoule · a year ago
In the early 2000s I was working at a place that Google wanted to crawl so bad that they gave us a hotline number to call if their crawler was giving us problems.

We were told at that time that "robots.txt" enforcement was the one thing they had that wasn't fully distributed; it's a devilishly difficult thing to implement.

It boggles my mind that people with the kind of budget some of these companies have are still struggling to implement crawling right 20 years later, though. It's nice those folks got a rebate.

One of the reasons people are testy today is that you pay by the GB with cloud providers; about 10 years ago I kicked out the sinosphere crawlers like Baidu because they were generating something like 40% of the traffic on my site, crawling it over and over again and not sending even a single referrer.

jgalt212 · a year ago
I've found Googlebot has gotten a bit wonky lately. 10X the usual crawl rate and

- they don't respect the Crawl-Delay directive

- Google Search Console reports 429s as 500s

https://developers.google.com/search/docs/crawling-indexing/...
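
For reference, Crawl-delay is set per user agent in robots.txt, roughly like the sketch below (value illustrative). Google's robots.txt documentation lists Crawl-delay as a rule it doesn't support, so Googlebot ignores it regardless:

    User-agent: *
    # Ask crawlers to wait about 10 seconds between requests.
    # Some crawlers honor this; Googlebot does not.
    Crawl-delay: 10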

TuringNYC · a year ago
Serious question - if robots.txt is not being honored, is there a risk of a class action from tens of thousands of small sites against both the companies doing the crawling and the individual directors/officers of those companies? Seems there would be some recourse if this is done at a large enough scale.
krapp · a year ago
No. robots.txt is not in any way a legally binding contract; no one is obligated to care about it.
Uptrenda · a year ago
Hey man, I wanted to say good job on Read the Docs. I use it for my Python project and find it an absolute pleasure to use. I write my stuff in reStructuredText, make lots of pretty diagrams (lol), and am slowly making my docs easier to use. Good stuff.

Edit 1: I'm surprised by the bandwidth costs. I use Hetzner and OVH and the bandwidth is free, though you manage the bare metal server yourself. Would Read the Docs ever consider switching to self-managed hosting to save on cloud hosting costs?

huntoa · a year ago
Did I read it right that you pay $62.50/TB?
exe34 · a year ago
can you feed them gibberish?
blibble · a year ago
here's a nice project to automate this: https://marcusb.org/hacks/quixotic.html

couple of lines in your nginx/apache config and off you go
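
a rough sketch of the nginx side (the user-agent list and paths are illustrative, not from the quixotic docs; the garbled directory would hold whatever quixotic generates from the real content):

    # Sketch only: route known AI crawler user agents to a pre-garbled
    # mirror of the site instead of the real pages.
    map $http_user_agent $ai_bot {
        default                                0;
        ~*(GPTBot|ClaudeBot|CCBot|Bytespider)  1;
    }

    server {
        listen 80;
        root /var/www/site;

        location / {
            if ($ai_bot) {
                rewrite ^(.*)$ /garbled$1 last;
            }
            try_files $uri $uri/ =404;
        }

        location /garbled/ {
            internal;                         # only reachable via the rewrite above
            alias /var/www/site-garbled/;     # pre-garbled copy of the content
        }
    }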

my content-rich sites provide this "high quality" data to the parasites

Groxx · a year ago
LLMs poisoned by https://git-man-page-generator.lokaltog.net/ -like content would be a hilarious end result, please do!
jcpham2 · a year ago
This would be my elegant solution: something like an endless recursion with a gzip bomb at the end if I can identify your crawler and it’s that abusive. Would it be possible to feed an abusive crawler nothing but my own locally-hosted LLM gibberish?
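
A minimal sketch of the gzip-bomb half (sizes illustrative; the payload would be served with "Content-Encoding: gzip" only to an identified abusive crawler):

    import gzip

    # ~100 MB of zero bytes compresses to roughly 100 KB, so it's cheap to
    # send but expensive for a naive crawler to inflate and parse.
    payload = gzip.compress(b"\x00" * (100 * 1024 * 1024), compresslevel=9)
    print(f"{len(payload)} bytes on the wire")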

But then again if you’re in the cloud egress bandwidth is going to cost for playing this game.

Better to just deny the OpenAI crawler and send them an invoice for the money and time they’ve wasted. It’s an interesting form of data warfare against competitors and non-competitors alike. The winner will have the longest runway.

GaggiX · a year ago
The dataset is curated, very likely with a previously trained model, so gibberish is not going to do anything.


joelkoen · a year ago
> “OpenAI used 600 IPs to scrape data, and we are still analyzing logs from last week, perhaps it’s way more,” he said of the IP addresses the bot used to attempt to consume his site.

The IP addresses in the screenshot are all owned by Cloudflare, meaning that their server logs are only recording the IPs of Cloudflare's reverse proxy, not the real client IPs.
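
(For what it's worth, a site behind Cloudflare can get real client IPs back into its logs with nginx's realip module, roughly like this; the ranges below are examples and need to come from Cloudflare's published IP list:)

    set_real_ip_from 173.245.48.0/20;    # example Cloudflare range
    set_real_ip_from 103.21.244.0/22;    # example Cloudflare range
    real_ip_header CF-Connecting-IP;     # header Cloudflare sets to the real client IP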

Also, the logs don't show any timestamps and there doesn't seem to be any mention of the request rate in the whole article.

I'm not trying to defend OpenAI, but as someone who scrapes data I think it's unfair to throw around terms like "DDoS attack" without providing basic request rate metrics. This seems to be based purely on the use of multiple IPs, which was actually caused by their own server configuration and has nothing to do with OpenAI.

mvdtnz · a year ago
Why should web store operators have to be so sophisticated to use the exact right technical language in order to have a legitimate grievance?

How about this: these folks put up a website in order to serve customers, not for OpenAI to scoop up all their data for their own benefit. In my opinion data should only be made available to "AI" companies on an opt-in basis, but given today's reality OpenAI should at least be polite about how they harvest data.

griomnib · a year ago
I’ve been a web developer for decades, as well as doing scraping, indexing, and analysis of millions of sites.

Just follow the golden rule: don’t ever load any site more aggressively than you would want yours to be.

This isn’t hard stuff, and these AI companies have grossly inefficient and obnoxious scrapers.

As a site owner this pisses me off as a matter of decency on the web, but as an engineer doing distributed data collection I’m offended by how shitty and inefficient their crawlers are.
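
In code, the golden rule isn’t much more than this stdlib-only sketch (names, URLs, and the delay are illustrative):

    import time
    import urllib.request
    import urllib.robotparser

    # A polite crawler: identify yourself, honor robots.txt, and keep a
    # fixed delay between requests.
    USER_AGENT = "ExampleCrawler/0.1 (+https://example.com/bot-info)"
    DELAY_SECONDS = 5

    rp = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
    rp.read()

    for url in ["https://example.com/"]:   # whatever pages you actually need
        if not rp.can_fetch(USER_AGENT, url):
            continue                       # the site said no; respect that
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
        time.sleep(DELAY_SECONDS)          # never hit a site harder than you'd want yours hit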

PaulHoule · a year ago
I worked at one place where it probably cost us 100x more (in CPU) to serve content the way we were doing it than the way most people would do it. We could afford it because it was still cheap, but we deferred the cost reduction work for half a decade and went to war against webcrawlers instead. (Hint: who introduced the robots.txt standard?)
add-sub-mul-div · a year ago
These people think they're on the verge of the most important invention in modern history. Etiquette means nothing to them. They would probably consider an impediment to their work a harm to the human race.
krapp · a year ago
>They would probably consider an impediment to their work a harm to the human race.

They do. Marc Andreessen said as much in his "techno-optimist manifesto": any hesitation or slowdown in AI development or adoption is equivalent to mass murder.

griomnib · a year ago
Yeah but it’s just shit engineering. They re-crawl entire sites basically continuously absent any updates or changes. How hard is it to cache a fucking sitemap for a week?

It’s a waste of bandwidth and CPU on their end as well; “the bitter lesson” isn’t “keep duplicating the same training data”.
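
Caching the sitemap and only refetching what changed is a handful of lines; a sketch (the URL and one-week window are illustrative):

    import urllib.request
    import xml.etree.ElementTree as ET
    from datetime import date, timedelta

    # Read sitemap.xml and only revisit URLs whose <lastmod> changed in the
    # past week, instead of re-crawling the whole site continuously.
    SITEMAP_URL = "https://example.com/sitemap.xml"   # placeholder
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    with urllib.request.urlopen(SITEMAP_URL) as resp:
        tree = ET.parse(resp)

    cutoff = date.today() - timedelta(days=7)
    for entry in tree.findall("sm:url", NS):
        loc = entry.findtext("sm:loc", namespaces=NS)
        lastmod = entry.findtext("sm:lastmod", namespaces=NS)
        if lastmod and date.fromisoformat(lastmod[:10]) >= cutoff:
            print("changed recently, worth refetching:", loc)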

I’m glad DeepSeek is showing how inefficient and dogshit most frontier model engineering is - how much VC money is getting burned literally re-downloading a copy of the entire web daily when like <1% of it is new data.

I get they have no shame economically, that they are deluded and greedy. But bad engineering is another class of sin!

mingabunga · a year ago
We've had to block a lot of these bots because they slowed our technical forum to a crawl, but new ones appear every now and again. Amazon's was the worst.
griomnib · a year ago
I really wonder if these dogshit scrapers are wholly built by LLM. Nobody competent codes like this.
jonas21 · a year ago
It's "robots.txt", not "robot.txt". I'm not just nitpicking -- it's a clear signal the journalist has no idea what they're talking about.

That and the fact that they're using a log file with the timestamps omitted as evidence of "how ruthlessly an OpenAI bot was accessing the site" makes the claims in the article a bit suspect.

OpenAI isn't necessarily in the clear here, but this is a low-quality article that doesn't provide much signal either way.

ted_bunny · a year ago
The best way to tell a journalist doesn't know their subject matter: check if they're a journalist.
peterldowns · a year ago
Well said, I agree with you.
ted_bunny · a year ago
I wish they'd add a little arrow you could click.
Thoreandan · a year ago
Hear hear. Poor article going out the door for publication with zero editorial checking.
joelkoen · a year ago
Haha yeah just noticed they call Bytespider "TokTok's crawler" too
spwa4 · a year ago
It's funny how history repeats. The web originally grew because it was a way to get "an API" into a company. You could get information without a phone call. Then, with forms and credit cards and eventually with actual APIs, you could not just get information but get companies to do stuff for you. For a short while this was possible.

Now everybody calls this abuse. And a lot of it is abuse, to be fair.

Now that has been mostly blocked. Every website tries really hard to block bots (and mostly fails, because Google funds its crawler with millions of dollars while companies raise a stink over paying a single SWE), but we're still at the point where automated interactions with companies (through third-party services, for example) are not really possible. I cannot give my credit card info to a company and have it order my favorite foods to my home every day, for example.

What AI promises, in a way, is to re-enable this, because AI bots are unblockable (they're more human than humans as far as these tests are concerned) - for companies, and for users. And that would be a way to ... put APIs into people and companies again.

Back to step 1.

afavour · a year ago
I see it as different history repeating: VC capital inserting itself as the middleman between people and things they want. If all of our interactions with external web sites now go through ChatGPT that gives OpenAI a phenomenal amount of power. Just like Google did with search.
spwa4 · a year ago
Well, it's not just that. Every company insists on doing things differently and usually in annoying ways. Having a way to deal with companies while avoiding their internal policies (e.g. upselling, "retention team", ...) would be very nice.

Yes, VCs want this because it's an opportunity for a double-sided marketplace, but I still want it too.

I wonder to what extent what these FANG businesses want from AI can be described as just "an API into businesses that don't want to provide an API".

PaulHoule · a year ago
The first time I heard this story it was '98 or so, and the perp was somebody in the overfunded CS department and the victim somebody in the underfunded math department on the other side of a short, fat pipe. (Probably running Apache httpd on an SGI workstation without enough RAM to even run Win '95.)

In years of running webcrawlers I've had very little trouble, yet I've had more trouble in the last year than in the past 25. (I wrote my first crawler in '99; funny how my crawlers have gotten simpler over time, not more complex.)

In one case I found a site got terribly slow although I was hitting it at much less than 1 request per second. Careful observation showed the wheels were coming off the site and it had nothing to do with me.

There's another site that I've probably crawled in its entirety at least ten times over the past twenty years. I have a crawl from two years ago; my plan was to feed it into a BERT-based system, not for training but to discover content that is like the content that I like. I thought I'd get a fresh copy w/ httrack (polite, respects robots.txt, ...) and they blocked both my home IP addresses in 10 minutes. (Granted, I don't think the past 2 years of this site were as good as what came before, so I will just load what I have into my semantic search & tagging system and use that instead.)

I was angry about how unfair the Google Economy was in 2013, in line with what this blogger has been saying ever since

http://www.seobook.com/blog

(I can say it's a strange way to market an expensive SEO community but...) and it drives me up the wall that people looking in the rear view mirror are getting upset about it now.

Back in '98 I was excited about "personal webcrawlers" that could be your own web agent. On one hand, LLMs could give so much utility in terms of classification, extraction, clustering, and otherwise drinking from that firehose, but the fear that somebody is stealing their precious creativity is going to close the door forever... and entrench a completely unfair Google Economy. It makes me sad.

----

Oddly, those stupid ReCAPTCHAs and Cloudflare CAPTCHAs torment me all the time as a human, but I haven't once had them get in the way of a crawling project.

Hilift · a year ago
People who have published books recently on Amazon have noticed that fraudulent knockoff copies, with the title slightly changed, appear almost immediately. These are created by AI and are competing with the human authors. A person this happened to was recently interviewed about their experience on the BBC.