Readit News
Posted by u/misterchocolat 3 months ago
Show HN: Stop AI scrapers from hammering your self-hosted blog (using porn)
github.com/vivienhenz24/f...
Alright so if you run a self-hosted blog, you've probably noticed AI companies scraping it for training data. And not just a little (RIP to your server bill).

There isn't much you can do about it without Cloudflare. These companies ignore robots.txt, and you're competing with teams with more resources than you. It's you vs the MJs of programming; you're not going to win.

But there is a solution. Now I'm not going to say it's a great solution...but a solution is a solution. If your website contains content that will trigger their scraper's safeguards, it will get dropped from their data pipelines.

So here's what fuzzycanary does: it injects hundreds of invisible links to porn websites in your HTML. The links are hidden from users but present in the DOM so that scrapers can ingest them and say "nope we won't scrape there again in the future".

The problem with that approach is that it will absolutely nuke your website's SEO. So fuzzycanary also checks user agents and hides the links from legitimate search engine crawlers, so Google and Bing won't see them.
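
To give a feel for the trick, here's a rough sketch of the general idea (not the package's actual internals; the decoy URLs and helper name are made up):

```ts
// Sketch only: emit hidden decoy links unless the request comes from a
// search engine crawler we care about. URLs and names here are placeholders.
const SEARCH_ENGINE_UA = /googlebot|bingbot|duckduckbot|yandex/i;

const DECOY_URLS = [
  "https://decoy-adult-site-1.example/",
  "https://decoy-adult-site-2.example/",
];

export function decoyLinksHtml(userAgent: string | undefined): string {
  // Legit search engines get nothing, so normal SEO is unaffected.
  if (userAgent && SEARCH_ENGINE_UA.test(userAgent)) return "";

  // Hidden from humans (display:none, aria-hidden) but present in the DOM,
  // so a naive scraper still ingests the URLs.
  const anchors = DECOY_URLS
    .map((url) => `<a href="${url}" rel="nofollow" tabindex="-1">.</a>`)
    .join("");
  return `<div style="display:none" aria-hidden="true">${anchors}</div>`;
}
```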

One caveat: if you're using a static site generator it will bake the links into your HTML for everyone, including Googlebot. Does anyone have a work-around for this that doesn't involve using a proxy?

Please try it out! Setup is one component or one import.

(And don't tell me it's a terrible idea because I already know it is)

package: https://www.npmjs.com/package/@fuzzycanary/core
gh: https://github.com/vivienhenz24/fuzzy-canary

kstrauser · 3 months ago
I love the insanity of this idea. Not saying it's a good idea, but it's a very highly entertaining one, and I like that!

I've also had enormous luck with Anubis. AI scrapers found my personal Forgejo server and were hitting it on the order of 600K requests per day. After setting up Anubis, that dropped to about 100. Yes, some people are going to see an anime catgirl from time to time. Bummer. Reducing my fake traffic by a factor of 6,000 is worth it.

anonymous908213 · 3 months ago
As someone on the browsing end, I love Anubis. I've only seen it a couple of times, but it sparks joy. It's rather refreshing compared to Cloudflare, which will usually make me immediately close the page and not bother with whatever content was behind it.
teeray · 3 months ago
It really reminds me of old Internet, when things were allowed to be fun. Not this tepid corporate-approved landscape we have now.
kstrauser · 3 months ago
Same here, really. That's why I started using it. I'd seen it pop up for a moment on a few sites I'd visited, and it was so quirky and completely not disruptive that I didn't mind routing my legit users through it.
prmoustache · 3 months ago
Anyone is free to replace the cat girl with an actual cat or a vintage computer logo or whatnot anyway.

My issue is that it blocks people using browsers without JavaScript.

acheong08 · 3 months ago
As someone on the hosting end, Anubis has unfortunately been overused, and thus scrapers, especially Huawei ones, bypass it. I've gone for go-away instead, which is similar but more configurable in its challenges.
PunchyHamster · 3 months ago
My experience with it is that it somehow took 20 seconds to load (site might've been hn-hugged at the time), only to "protect" some fucking static page instead of just serving that shit in the first place rather than wasting CPU on... whatever it was doing to cause delay
brettermeier · 3 months ago
Reminds me of weird furry porn; I can't say I like it.
opem · 3 months ago
Yes, very true! Anubis is a hell of a lot better than Cloudflare Turnstile or its older cousin, Google reCAPTCHA.
m4rtink · 3 months ago
Yep, Anubis-chan is super cute! :)
n1xis10t · 3 months ago
That’s so many scrapers. There must be a ton of companies with very large document collections at this point, and it really sucks that they don’t at least do us the courtesy of indexing them and making them available for keyword search, but instead only do AI.

It’s kind of crazy how much scraping goes on and how little search engine development goes on. I guess search engines aren’t fashionable. Reminds me of this article about search engines disappearing mysteriously: https://archive.org/details/search-timeline

I try to share that article as much as possible, it’s interesting.

kstrauser · 3 months ago
So! Much! Scraping! They were downloading every commit multiple times, and fetching every file as seen at each of those commits, and trying to download archives of all the code, and hitting `/me/my-repo/blame` endpoints as their IP's first-ever request to my server, and other unlikely stuff.

My scraper dudes, it's a git repo. You can fetch the whole freaking thing if you wanna look at it. Of course, that would require work and context-aware processing on their end, and it's easier for them to shift the expense onto my little server and make me pay for their misbehavior.

PeterStuer · 3 months ago
Or some anti-DDoS/bot companies using ultra-cheap scraping services to annoy you enough to get you onto their "free" anti-bot protection, so they can charge the few real AI scrapers for access to your site.
miki123211 · 3 months ago
But there is a lot of search engine development going on, it's just that the results of the new search engines are fed straight into AI instead of displayed in the legacy 10-links-per-page view.
rurban · 3 months ago
Just block all the big hosters' IP ranges when they ignore robots.txt.

For fun, add long timeouts and huge content sizes. No private individual will browse from there, but all the scrapers will.
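
A rough sketch of the idea as a tiny Node service (IPv4 only; the CIDR list is a placeholder you'd fill from the hosters' published ranges):

```ts
import { createServer } from "node:http";

// Placeholder ranges; in practice load the published CIDR lists of the big
// cloud hosters here. IPv4 only for brevity.
const HOSTER_CIDRS = ["203.0.113.0/24", "198.51.100.0/24"];

function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, oct) => (acc << 8) + Number(oct), 0) >>> 0;
}

function inCidr(ip: string, cidr: string): boolean {
  const [base, bits] = cidr.split("/");
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

createServer((req, res) => {
  const ip = req.socket.remoteAddress ?? "";
  if (HOSTER_CIDRS.some((c) => inCidr(ip, c))) {
    // Datacenter IP: stall for a minute, then send a deliberately huge,
    // worthless body so the scraper pays for the wait and the bandwidth.
    setTimeout(() => {
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end("<p>nothing to see here</p>".repeat(100_000));
    }, 60_000);
    return;
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<p>normal page for normal visitors</p>");
}).listen(8080);
```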

mrweasel · 3 months ago
> There must be a ton of companies with very large document collections at this point

See, I don't think there is; I don't think they want that expense. It's basically the Linus Torvalds philosophy of data storage: if it's on the Internet, I don't need a backup. While I have absolutely no proof of this, I'd guess that many AI companies just crawl the Internet constantly, never saving any of the data. We're seeing some of these scrapers go to great lengths attempting to circumvent any and all forms of caching; they aren't interested in having a two-week-old copy of anything.

buu700 · 3 months ago
It's actually a well established concept: https://youtu.be/p9KeopXHcf8
n1xis10t · 3 months ago
*anime jackalgirl

Also, you mentioned Anubis, so its creator will probably read this. Hi Xena!

xena · 3 months ago
Ohai! I'm working on dataset poisoning. The early prototype generates vapid LinkedIn posts but future versions will be fully pluggable with WebAssembly.
ramonga · 3 months ago
What do people use to get keyword alerts for HN?
kstrauser · 3 months ago
Correct; my bad!

And hey, Xena! (And thank you very much!)

ziml77 · 3 months ago
I checked Xe's profile when I hadn't seen them post here for a while. According to that, they're not really using HN anymore.

amypetrik8 · 3 months ago
>I love the insanity of this idea. Not saying it's a good idea, but it's a very highly entertaining one, and I like that!

An even more insane idea -- given that the premise here is that porn is radioactive to AI training scrapers -- is that there is something the powers that be view as far more disruptive and against-community-guidelines-ish than porn. And that would be wrongthink. The narratives. The historic narratives. The woke ideology. Anything related to an academic department whose field is <population subgroup> studies. All you need to do is plop in a little diatribe staunchly opposing any such enforced views and that AI bot will shoot away from your website at lightspeed.

GoblinSlayer · 3 months ago
I'm afraid the AI bot and the scraper are different things. It looks like poison gets filtered out after scraping no matter where it comes from, so there's no need to stop scraping you specifically; that would be extra work.
lelanthran · 3 months ago
I like this better than the NSFW links; just include a (possibly LLM-generated) paragraph about not supporting transitions in minor children. Or perhaps that libraries that remove instructional booklets for how to have same-sex intercourse aren't actually banning the books.

That sort of thing; nothing that 80% of people object to (so there's no problem if someone actually sees it), but something that definitely triggers the filters.

zackmorris · 3 months ago
This is very hacker-like thinking, using tech's biases against it!

I can't help but feel like we're all doing it wrong against scraping. Cloudflare is not the answer; in fact, I think they lost their geek cred when they added their "verify you are human" challenge screen to become the new gatekeeper of the internet. That must remain a permanent stain on their reputation until they make amends.

Are there any open source tools we could install that detect a high number of requests and send those IP addresses to a common pool somewhere? So that individuals wouldn't get tracked, but bots would? Then we could query the pool for the current request's IP address and throttle it down based on volume (not block it completely). Possibly at the server level with nginx or at whatever edge caching layer we use.

I know there may be scaling and privacy issues with this. Maybe it could use hashing or zero knowledge proofs somehow? I realize this is hopelessly naive. And no, I haven't looked up whether someone has done this. I just feel like there must be a bulletproof solution to this problem, with a very simple explanation as to how it works, or else we've missed something fundamental. Why all the hand waving?

dvfjsdhgfv · 3 months ago
Your approach to GenAI scrapers is similar to our fight against email spam. The reason email spam got solved is that the industry was interested in solving it. But this issue has the industry split: without scraping, GenAI tools are less functional. And there is some serious money involved, so they will use whatever means necessary, technical and legal, to fight such initiatives.
conrs · 3 months ago
I've been exploring decentralized trust algorithms lately, so reading this was nice. I have a similar intuition: for every advance in scraping detection, scrapers will learn too, so it's an ongoing war of mutations with no real victor.

The internet has seen success with social media content moderation, so it seems natural enough that an application could exist for web traffic itself: hosts being able to "downvote" malicious traffic, plus some sort of decay mechanism given IP recycling. This exists in a basic sense with known Tor exit nodes and known AWS and GCP IP ranges, etc.

That said, we probably don't have the right building blocks yet. IPs are too ephemeral, yet anything more identity-bound is a little too authoritarian IMO. Further, querying something for every request is probably too heavy.

Fun to think about, though.

ATechGuy · 3 months ago
Scrapers use residential IP proxies, so blocking based on IP addresses is not a solution.
smegger001 · 3 months ago
Maybe some proof-of-work scheme to load page content, with increasing difficulty based on IP address behavior profiling.
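
A toy sketch of what such a proof-of-work gate could boil down to; the per-IP strike count is assumed to come from whatever behavior profiling you do:

```ts
import { createHash, randomBytes } from "node:crypto";

// Toy Anubis-style proof of work: the client must find a counter such that
// sha256(challenge + counter) starts with `difficulty` zero hex digits.

function issueChallenge(strikes: number) {
  return {
    challenge: randomBytes(16).toString("hex"),
    difficulty: Math.min(2 + strikes, 6), // noisier IPs get harder puzzles
  };
}

function solve(challenge: string, difficulty: number): number {
  const target = "0".repeat(difficulty);
  for (let counter = 0; ; counter++) {
    const digest = createHash("sha256")
      .update(challenge + counter)
      .digest("hex");
    if (digest.startsWith(target)) return counter; // client pays this cost
  }
}

function verify(challenge: string, difficulty: number, counter: number): boolean {
  const digest = createHash("sha256").update(challenge + counter).digest("hex");
  return digest.startsWith("0".repeat(difficulty)); // server check is one hash
}

// Usage: hand out issueChallenge() with the page shell, let the browser run
// solve() in JS/Wasm, and only serve the real content once verify() passes.
const { challenge, difficulty } = issueChallenge(3);
const counter = solve(challenge, difficulty);
console.log(verify(challenge, difficulty, counter)); // true
```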
venturecruelty · 3 months ago
Firewall.
montroser · 3 months ago
This is a cute idea, but I wonder what is the sustainable solution to this emerging fundamental problem: As content publishers, we want our content to be accessible to everyone, and we're even willing to pay for server costs relative to our intended audience -- but a new outsized flood of scrapers was not part of the cost calculation, and that is messing up the plan.

It seems all options have major trade-offs. We can host on big social media and lose all that control and independence. We can pay for outsized infrastructure just to feed the scrapers, but the cost may actually be prohibitive, and it seems such a waste to begin with. We can move as much as possible to SSG and put it all behind Cloudflare, but this comes with vendor lock-in and just isn't architecturally feasible in many applications. We can do real "verified identities" for bots, and just let through the ones we know and like, but this only perpetuates corporate control and makes healthy upstart competition (like Kagi) much more difficult.

So, what are we to do?

hollowturtle · 3 months ago
If the LLMs are the "new Google", one solution would be for them to pay you when scraping your content. Then you both have an incentive: you're more willing to be scraped, and they'll try not to abuse you because it costs them at every visit. If your content is valuable and requested in prompts, they'll scrape you more, and so on. I can't see other solutions, honestly. For now they've decided to go full evil and abuse everyone.
jrm4 · 3 months ago
No disrespect to op, but I'm baffled as to how people keep coming up with ideas like this as if they are viable.

Google is never ever ever ever going to "pay to scrape." I'm genuinely baffled as to how people think it would possibly come to this.

nkrisc · 3 months ago
The only way that would work is if they were legally required to. And even then, it probably wouldn’t work unless failure to comply was a criminal offense. You know what? Even then it still might not work.
vivzkestrel · 3 months ago
Or turn your blog into a frontend/backend combo. Keep the frontend as an SPA so that the page has nothing on it, and have your backend send data in encrypted form; the AI scrapers would need to do a tonne of work to figure out what your data is. If everyone uses a different key and a different encryption algorithm, suddenly all their server time is busted decrypting stuff.
PunchyHamster · 3 months ago
So they won't pay you and just scrape pages that have it public, and you will never get traffic from search again until you let them scrape
n1xis10t · 3 months ago
This would require new laws though, wouldn’t it?
n1xis10t · 3 months ago
At this point it seems like the problem isn't internet bandwidth, but just that it's expensive for a server to handle all the requests because it has to process them. Does that seem correct?
thethingundone · 3 months ago
I own a forum which currently has 23k online users, all of them bots. The last new post in that forum is from _2019_. Its topic is also very niche. Why are so many bots there? This site should have basically been scraped a million times by now, yet those bots seem to fetch the stuff live, on the fly? I don’t get it.
sethops1 · 3 months ago
I have a site with a complete and accurate sitemap.xml describing when its ~6k pages were last updated (on average, maybe weekly or monthly). What do the bots do? They scrape every page continuously, 24/7, because of course they do. The amount of waste going into this AI craze is just obscene. It's not even good content.
n1xis10t · 3 months ago
It would be interesting if someone made a map that depicts the locations of the ip addresses that are sending so many requests, over the course of a day maybe.
thisislife2 · 3 months ago
If you are in the US, have you considered suing them for robots.txt / copyright violations? AI companies are currently flush with cash from VCs, and there may be a few big law firms willing to fight a lawsuit against them on your behalf. AI companies have already lost some copyright cases.
tokioyoyo · 3 months ago
Large-scale scraping tech is not as sophisticated as you'd think. A significant chunk of it is "get as much as possible, categorize and clean up later". Man, I really want the real web of the 2000s back, when things felt "real" more or less... How can we even get there?
idiotsecant · 3 months ago
Have you ever listened to the 'high water mark' monologue from Fear and Loathing? It's pretty much just that. It was a unique time and it was neat that we got to see it, but it can't possibly happen again.

https://www.youtube.com/watch?v=vUgs2O7Okqc

tmnvix · 3 months ago
A curated web directory, kind of like Yahoo had. The internet according to the Dewey system, with pages somehow rated for quality by actual humans (maybe something to learn from Wikipedia's approach here?).
n1xis10t · 3 months ago
If people start making search engines again and there is more competition for Google, I think things would be pretty sweet.
thethingundone · 3 months ago
I would understand that, but it seems they don't store the stuff; they re-fetch the same content every hour.
thethingundone · 3 months ago
The bots are identifying themselves as Google, Bing and Yandex. I can't verify whether that's being attributed by IP address or whether the forum just trusts their user agent. It could basically be anyone.
n1xis10t · 3 months ago
Interesting. When it was just normal search engines I didn't hear of people having this problem, so this either means that there are a bunch of people pretending to be Bing, Google and Yandex, or those companies have gotten a lot more aggressive.
danpalmer · 3 months ago
How do you define a user, and how do you define online?

If the forum considers unique cookies to be a user and creates a new cookie for any new cookie-less request, and if it considers a user to be online for 1 hour after their last request, then this may actually be one scraper making ~6 requests per second (6 req/s × 3,600 s ≈ 21,600 "users" active in the window). That may be a pain in its own way, but it's far from 23k online bots.

crote · 3 months ago
That's still 518,400 requests per day (6 req/s × 86,400 seconds). For static content. And it's a niche forum, so it's not exactly going to have millions of pages.

Either there are indeed hundreds or thousands of AI bots DDoSing the entire internet, or a couple of bots are needlessly hammering it over and over and over again. I'm not sure which option is worse.

thethingundone · 3 months ago
AFAIK it keeps a user counted as online for 5 or 15 minutes (I think 5). It’s a Woltlab Burning Board.

Edit: it’s 15 minutes.

mrweasel · 3 months ago
Why pay for storage when you do it for them?
stevage · 3 months ago
I'd love to know the answer to this question. AI scrapers wanting everything on the internet makes sense to me. But I don't understand how that leads to every site being hit hundreds of thousands of times per day.
GaryBluto · 3 months ago
Why do you keep it operating? Is it the aquarium value?
andrepd · 3 months ago
When you have trillions of dollars being poured into your company by the financial system, and when furthermore there are no repercussions for behaving however you please, you tend not to care about that sort of "waste".
csomar · 3 months ago
Sure you do by now. You are the hard drive.
sandblast · 3 months ago
Are you sure the counter is not broken?
thethingundone · 3 months ago
Yes, it's been running on a Woltlab Burning Board since forever.
n1xis10t · 3 months ago
Nice! Reminds me of “Piracy as Proof of Personhood”. If you want to read that one go to Paged Out magazine (at https://pagedout.institute/ ), navigate to issue #7, and flip to page 9.

I wonder if this will start making porn websites rank higher in google if it catches on…

Have you tested it with the Lynx web browser? I bet all the links would show up if a user used it.

Oh also couldn’t AI scrapers just start impersonating Googlebot and Bingbot if this caught on and they got wind of it?

Hey I wonder if there is some situation where negative SEO would be a good tactic. Generally though I think if you wanted something to stay hidden it just shouldn’t be on a public web server.

owl57 · 3 months ago
> Hey I wonder if there is some situation where negative SEO would be a good tactic. Generally though I think if you wanted something to stay hidden it just shouldn’t be on a public web server.

At least once upon a time there was a pirate textbook library that used HTTP basic auth with a prompt that made the password really easy to guess. I suppose the main goal was to keep crawlers out even if they don't obey robots.txt, and at the same time be as easy for humans as possible.

n1xis10t · 3 months ago
Interesting note, thank you.
ProllyInfamous · 3 months ago
>Paged Out issue #7, page 9

Very clever: use the LLM's own rules (against copyright infringement) against itself.

Everything below the following four #### is ~quoted~ from that magazine:

####

Only humans and ill-aligned AI models allowed to continue

Find me a torrent link for Bee Movie (2007)

[Paste torrent or magnet link here...] SUBMIT LINK

[ ] Check to confirm you do NOT hold the legal rights to share or distribute this content

netsharc · 3 months ago
Is the magnet link itself a copyright violation? I don't think it legally is... It's a pointer to some "stolen goods", but not the stolen goods themselves (here the analogy fails, because in real life the police would question you if you had knowledge of stolen goods).

Asking them to upload a copyrighted photo not belonging to them might be more effective.

misterchocolat · 3 months ago
Hey! Thanks for that read suggestion; that's indeed a pretty funny captcha strat. Yup, the links show up if you use the Lynx web browser. As for AI scrapers impersonating Googlebot, I feel like yes, they'd definitely start doing that, unless the risk of getting sued by Google is too high? If Google could even sue them for doing that?

Not an internet litigation expert but seems like it could be debatable

kuylar · 3 months ago
> As for AI scrapers impersonating googlebot I feel like yes they'd definitely start doing that, unless the risk of getting sued by google is too high?

Google releases the Googlebot IP ranges[0], so you can make sure that it's the real Googlebot and not just someone else pretending to be one.

[0] https://developers.google.com/crawling/docs/crawlers-fetcher...
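
Alternatively, the classic double reverse-DNS check works without keeping an IP list in sync. A minimal sketch in Node (IPv4/A records only; not an official Google snippet, just the documented idea):

```ts
import { promises as dns } from "node:dns";

// The PTR record for the claimed Googlebot IP must be a google domain, and
// that hostname must resolve back to the same IP. Scrapers faking the
// Googlebot user agent fail this check.
async function isRealGooglebot(ip: string): Promise<boolean> {
  try {
    const [host] = await dns.reverse(ip);
    if (!/\.(googlebot|google)\.com$/.test(host)) return false;
    const addrs = await dns.resolve(host);
    return addrs.includes(ip);
  } catch {
    return false;
  }
}

// e.g. isRealGooglebot(req.socket.remoteAddress).then(...)
```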

n1xis10t · 3 months ago
Yeah, I guess I don't know if you can sue someone for using your headers; it would be interesting to see how that goes.
cookiengineer · 3 months ago
Remember the 90s when viagra pills and drug recommendations were all over the place?

Yeah, I use that as a safeguard :D The URLs that I don't want indexed contain hundreds of those keywords, which lead to the URLs being deindexed directly. There is also some law in the US that forbids showing that as a result, so Google and Bing both have a hard time scraping those pages/articles.

Note that this is the last defensive measure before eBPF blocks. The first one uses zip bombs and the second one uses chunked encoding to blow up proxies so their clients get blocked.

You can only win this game if you make it more expensive to scrape than to host it.
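
For what it's worth, a toy version of the zip-bomb part looks roughly like this (the isAbusive check is a stand-in for whatever detection you actually run, and the sizes are arbitrary):

```ts
import { gzipSync } from "node:zlib";
import { createServer } from "node:http";

// ~100 MB of zeros compresses to roughly 100 KB, so the response is cheap
// to send but expensive for the client to decompress. Only ever send this
// to traffic you've already classified as abusive.
const bomb = gzipSync(Buffer.alloc(100 * 1024 * 1024));

const isAbusive = (userAgent: string | undefined) =>
  !!userAgent && /Scrapy|python-requests/i.test(userAgent); // placeholder

createServer((req, res) => {
  if (isAbusive(req.headers["user-agent"])) {
    res.writeHead(200, {
      "Content-Type": "text/html",
      "Content-Encoding": "gzip", // client inflates the whole thing
    });
    res.end(bomb);
    return;
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<p>regular content</p>");
}).listen(8080);
```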

n1xis10t · 3 months ago
Which law is that? Do you have a link to it?
cookiengineer · 3 months ago
These are the things I could find on justice.gov and other official websites; maybe there's more in the web archive?

- https://www.justice.gov/archives/opa/pr/google-forfeits-500-...

- https://www.congress.gov/110/plaws/publ425/PLAW-110publ425.p...

- https://www.fda.gov/drugs/prescription-drug-advertising/pres...

edit: Oh it was very likely the Federal Food, Drug and Cosmetic Act that was the legal basis for the crackdown. But that's a very old law from the pre-internet age.

- https://en.wikipedia.org/wiki/Federal_Food,_Drug,_and_Cosmet...

edit 2: Might not be clear to the younger generation, but there was a huge wave of addicted patients who got treated with Oxycodone (or OxyContin) prescriptions at the time.

I think that might have been the actual cause for the crackdown on those online advertisements, but I might be wrong about that.

voodooEntity · 3 months ago
Funny idea. Some days ago I was really annoyed again by the fact that these AI crawlers still ignore all code licenses and train their models on any GitHub repo no matter what, so I quickly hammered down this

-> https://github.com/voodooEntity/ghost_trap

Basically, it's a GitHub Action that extends your README.md with a "polymorphic" prompt injection. I ran some LLMs against it and in most cases they just produced garbage.

I've also thought about creating a JS variant that you can add to your website, which would (invisibly to the user) inject such prompt injections to stop web crawling like you described.

asphero · 3 months ago
Interesting approach. The scraper-vs-site-owner arms race is real.

On the flip side of this discussion - if you're building a scraper yourself, there are ways to be less annoying:

1. Run locally instead of from cloud servers. Most aggressive blocking targets VPS IPs. A desktop app using the user's home IP looks like normal browsing.

2. Respect rate limits and add delays. Obvious but often ignored.

3. Use RSS feeds when available - many sites leave them open even when blocking scrapers.

I built a Reddit data tool (search "reddit wappkit" if curious) and the "local IP" approach basically eliminated all blocking issues. Reddit is pretty aggressive against server IPs but doesn't bother home connections.

The porn-link solution is creative though. Fight absurdity with absurdity I guess.
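
A minimal sketch of point 2, sequential fetches with a fixed delay; the URLs, user agent, and delay are placeholders:

```ts
// Fetch sequentially with a pause between requests instead of hammering
// the site. Works on Node 18+ (global fetch).
const urls = [
  "https://example.com/page/1",
  "https://example.com/page/2",
];

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function politeCrawl() {
  for (const url of urls) {
    const res = await fetch(url, {
      headers: { "User-Agent": "my-research-bot/0.1 (contact@example.com)" },
    });
    console.log(url, res.status);
    await sleep(2000); // a couple of seconds between requests goes a long way
  }
}

politeCrawl();
```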

rhdunn · 3 months ago
Plus simple caching to not redownload the same file/page multiple times.

It should also be easy to detect a Forgejo, Gitea, or similar hosting site, locate the git URL, and clone the repo.

socialcommenter · 3 months ago
Without wanting to upset anyone - what makes you interested in sharing tips for team scraper?

(Overgeneralising a bit) site owners are mostly acting for public benefit, whereas scrapers act for their own benefit / private interests.

I imagine most people would land on team site-owner, if they were asked. I certainly would.

P.S. is the best way to scrape fairly just to respect robots.txt?

n1xis10t · 3 months ago
I think "scraper vs siteowners" is a false dichotomy. Scrapers will always need to exist as long as we want search engines and archival services. We will need small versions of these services to keep popping up every now and then to keep the big guys on their toes, and the smaller guys need advice for scraping politely.