mrb · 5 months ago
"There’s ways to get around TLS signatures but it’s much harder and requires a lot more legwork to get working"

I wouldn't call it "much harder". All you need to bypass the signature is to choose random ciphers (list at https://curl.se/docs/ssl-ciphers.html) and mash them up in a random order, separated by colons, in curl's --ciphers option. If you pick 15 different ciphers in a random order, there are over a trillion possible signatures, far too many for him to block. For example, this works:

  $ curl --ciphers AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA:... https://foxmoss.com/blog/packet-filtering/
But, yes, most bots don't bother randomizing ciphers so most will be blocked.
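A minimal sketch of the idea in Python (the cipher names below are a small hand-picked subset of curl's documented OpenSSL names; a real script would load the full list):

```python
import random

# A small subset of the OpenSSL cipher names curl accepts
# (full list: https://curl.se/docs/ssl-ciphers.html)
CIPHERS = [
    "AES256-GCM-SHA384", "AES128-GCM-SHA256", "AES256-SHA", "AES128-SHA",
    "AES256-SHA256", "AES128-SHA256",
    "ECDHE-RSA-AES256-GCM-SHA384", "ECDHE-RSA-AES128-GCM-SHA256",
    "ECDHE-ECDSA-AES256-GCM-SHA384", "ECDHE-ECDSA-AES128-GCM-SHA256",
    "ECDHE-RSA-AES256-SHA384", "ECDHE-RSA-AES128-SHA256",
    "ECDHE-RSA-CHACHA20-POLY1305", "ECDHE-ECDSA-CHACHA20-POLY1305",
    "DHE-RSA-AES256-GCM-SHA384", "DHE-RSA-AES128-GCM-SHA256",
]

def random_cipher_arg(n=15):
    # Draw n ciphers without replacement; ordered draws of 15 from even
    # this 16-entry list give 16!/1! ~ 2e13 distinct --ciphers strings.
    return ":".join(random.sample(CIPHERS, n))

print("curl --ciphers", random_cipher_arg(),
      "https://foxmoss.com/blog/packet-filtering/")
```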

ospider · 5 months ago
It can be much easier, and more realistic, with https://github.com/lexiforest/curl-impersonate.
halJordan · 5 months ago
This works for the ten minute period it takes to switch from a blacklist to a whitelist
benatkin · 5 months ago
> NOTE: Due to many WAFs employing JavaScript-level fingerprinting of web browsers, thermoptic also exposes hooks to utilize the browser for key steps of the scraping process. See this section for more information on this.

This reminds me of how Stripe does user tracking for fraud detection: https://mtlynch.io/stripe-update/ I wonder if thermoptic could handle that.

mips_avatar · 5 months ago
Cool project!
mandatory · 5 months ago
Thanks!
joshmn · 5 months ago
Work like this is incredible. I did not know this existed. Thank you.
mandatory · 5 months ago
Thanks :) if you have any issues with it let me know.
snowe2010 · 5 months ago
People like you are why independent sites can’t afford to run on the internet anymore.
1gn15 · 5 months ago
I block all humans (only robots are allowed) and I'm still able to run independent websites.
mandatory · 5 months ago
They can't? I've run many free independent sites for years, that's news to me.
timbowhite · 5 months ago
I run independent websites and I'm not broke yet.
Symbiote · 5 months ago
Oh great /s

In a month or two, I can be annoyed when I see some vibe-coded AI startup's script making five million requests a day to my work's website with this.

They'll have been ignoring the error responses:

  {"All data is public and available for free download": "https://example.edu/very-large-001.zip"}
— a message we also write in the first line of every HTML page source.

Then I will spend more time fighting this shit, and less time improving the public data system.

mandatory · 5 months ago
Feel free to read the README; this was already an ability that startups could pay for, via private premium proxy services, before thermoptic.

Having an open source version allows regular people to do scraping and not just those rich in capital.

Many of the best data services on the internet started with scraping; the README lists several of them.

geocar · 5 months ago
Do you actually use this?

    $ md5 How\ I\ Block\ All\ 26\ Million\ Of\ Your\ Curl\ Requests.html
    MD5 (How I Block All 26 Million Of Your Curl Requests.html) = e114898baa410d15f0ff7f9f85cbcd9d

(downloaded with Safari)

    $ curl https://foxmoss.com/blog/packet-filtering/ | md5sum
    e114898baa410d15f0ff7f9f85cbcd9d  -
I'm aware of curl-impersonate https://github.com/lwthiker/curl-impersonate which works around these kinds of things (and makes working with Cloudflare much nicer), but serious scrapers use Chrome plus a USB keyboard/mouse gadget that you can ssh into, so there's literally no evidence of mechanical means.

Also: if you serve some Anubis code without actually running the Anubis script in the page, you'll still get some answers back, so there's at least one Anubis simulator running on the Internet that doesn't bother to run the JavaScript it's given.

Also also: 26M requests daily is only 300 requests per second and Apache could handle that easily over 15 years ago. Why worry about something as small as that?
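A quick sanity check on that arithmetic:

```python
requests_per_day = 26_000_000
seconds_per_day = 24 * 60 * 60  # 86,400
rps = requests_per_day / seconds_per_day
print(round(rps))  # ~301 requests per second on average
```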

mrb · 5 months ago
He does use it (I verified it from curl on a recent Linux distro). But he probably blocked only some fingerprints. And the fingerprint depends on the exact OpenSSL and curl versions, as different version combinations will send different TLS ciphers and extensions.
renegat0x0 · 5 months ago
From what I've seen, it's hard to tell what "serious scrapers" use. They use many things; some use this, some don't. That's what I've learned reading about web scraping on Reddit. Nobody says these things out loud.

There are many tools, see links below

Personally, I think running Selenium can be a bottleneck: it doesn't play nice, processes sometimes break, the system occasionally needs a restart because of blocked resources, and it can be a memory hog. That's my experience.

To scale, I think you need your own implementation. Serious scrapers complain that people using Selenium or its derivatives are noobs who come back asking why page X doesn't work with their scraping setup.

https://github.com/lexiforest/curl_cffi

https://github.com/encode/httpx

https://github.com/scrapy/scrapy

https://github.com/apify/crawlee

klaussilveira · 5 months ago
> so there's literally no evidence of mechanical means.

Keystroke dynamics and mouse movement analysis are pretty fun ways to tackle more advanced bots: https://research.roundtable.ai/proof-of-human/

But of course, it is a game of cat and mouse and there are ways to simulate it.
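Simulating that side is straightforward, which is why it stays cat and mouse. A hedged sketch of one common approach (a quadratic Bezier curve with a random control point; the function name and parameters are illustrative, not from the linked research):

```python
import random

def bezier_mouse_path(start, end, steps=30, jitter=80):
    # Quadratic Bezier from start to end, bent through one random
    # control point, sampled into discrete cursor positions to mimic
    # the curved, imperfect trajectory of a human mouse movement.
    cx = (start[0] + end[0]) / 2 + random.uniform(-jitter, jitter)
    cy = (start[1] + end[1]) / 2 + random.uniform(-jitter, jitter)
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * start[0] + 2 * (1 - t) * t * cx + t ** 2 * end[0]
        y = (1 - t) ** 2 * start[1] + 2 * (1 - t) * t * cy + t ** 2 * end[1]
        path.append((round(x), round(y)))
    return path

path = bezier_mouse_path((0, 0), (500, 300))
```

A real simulator would also vary the timing between points, since perfectly regular intervals are themselves a bot tell.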

efilife · 5 months ago
I don't think mouse movement analysis is used anywhere today, though it was reportedly used by Google's captcha about 10 years ago. It's a client-side check that can trivially be bypassed.
dancek · 5 months ago
The article talks about 26M requests per second. It's theoretical, of course.
noAnswer · 5 months ago
Not requests, packets: "And according to some benchmarks Wikipedia cites, you can drop 26 million packets per second on consumer hardware."

The number in the title is basically fantasy (not based on the author's real-life experience), as is the assumption that a DDoS is evenly distributed over 24 hours.

chlorion · 5 months ago
Claude was scraping my cgit at around 12 requests per second, in bursts here and there. My VPS could easily handle this, even as a free-tier e2-micro on Google Cloud, but they used almost 10GB of my egress bandwidth in just a few days and ended up pushing me over the free tier.

Granted it wasn't a whole lot of money spent, but why waste money and resources so "claude" can scrape the same cgit repo over and over again?

    (1) root@gentoo-server ~ # grep 'claude' /var/log/lighttpd/access.log | wc -l
    1099323

jacquesm · 5 months ago
> Also also: 26M requests daily is only 300 requests per second and Apache could handle that easily over 15 years ago. Why worry about something as small as that?

That doesn't matter, does it? Those 26 million requests could be going to actual users instead and 300 requests per second is non-trivial if the requests require backend activity. Before you know it you're spending most of your infra money on keeping other people's bots alive.

arcfour · 5 months ago
Blocking 26M bot requests doesn't mean 26M legitimate requests magically appear to take their place. The concern is that you're spending infrastructure resources serving requests that provide zero business value. Whether that matters depends on what those requests actually cost you. As the original commenter pointed out, this is likely not very much at all.
geek_at · 5 months ago
btw you also open-sourced your website

    ~$ curl https://foxmoss.com/.git/config
    [core]
        repositoryformatversion = 0
        filemode = true
        bare = false
        logallrefupdates = true
    [remote "origin"]
        url = https://github.com/FoxMoss/PersonalWebsite
        fetch = +refs/heads/*:refs/remotes/origin/*
    [branch "master"]
        remote = origin
        merge = refs/heads/master

vanyle · 5 months ago
The git repo seems to contain only the built website, with no source code.

The author is probably using git to push the content to the hosting server as an rsync alternative, so there doesn't seem to be much leaked information, apart from the URL of the private repository.

hcaz · 5 months ago
It exposed their committer email (I know it's already public on the site, but still).

You can wget the whole .git folder and look through the commit history, so if at any point something was pushed that shouldn't have been, it's available.
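The first step is just checking whether /.git/config is reachable and parses like a real git config. A minimal sketch (the helper name and heuristic are mine, not from any particular dumper tool; real tools then walk refs and object hashes to reconstruct history):

```python
def looks_like_exposed_git(body: str) -> bool:
    # Heuristic: a fetched /.git/config should open with a [core]
    # section and declare a repositoryformatversion.
    return (body.lstrip().startswith("[core]")
            and "repositoryformatversion" in body)

# Body as returned by e.g. `curl https://example.com/.git/config`
# (hypothetical host; response shape based on the config quoted above)
sample = """[core]
\trepositoryformatversion = 0
\tfilemode = true
\tbare = false
"""
print(looks_like_exposed_git(sample))  # True
```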

seba_dos1 · 5 months ago
> with tools like Anubis being largely ineffective

To the contrary - if someone "bypasses" Anubis by setting the user agent to Googlebot (or curl), it means it's effective. Every Anubis installation I've been involved with so far explicitly allowed curl. If you think it's counterproductive, you probably just don't understand why it's there in the first place.

jgalt212 · 5 months ago
If you're installing Anubis, why would you set it to let curl bypass it?
seba_dos1 · 5 months ago
The problem you usually attempt to alleviate by using Anubis is that you get hit by load from aggressive AI scrapers that are otherwise indistinguishable from real users. As soon as a bot is polite enough to identify itself as some kind of bot, the problem is gone, because you can now apply your regular measures for rate limiting and access control.

(yes, there are also people who use it as an anti-AI statement, but that's not the reason why it's used on the most high-profile installations out there)
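Those "regular measures" can be as simple as a per-identity token bucket. A minimal sketch (not Anubis code; the class, keys, and rates are made up for illustration):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Minimal token-bucket limiter: `rate` requests/second sustained,
    bursts up to `capacity`, tracked per key (e.g. per bot identity)."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = defaultdict(lambda: capacity)   # start full
        self.last = defaultdict(time.monotonic)       # last-seen time

    def allow(self, key):
        now = time.monotonic()
        # Refill tokens earned since the last request from this key
        self.tokens[key] = min(self.capacity,
                               self.tokens[key] + (now - self.last[key]) * self.rate)
        self.last[key] = now
        if self.tokens[key] >= 1:
            self.tokens[key] -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)  # 1 req/s, burst of 3
```

A self-identified crawler hitting this burns its burst in three requests and then gets throttled, while other identities are unaffected.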

coppsilgold · 5 months ago
There are also HTTP fingerprints. I believe it's named after akamai or something.

All of it is fairly easy to fake. JavaScript is the only thing that poses any challenge, and even then the challenge is doing it with minimal performance impact. The simple truth is that a motivated adversary can interrogate and match every single minor behavior of the browser bit-perfectly, and there is nothing anyone can do about it, except for TPM attestations, which also require a fully jailed OS environment to control the data flow to the TPM.

Even the attestation pathway can probably be defeated, either through the mandated(?) accessibility controls or by going to more extreme measures and putting the devices to work in a farm.

delusional · 5 months ago
This is exactly right, and it's why I believe we need to solve this problem in the human domain, with laws and accountability. We need new copyrights that cover serving content on the web and give authors control over who gets to access that content, WITHOUT requiring locked-down operating systems or browser monopolies.
dpoloncsak · 5 months ago
>with laws and accountability.

Isn't this how we get the EU's digital ID nonsense? Otherwise, how do you hold an anon user behind 5 proxies accountable? What if it's from a foreign country?

Symbiote · 5 months ago
Laws are only enforceable in their own country, and possibly some friendly countries.

If that means blocking foreign access, the problem is solved anyway.

Deleted Comment

1gn15 · 5 months ago
The last thing we need is more intellectual property restrictions.
b112 · 5 months ago
Laws only work in domestic scenarios.

For laws to help, every nation on the planet would have to agree and actually prosecute under them. I can't imagine that happening; it hasn't for anything compute-related yet.

So it'll just move offshore, and people will buy the resulting data.

Also, is your nick and response sarcasm?

peetistaken · 5 months ago
Indeed, I named it after akamai because they wrote a whitepaper for it. I think I first used akamai_fingerprint on https://tls.peet.ws, where you can see all your fingerprints!
piggg · 5 months ago
Blocking on ja3/ja4 signals to folks exactly what you're up to. That's why bad actors started randomizing ja3 in the last few years, which made ja3 matching useless.

Imo, use ja3/ja4 as a signal and block on src IP; don't show your cards. Ja4 extensions that compare network vs http/tls latency are also pretty elite for identifying folks who are proxying.

mrweasel · 5 months ago
Some of the bad actors, and Chrome, randomize extensions, but only their order. I think it's ja3n that started sorting the extensions before hashing.

Blocking on source IP is tricky, because that frequently means blocking or rate-limiting thousands of IPs. If you're fine with just blocking entire subnets or all of AWS, I'd agree that it's probably better.

It really depends on who your audience is and who the bad actors are. For many of us the bad actors are AI companies, and they don't seem to randomize their TLS extensions. Frankly many of them aren't that clever when it comes to building scrapers, which is exactly the problem.
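The sorting trick is easy to illustrate. A sketch of the ja3 string format (MD5 over "version,ciphers,extensions,curves,point_formats" with dash-joined decimal values); the `normalize` flag stands in for the ja3n idea of sorting extensions before hashing:

```python
import hashlib

def ja3_digest(version, ciphers, extensions, curves, point_formats,
               normalize=False):
    # ja3: MD5 of the five comma-joined fields, each list joined with
    # dashes. normalize=True sorts extensions first (the ja3n idea).
    ext = sorted(extensions) if normalize else list(extensions)
    fields = [str(version)] + [
        "-".join(str(v) for v in xs)
        for xs in (ciphers, ext, curves, point_formats)
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Same hypothetical ClientHello, extensions in two different orders
a = ja3_digest(771, [4865, 4866], [0, 10, 23, 65281], [29, 23], [0])
b = ja3_digest(771, [4865, 4866], [65281, 23, 10, 0], [29, 23], [0])
print(a == b)  # False: plain ja3 changes when extension order changes
```

With `normalize=True` both orderings hash identically, which is why sorting defeats order-only randomization like Chrome's.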

piggg · 5 months ago
For my use cases I block src IP for some period of time (minutes). I don't block large pools of IPs as the blast radius is too large. That said - there are well established shit hosters who provide multiple /24s to proxy/dirty VPN types that are generally bad.
jamesnorden · 5 months ago
I'm curious why the user agent he described can bypass Anubis, since it contains "Mozilla"; sounds like a bug to me.

Edit: Never mind, I see that part of the default config allows Googlebot, so this is literally intended. It seems like people who criticize Anubis often don't understand what the opinionated default config is supposed to accomplish (punishing only bots/scrapers pretending to be real browsers).