If you crawl at 1Hz per crawled IP, no reasonable server would suffer from this. It's the few bad apples (impatient people who don't rate limit) who ruin the internet for both users and hosters alike. And then there's Google.
For slicers I use PrusaSlicer on Linux (don't have a Prusa; it's really good for generic slicing). But I can see how Bambu stuff could be an issue if it's Win only and not Wineable.
Please elaborate; can you name a few tools and what you use them for? Just curious.
He is just making up a fantasy world where his elves run in specific patterns to please him.
There are no metrics or statistics on code quality, bugs produced, feature requirements met... or anything.
Just a gigantic wank session really.
I do think it's overly complex, but it's a novel concept.
Have been doing manual orchestration where I write a big spec which contains phases (each done by an agent) and instructions for the top-level agent on how to interact with the sub-agents. Works well, but it's hard to utilize effectively. No doubt this is the future. This approach is bottlenecked by limitations of the CC client, mainly that I cannot see inter-agent interactions fully, only the tool calls. Using a hacked client or a compatible reimplementation of CC may be the answer, unless the API were priced attractively or other models could do the work. Gemini 3 may be able to handle it better than Opus 4.5. The Gemini 3 pricing model is really complex, to say the least.
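Roughly, a spec like that boils down to a list of phases, each with a sub-agent prompt plus notes for the orchestrator. A toy sketch of the shape (field names are made up, not anything CC actually defines):

  from dataclasses import dataclass, field

  @dataclass
  class Phase:
      name: str
      sub_agent_prompt: str           # handed verbatim to the sub-agent
      orchestrator_notes: str = ""    # how the top-level agent reviews / hands off
      done_criteria: list[str] = field(default_factory=list)

  spec = [
      Phase(
          name="survey",
          sub_agent_prompt="Map the module layout; list all public entry points.",
          orchestrator_notes="Reject the result if any entry point lacks a file path.",
          done_criteria=["entry point inventory produced"],
      ),
      Phase(
          name="implement",
          sub_agent_prompt="Implement the change from SPEC.md, phase 2.",
          orchestrator_notes="Run the test suite before accepting; retry once on failure.",
          done_criteria=["tests pass"],
      ),
  ]

  for phase in spec:
      print(f"[{phase.name}] {phase.sub_agent_prompt}")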
1 Hz is 86400 hits per day, or 600k hits per week. That's just one crawler.
Just checked my access log... 958k hits in a week from 622k unique addresses.
95% of it is fetching completely random links from the u-boot repository that I host. I blocked all of the GCP/AWS/Alibaba and of course Azure cloud IP ranges.
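The blocking itself is conceptually just membership tests against the published CIDR lists. A toy sketch (the two ranges below are illustrative placeholders; the real lists come from each provider and run to thousands of entries):

  import ipaddress

  # Illustrative placeholders only; real lists are published by each provider.
  CLOUD_RANGES = [ipaddress.ip_network(c) for c in (
      "34.0.0.0/9",      # GCP-style range (placeholder)
      "52.95.0.0/16",    # AWS-style range (placeholder)
  )]

  def is_cloud_ip(client_ip: str) -> bool:
      addr = ipaddress.ip_address(client_ip)
      return any(addr in net for net in CLOUD_RANGES)

  print(is_cloud_ip("52.95.4.1"))   # True with the toy list above
  print(is_cloud_ip("192.0.2.10"))  # False (TEST-NET-1)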
It's now almost all coming from "residential" and "mobile" IP address space in completely random places all around the world. I'm pretty sure my u-boot fork is not that popular. :-D
Every request comes from a new IP address, and the available IP space of the crawler(s) spans millions of addresses.
I don't host a popular repo. I host a bot attraction.
A whitelist would be needed for sites where getting all the pages makes sense. And in addition to the 1 Hz limit, an extra cap of around 1k requests per day would probably be needed.
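For a crawler that wants to be polite along those lines, the check is simple enough. A toy in-memory sketch with the numbers from this thread (a real crawler would persist the state, and the whitelist host is hypothetical):

  import time
  from collections import defaultdict

  WHITELIST = {"docs.example.org"}       # hypothetical: sites where a full crawl makes sense
  LAST_HIT = defaultdict(float)          # host -> timestamp of last request
  DAILY = defaultdict(lambda: (0, 0.0))  # host -> (count, start of 24h window)

  def may_fetch(host, now=None):
      now = time.time() if now is None else now
      count, window_start = DAILY[host]
      if now - window_start >= 86400:              # roll over to a new 24h window
          count, window_start = 0, now
      if host not in WHITELIST and count >= 1000:  # 1k/day cap
          DAILY[host] = (count, window_start)
          return False
      if now - LAST_HIT[host] < 1.0:               # 1 Hz per host
          DAILY[host] = (count, window_start)
          return False
      LAST_HIT[host] = now
      DAILY[host] = (count + 1, window_start)
      return True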
I can see now why Google doesn't have much solid competition (Yandex/Baidu arguably don't compete due to network segmentation).
Scraping reliably is hard, and the chance of kicking Google off their throne may be even further reduced due to AI crawler abuse.
PS: 958k hits is a lot! Even if your pages were a tiny 7.8k each (HN front page minus assets), that would be about 7G of data (about 4.6 Bee Movies in 720p h265).
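Back of the envelope (the ~1.6 GB figure for a 720p movie encode is my assumption):

  hits = 958_000
  page_bytes = 7.8e3                 # ~7.8 kB per page
  movie_bytes = 1.6e9                # assumed size of a 720p movie encode
  total = hits * page_bytes          # ~7.5e9 bytes, i.e. roughly 7 GB
  print(total / 1e9, total / movie_bytes)   # ~7.5 GB, ~4.7 "movies"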