> Anubis uses a proof-of-work challenge to ensure that clients are using a modern browser and are able to calculate SHA-256 checksums
https://anubis.techaro.lol/docs/design/how-anubis-works
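The quoted mechanism is the classic hash-based proof-of-work loop: the server hands out a challenge, and the client grinds nonces until the SHA-256 digest meets a difficulty target. This is just a generic sketch of that idea, not Anubis's actual wire protocol or difficulty encoding:

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(challenge + nonce) starts with
    `difficulty` hex zeros. Generic PoW sketch, not Anubis's exact scheme."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server-side check: one hash, cheap to verify, expensive to solve."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the whole point: solving takes many hashes on average, verifying takes one.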
This is pretty cool; I have a project or two that might benefit from it.
But I find that when it comes to simply serving content, human vs. bot is usually not the distinction you want to filter or block on. As long as a given client isn't abusing your systems, why does it matter whether it's human?
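If abuse rather than humanity is the thing to gate on, the usual tool is plain rate limiting. A minimal sketch of a token-bucket limiter, keyed however you like (IP, API key, session):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate`
    tokens per second. A standard abuse filter that is indifferent
    to whether the client is a human or a bot."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A well-behaved scraper sails through; a hammering client (human or not) gets throttled.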
I've been looking into other techniques as well, like a little hibernation/dehydration framework for LLMs to help them process things over longer periods of time. The idea is that the agent either stops working or declares that it needs to wait for something to occur, and then you restart completions when a specific event fires or some amount of time passes.
I have always figured that if we could get LLMs to run indefinitely and keep it all in context, we'd get something much more agentic.