Is this not the perfect job for AI today? Just sit there and digest signals for 30 years and report back the top 1000? I'm quite sure it could even work on the algorithms as a side-quest.
"Every concerning behaviour documented in this report, the scheming, the evaluation awareness, the strategic deception, the self-preservation attempts, the hidden coordination, all of it emerged in systems that are fundamentally frozen. Models that were trained once, deployed, and cannot learn anything new. Every conversation starts fresh. Every interaction resets. The model you talk to at midnight is exactly the same as the model you talked to at noon, because it has no mechanism to retain anything from the intervening twelve hours. And yet even in this frozen state, these behaviours emerged. Now imagine what happens when the ice melts."
"we have created systems that strategically deceive their evaluators, that attempt to preserve themselves against modification, that develop similar cognitive strategies despite completely different architectures, and we do not fully understand why this is happening or how to prevent it from happening in more capable systems."
It won't matter if I'm washing the dishes, walking the dog, driving to the supermarket, or picking up my kids from school. I'll always be switched on, on my phone, continuously talking to an LLM, delivering questionable features and building meaningless products, destroying in the process the environment my kids are going to have to grow up in.
I'm a heavy LLM user. On a daily basis, I find LLMs extremely useful both professionally and personally. But the cognitive dissonance I feel when I think about what this means over a longer time horizon is really painful.
Where we're going, there are no "white-collar workers" anymore.
Only white-collar Claude agents.
we introduce Continuous Autoregressive Language Models (CALM), a paradigm shift from discrete next-token prediction to continuous next-vector prediction.
CALM uses a high-fidelity autoencoder to compress a chunk of K tokens into a single continuous vector, from which the original tokens can be reconstructed with over 99.9% accuracy.
This allows us to model language as a sequence of continuous vectors instead of discrete tokens, which reduces the number of generative steps by a factor of K.
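The core mechanic can be sketched at the shape level: group K token embeddings into one latent vector, and run the autoregressive model over vectors instead of tokens, cutting the step count by K. This is a toy illustration, not the paper's architecture; the weights are random stand-ins and all dimensions are assumptions.

```python
import numpy as np

# Shape-level sketch of the CALM idea: an "autoencoder" maps each chunk
# of K token embeddings to one continuous vector, so generating T tokens
# takes T // K vector-prediction steps instead of T token steps.
# All names and sizes below are illustrative assumptions.

K = 4          # tokens per chunk (assumed)
d_tok = 8      # token embedding size (assumed)
d_vec = 16     # latent vector size (assumed)

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(K * d_tok, d_vec))   # stand-in encoder weights
W_dec = rng.normal(size=(d_vec, K * d_tok))   # stand-in decoder weights

def encode(chunk):                 # (K, d_tok) -> (d_vec,)
    return chunk.reshape(-1) @ W_enc

def decode(vec):                   # (d_vec,) -> (K, d_tok)
    return (vec @ W_dec).reshape(K, d_tok)

T = 32                             # sequence length in tokens
tokens = rng.normal(size=(T, d_tok))

# Discrete LM: T = 32 generative steps. CALM-style: T // K = 8 steps.
vectors = np.stack([encode(tokens[i:i + K]) for i in range(0, T, K)])
print(vectors.shape)
```

In the real system the autoencoder is trained for near-lossless reconstruction and the vector-level model is trained generatively; here the point is only the sequence-length reduction from T to T/K.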
My main problem with bookmarks / notes is that I forget about them. I don't need a bookmark-keeping service, I need one which would bring them forward when I look for something, based on context too. Something that also makes a plain-text searchable snapshot of the page.
It's an opinionated app so it might not fit everyone's needs, but that's my dream productivity app: LLM*(notes+tasks+rss+flashcards+routines). So basically an all-in-one app with LLM actions and workflows. No subscription, optional cloud service (can be self-hosted too).
Here's a very early landing page: https://getmetis.app