Both Llama 4 Scout and Llama 4 Maverick use a Mixture-of-Experts (MoE) design with 17B active parameters each.
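For anyone wondering what "active parameters" means there: in an MoE layer a router picks a small subset of expert MLPs per token, and only those experts' weights do work for that token. A toy sketch of the idea (purely illustrative, not Llama 4's actual code; the expert count, k, and dimensions are made up):

```ts
type Expert = (x: number[]) => number[];

// Toy MoE layer: the model stores many experts, but only the top-k that the
// router scores highest are evaluated for a given token.
function moeLayer(x: number[], experts: Expert[], routerScores: number[], k = 2): number[] {
  const topK = routerScores
    .map((score, idx) => ({ score, idx }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);

  // softmax over just the selected scores to get mixing weights
  const exps = topK.map(({ score }) => Math.exp(score));
  const z = exps.reduce((a, b) => a + b, 0);

  // weighted sum of only the selected experts' outputs; the other experts'
  // parameters are never touched for this token
  const out: number[] = new Array(x.length).fill(0);
  topK.forEach(({ idx }, j) => {
    const y = experts[idx](x);
    for (let d = 0; d < out.length; d++) out[d] += (exps[j] / z) * y[d];
  });
  return out;
}
```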
Are those experts LLMs trained on specific tasks, or what?
I wrote a ~50 LOC browser extension that always redirects away from the feed to your profile. Works great; sideload it and forget it.
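The core of an extension like that can be a single content script along these lines (a rough sketch, not the poster's actual code; the comment doesn't name the site, so the paths below are placeholders, and manifest.json would restrict the script to that site):

```ts
// content.ts — compiled into the content script declared in manifest.json.
const FEED_PATH = "/feed/";       // placeholder for the site's feed path
const PROFILE_PATH = "/in/me/";   // placeholder for your own profile path

if (window.location.pathname.startsWith(FEED_PATH)) {
  // replace() keeps the feed page out of the back-button history
  window.location.replace(window.location.origin + PROFILE_PATH);
}
```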
At ORIS I wrote a Laravel wrapper for PTOSC (pt-online-schema-change) and really miss it now that I'm back on PostgreSQL. These days I mostly put updatable views in front of the tables being modified, then drop-swap things later once any transitional backfilling is done.
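For anyone unfamiliar with the view-in-front pattern, here's roughly how one round of it can look (a sketch, not necessarily how the commenter does it; the table and column names are invented, and it's written as a node-postgres migration just to have something runnable):

```ts
import { Client } from "pg";

// Step 1: move the real table aside and put an updatable view with the old
// name in front of it, so the app keeps reading and writing "orders" with
// the old shape. A simple single-table view is automatically updatable in
// PostgreSQL.
async function startMigration(db: Client): Promise<void> {
  await db.query("BEGIN");
  await db.query("ALTER TABLE orders RENAME TO orders_next");
  await db.query("ALTER TABLE orders_next ADD COLUMN amount_cents bigint");
  await db.query(
    "CREATE VIEW orders AS SELECT id, customer_id, total FROM orders_next"
  );
  await db.query("COMMIT");
}

// Step 2: run once the transitional backfill of amount_cents has finished —
// drop the view and rename the real table back into place.
async function dropSwap(db: Client): Promise<void> {
  await db.query("BEGIN");
  await db.query("DROP VIEW orders");
  await db.query("ALTER TABLE orders_next RENAME TO orders");
  await db.query("COMMIT");
}
```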
What's to keep it from continuing to fill back in?
Looks like it's just summarizing facts gathered during chats and adding those to the prompt they feed to the AI. I mean, that works (I've been doing it myself), but what's the news here?
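For reference, the whole trick can be as small as this (a sketch of the approach described above, not the vendor's implementation; callModel stands in for whatever chat API is in use, and the step that extracts new facts is omitted):

```ts
type Message = { role: "system" | "user" | "assistant"; content: string };

// Facts distilled from earlier chats, e.g. by a separate summarization pass.
const rememberedFacts: string[] = [];

function buildMessages(userInput: string): Message[] {
  const memoryBlock =
    rememberedFacts.length > 0
      ? "\nKnown facts about the user:\n- " + rememberedFacts.join("\n- ")
      : "";
  return [
    // The "memory" is nothing more than extra text in the system prompt.
    { role: "system", content: "You are a helpful assistant." + memoryBlock },
    { role: "user", content: userInput },
  ];
}

async function chatTurn(
  userInput: string,
  callModel: (messages: Message[]) => Promise<string>
): Promise<string> {
  return callModel(buildMessages(userInput));
}
```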