So, you can still use TikTok or Facebook or Instagram, just without the hyper-personalized discovery/FYP/etc feeds.
I'm cautiously optimistic, since this kind of block doesn't really create a moat around existing businesses. And frankly, I like that kind of non-personalized feed sometimes.
EDIT: Downvoters, that's from the text of the bill itself. I recommend reading it if you don't trust (or don't like) this TL;DR.
God forbid anybody show any intellectual curiosity if it went against the doomer dogma.
And the worst part is the people with the "wrongthink" were right. Covid didn't have a "4% kill rate". It almost certainly came from a lab. The vaccine was not always safe and definitely wasn't effective. Lockdowns didn't work and neither did masks. Closing schools for two years and keeping kids locked inside on iPads will fuck them up for the rest of their lives.
And saying any of that resulted in being banned, accused of “dangerous thought”, and being yelled at by society.
Also, you're still wrong about most of that. The vaccine is certainly safe and effective, masks definitely help, and lockdowns definitely helped relieve overrun hospitals. Yes, some of these policies unfortunately had adverse effects.
https://tinybase.org/guides/persistence/database-persistence...
Lately I've been wondering... is this a problem, or a strength?
It might be a fallacy to compare how LLMs "think" with how humans think. But humor me for a second. When you are speaking, each time you emit a word, you are not attending to every previous word in your sentence (as transformers do); rather, you have a state in your mind that represents the grammar and concepts, which is continuously updated as you speak (more similar to SSMs).
Similarly, when you read a book, every time you read a word, you are not attending to every previous word in the book. Your model of "the book" is rather a fuzzy/approximate state that is updated with new information every time a new word appears. Right? (I'm sorry, I know this is very handwavy and pseudoscientific, but bear with me.)
Ok, so if (big if) you feel like the above is true, then to match human-type language modelling, SSMs seem more human-like than transformers.
BUT... then aren't transformers strictly better in terms of accuracy? A transformer never "forgets" information as long as it's within the context window, because it revisits every previous token each time it emits a new one.
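To make the contrast concrete, here's a toy sketch (my own illustration, not any real model: the embeddings and state matrices are made-up numbers) of the two memory models: attention re-reads the whole history at every step, while an SSM folds each token into a fixed-size state.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 4  # sequence length, model dimension
x = rng.normal(size=(T, d))  # toy "token embeddings"

# Transformer-style causal attention: the output at step t depends on
# ALL tokens 0..t -- it revisits the entire history every step.
def attention_step(x, t):
    q, keys, vals = x[t], x[: t + 1], x[: t + 1]
    scores = keys @ q / np.sqrt(d)       # similarity to every previous token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax over the whole history
    return weights @ vals                # weighted sum of all past values

# SSM-style recurrence: a fixed-size state h is updated once per token.
# Old tokens survive only as a lossy, exponentially decaying summary.
A = 0.9 * np.eye(d)  # toy state-transition matrix (assumed, not trained)
B = np.eye(d)        # toy input matrix

def ssm_scan(x):
    h = np.zeros(d)
    for x_t in x:
        h = A @ h + B @ x_t  # constant work per token, any history length
    return h

y_attn = attention_step(x, T - 1)  # reads all T tokens at the last step
h_final = ssm_scan(x)              # whole book compressed into one state vector
print(y_attn.shape, h_final.shape)
```

Both outputs are the same size, but the attention readout can recover any individual past token exactly, while the SSM state has already blended them together -- which is basically the "never forgets vs. fuzzy state" trade-off above.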
So let's say we can remove the "quadratic attention" problem of transformers with SSMs. That's a nice training/inference performance boost. But... look at where we got with "naive" attention: GPT-4, Claude 3. It's not like we're hitting a wall with quadratic attention. It's absurdly more expensive than SSMs, but GPUs certainly aren't getting slower. If all AI work stopped now and only hardware improved, it wouldn't be long until GPT-4 could run on local hardware, right, assuming Moore's law holds?
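For a rough sense of what "quadratic vs. linear" buys you, here's some back-of-the-envelope arithmetic (the FLOP formulas are deliberately crude asymptotic sketches, and `d` is an assumed model dimension, not a real model's):

```python
# Crude per-sequence compute sketch: self-attention does ~T^2 * d work
# (every token attends to every other), while a structured SSM scan does
# ~T * d work (constant work per token). Constants are ignored entirely.
d = 4096  # assumed model dimension

def attn_flops(T, d):
    return T * T * d  # quadratic in sequence length

def ssm_flops(T, d):
    return T * d      # linear in sequence length

for T in (1_000, 10_000, 100_000):
    ratio = attn_flops(T, d) / ssm_flops(T, d)
    print(f"T={T:>7}: attention costs ~{ratio:,.0f}x the SSM scan")
```

The gap grows with T, which is why the argument above is really about whether hardware growth keeps outpacing the context lengths people actually want.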
/end rant, not really sure what my point was, I'm not against SSMs (they're cool) but rather I'm wondering if the SOTA will ever be SSM when attention is so damn good
PlanetScale is definitely popular and gets a lot of free advertising from tech influencers (though that may change without the free tier), but most people I know in enterprise haven't heard of it.