It's a death by a thousand cuts that's finally starting to come together:
- Remote attestation like Play "integrity"
- Hardware backed DRM like Widevine
- No full access to the filesystem on Android, and no filesystem access at all on iOS
- No ability to run your own programs at all on iOS without Apple's permission.
- "Secure" boot on Android and iOS that do not allow running your own software
Ever wondered why Windows 11 has a TPM requirement? No, it's not just planned obsolescence.
If they get their way, user-owned computers running free software will never be usable again, and we'll lose the final escape hatch slowing down the enshittification of computers. The only hope we have is that they turn up the temperature a little too quickly and normies catch on before it goes too far.
Maybe you don't. To be clear, this benefits massively from hindsight; if I didn't already know that combustion engines worked, I probably wouldn't have dreamed up how to build one either. But the emergent conversational capabilities of LLMs are pretty obvious in retrospect: in a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question, and a normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.
No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap: very often another question, or a framing that positions the original question as rhetorical in order to make a point. An untuned raw language model has an incredible flair for suddenly and unexpectedly shifting context - it might output an answer to your question, then decide that the entire thing is part of some internet flamewar and generate a completely contradictory answer, complete with insults to the first poster. It's less like talking with an AI and more like opening random pages in Borges's infinite library.
To get a base language model to behave reliably like a chatbot, you have to explicitly feed it "a transcript of a dialogue between a human and an AI chatbot", and allow the language model to imagine what a helpful chatbot would say (and take control during the human parts). The fact that this works - that a mere statistical predictive language model bootstraps into a whole persona merely because you declared that it should, in natural English - well, I still see that as a pretty "magic" trick.
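If you want to see the trick for yourself, here's a minimal sketch using Hugging Face transformers with GPT-2 standing in as the untuned base model. The transcript framing and the "Human:"/"AI:" turn markers are illustrative choices I made up for this example, not any particular chatbot's actual scaffolding:

```python
# Toy demo of "prompting a base model into a chatbot": frame the prompt
# as a transcript, let the model continue the AI's turn, and stop before
# it starts writing the human's next line.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")        # small, untuned base model
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = (
    "The following is a transcript of a dialogue between a human and a "
    "helpful AI chatbot.\n\n"
    "Human: Why is the sky blue?\n"
    "AI:"
)

inputs = tok(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tok.eos_token_id,  # GPT-2 has no pad token; reuse EOS
)

# Decode only the newly generated tokens, not the prompt.
completion = tok.decode(out[0][inputs["input_ids"].shape[1]:])

# The base model will happily keep writing the human's next turn too,
# so "take control during the human parts" by truncating there.
reply = completion.split("Human:")[0].strip()
print(reply)
```

With a model this small the replies are mostly incoherent, and if you drop the transcript framing you get exactly the context-shifting behavior described above. The point is just that the same weights get nudged toward chatbot-shaped output purely by declaring the setup in plain English.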