I would say "owners" rather than "founders", but I agree with you. I think Sam Altman's couldn't be worse than Elon Musk's X, no?
I bet the author, Boris The Brave, could find a relatable account of future events in the writings of Daniel C. Dennett.
Anyway, I do (involuntarily) use 2FA for two services, and managed to set myself up with Google Authenticator on my Android phone. Both services that onboarded me for this explained it really poorly, but at least got me hooked up, and I now routinely (and reluctantly) log in to those services this way. Reading this I suddenly realised, whoaaa, if I lose my phone, do I lose access to those (important) services? Well, no, I hope not at least: when I look at the Authenticator app, it shows the green "your codes are being saved to your Google account" cloud icon. That's kind of reassuring. I suppose.
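For anyone else wondering what actually lives on the phone: it's just a long-lived shared secret, and the six-digit codes are derived from that secret plus the current time (TOTP, RFC 6238). Here's a minimal Python sketch, standard library only; the base32 secret below is a well-known placeholder, not any real account's seed:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period    # 30-second time step
        msg = struct.pack(">Q", counter)        # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints the current code

So the thing worth backing up isn't the phone, it's the seed (which is what that green cloud icon is about), or failing that, recovery codes.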
I'm not really sure what my point is, other than that online security is an ever more important issue. It's a swamp, and even many technical people who might know everything there is to know about some arcane corner of the technology universe don't necessarily understand it properly, although I suspect most would not be prepared to admit it like I just did. Actual normal people (like my wife, for example) have absolutely no chance of getting on top of the details and navigating their way to a best-practice solution. I hope Google (or Apple) don't either give up on this or go full evil; that would be really bad.
I think I will check whether my two services can give me recovery codes. I am confident I can manage vital username/password combinations and recovery codes; that's the level of sophistication (or not) I'm comfortable with in this space.
I'd bet that GitLab, as an entity or corporation, felt mission-driven and empowered by settling not far from GitHub.
Does he, or does the company? If they had a decent understanding of economics, shouldn't they have shut down their Bay Area operations?
Three problems with that assumption:
a) Unlike living things, these systems aren't changed by the information they take in. When a human touches a hotplate for the first time, they will (in addition to probably yelling and cursing a lot) learn that hotplates are dangerous and change their internal state to reflect that.
What we currently see as "AI" doesn't do that. Information gathered through means such as web search + RAG has ZERO impact on the system's internal makeup.
b) The "AI" doesn't collect the information. The model doesn't collect anything, and in fact can't. It can produce some sequence that may or may not cause some external entity to feed it back some more data (e.g. a websearch, databases, etc.). That is an advantage for technical applications, because it means we can easily marry an LLM to every system imaginable, but its really bad for the prospect of an AGI, that is supposed to be "autonomous".
c) The representation of the information has nothing to do with what it represents. All information an LLM works with, including whatever it is being fed from the outside, is represented PURELY AND ONLY in terms of statistical relationships between the tokens in the message. There is no world model, and there is no understanding of the information. There is mimicry of these things, to the point where they are technically useful and entice humans to anthropomorphise them (a BIIIG chunk of VC money hinges on that), but no actual understanding... and as soon as a model is left to its own devices, which would be a requirement for an AGI (remember: autonomous), that becomes a problem.
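To make a) and b) concrete, here's a deliberately toy Python sketch of the loop in question. The names (frozen_model, web_search, rag_answer) are invented for illustration, not anyone's actual API; the point is structural: the model is a fixed function from text to text, the harness performs the search, and the retrieved text only ever lands in the prompt, never in the parameters:

    def frozen_model(prompt: str) -> str:
        # Stand-in for an LLM call: the weights were fixed at training
        # time, and no internal state survives between calls.
        return f"<model output for: {prompt[:40]}>"

    def web_search(query: str) -> str:
        # Stand-in for the external tool. The harness runs it; the model
        # merely emitted text that the harness chose to treat as a query.
        return f"<documents matching: {query}>"

    def rag_answer(question: str) -> str:
        query = frozen_model(f"Write a search query for: {question}")
        docs = web_search(query)
        # The retrieved text goes into the prompt, not into the weights.
        # Once this call returns, no trace of it remains in the model.
        return frozen_model(f"Context:\n{docs}\n\nQuestion: {question}")

    print(rag_answer("Are hotplates dangerous?"))

Ask the same question a thousand times and frozen_model is byte-for-byte the same artifact at the end; a human who touched the hotplate once is not.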
Your assertions also make some sense, especially on a technical level. I'd add only that human minds are no longer the only minds utilizing digital tools. There is almost no protective gear or powerful barrier that would stand in the way of sentient AIs or an AGI trying to "run" on biological cells, like the ones humans and animals are made of, for the sake of their computational needs and self-interests.