All 3 points (you have had all of it in your head at some point, it is still there, and that is not true of an LLM) are mere conjectures, not provable at this time, and certainly not in the general case. You may be able to show this for some codebases, some developers, and some LLMs, but not all.
As a user, it feels like the race has never been as close as it is now. Perhaps dumb to extrapolate, but it makes me lean more skeptical about the hard take-off / winner-take-all mental model that has been pushed.
Would be curious to hear the take of a researcher at one of these firms - do you expect the AI offerings across competitors to become more competitive and clustered over the next few years, or less so?
At the time I was working with maybe 30 providers, and it would have been doable to rebuild the server and reconfigure all the providers, cars, insurance, etc. The content would probably have taken longer, but that was doable too. Still, at the time I took it as a sign to shift to something else.
Glad I didn't and that the project came back from the ashes, literally.
We tag “complacency” as bad, but I think it’s just a byproduct of our reliance on heuristics and patterns which is evolutionarily useful overall.
On the other hand we worry (sometimes excessively) about how the future might unfold and really much of that is unknown.
It's much more practical (and rewarding) to keep improving oneself or one's organisation to meet the needs of the world today, with an eye on how the world is evolving, rather than trying to be some oracle or predicting too far out (in which case you need to get both the prediction and the execution right!).
As an aside, big bets seem fashionable these days (AI now; remember the Metaverse?), as do high-conviction statements about the future, but that has more to do with the specific circumstances and motivations of the individuals making them.