I think we massively underestimate just how much one simple factor, the absurd explosion of housing costs, has contributed to this. Take that away and just about everyone would have this kind of freedom. Look at the gaming industry, for example. Practically any great game from the past decade you can name was made in Europe or Japan, places that have kept these costs low relative to the rest of the West. When you don't have everyone terrified about how they're going to make rent next month, it frees up literally everything else. You can take these random walks and experiment when the cost of your existence doesn't necessitate bringing in huge amounts of income.
Is it? AI very much seems to be in market-capture mode. And IIRC, very few AI businesses actually report profits.
I can only predict AI providers ramping up prices like crazy once the victor captures the market. Same as every other tech trend in the last 20 years.
Regardless of which model is currently the best, it looks like there will be an open-weight model ~6 months behind it that can be self-hosted at costs closely tied to the hardware costs.
We are in this weird twilight zone where everything is still relatively high quality and stuff sort of works, but in a few decades shit will start degrading faster than you can say "OpenAI".
Weird things will start happening: government tax systems that can't be upgraded despite consuming billions, infrastructure failing for unknown reasons, simple unpowered or low-power devices that are now ubiquitous becoming rare. Everything will require subscriptions and internet access and nothing will work right. You will have to talk to LLMs all day.
But then there will be demand for "all-in-one" reliable mega apps to replace everything else. These apps will usher in the megacorp reality William Gibson described.
The most disturbing thing about it is the way advice to forget about the science and optimize for the process is mixed in with standard tips for good communication. It shows the community is so far gone that they no longer see the difference.
If anyone needs a point of reference, just look at an algorithms and data structures journal to see what life is like with a typical, rather than extreme, level of these problems.
Chemists are extremely brand-aware regarding their figures.
In synthetic chemistry, many chemists could guess the author from the color scheme of a paper's figures alone.
For instance, look at the consistency here: https://macmillan.princeton.edu/publications/
And it comes with rewards! The above lab is synonymous with several popular techniques (one of them, organocatalysis, garnered a Nobel Prize); the association would be far weaker if the lab hadn't kept a consistent brand over so many years.
I felt physically sick from second-hand embarrassment watching this.
Association with that brand would be very valuable.
Anecdotally, I think this behavior is undesirable for most commercial LLM use cases. I have several friends who have complained about Gemini's "back talking" and prefer ChatGPT's relative sycophancy.
As an aside, I think it's funny that the AI Doomer crowd ignores image and video models when speculating about which AI will enslave humanity. It's not inconceivable that a video model would have a better understanding of the world than an LLM, so perhaps it would grow new capabilities and sprout some kind of intent. It's super-intelligence! Surely these models, if trained long enough, will deduce hypnosis or some similar kind of mind control and cause mass extinction events.
I mean, the only other explanation for why LLMs are so scary and likely to be the AI that kills us all is that they're trained on a lot of sci-fi novels, so sometimes they'll say things mimicking sentient life and express some kind of will. But obviously that's not true ;-)