This is stated as a very matter-of-fact downside [0], but it's a pretty crazy portent for the future of dev tools / libraries / frameworks / languages.
Predictions:
- LLMs will further amplify the existing winner-take-all, first-mover nature of dev tools
- LLMs will encourage usage of open-source tools, because LLMs will be so much more useful for tools with more/better training data
Future frameworks will be designed for AI enablement. There will be a reversal of convention-over-configuration: explicit referencing and configuration allow models to make fewer assumptions with less training.
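As a rough illustration of that trade-off (hypothetical framework and names, not any real API), a convention-based router forces a model to already know the naming rules, while explicit registration makes the mapping visible in the code itself:

```python
# Convention over configuration: the handler location is implied by naming rules.
# A model (or new contributor) has to know the convention to follow the code.
def resolve_handler_by_convention(path: str) -> str:
    # "/users/profile" -> "handlers.users.profile.handle"
    module = path.strip("/").replace("/", ".")
    return f"handlers.{module}.handle"

# Explicit configuration: every route names its handler directly.
# Nothing has to be assumed; the whole mapping is visible in one place.
ROUTES = {
    "/users/profile": "handlers.users.profile.handle",
    "/users/settings": "handlers.users.settings.handle",
}

def resolve_handler_explicitly(path: str) -> str:
    return ROUTES[path]

if __name__ == "__main__":
    print(resolve_handler_by_convention("/users/profile"))
    print(resolve_handler_explicitly("/users/profile"))
```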
All current models are trained on both good and bad examples of existing frameworks. This is why asking an LLM to “code like John Carmack” produces better code. Future frameworks can quickly build out example documentation and provide it within the framework for AI tools to reference directly.
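One way that could look (a minimal sketch; the `ai_examples/` layout and `load_examples` helper are hypothetical, not an existing standard): the framework ships curated, known-good examples next to its code, and an AI tool loads them into its prompt instead of relying on whatever mixed-quality training data it has seen.

```python
from pathlib import Path

# Hypothetical layout: curated examples live alongside the framework code,
# e.g. myframework/ai_examples/routing.md, myframework/ai_examples/auth.md.
EXAMPLES_DIR = Path(__file__).parent / "ai_examples"

def load_examples(topic: str) -> str:
    """Return the framework's curated example docs for a topic, for inclusion in a prompt."""
    example_file = EXAMPLES_DIR / f"{topic}.md"
    if not example_file.exists():
        return ""
    return example_file.read_text(encoding="utf-8")

# An AI coding tool could prepend load_examples("routing") to its prompt,
# grounding generation in examples the framework authors consider idiomatic.
```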
My personal prediction is that the US foundational model makers will OSS something close to N-1 for the next 1-3 iterations. The CAPEX for foundational model creation is too high to justify OSS'ing the current generation, unless the US Gov steps up and starts subsidizing power, or Stargate delivers 10x what is currently planned.
N-1 model value depreciates insanely fast. Releasing those models as OSS and allowing specialized use cases and novel developments lets that potential value be captured and integrated into future model designs. It's medium risk, since you may lose some market share, but high potential value, since the shared discoveries could substantially increase the velocity of next-gen development.
There will be a plethora of small OSS models. Iteration on the OSS releases is going to be biased towards local development, creating more capable and specialized models that run on smaller and smaller devices. In an agentic future, every agent in a domain may have its own model, distilled and customized for its use case without significant cost.
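For reference, distillation here means the standard soft-label setup: a small student model is trained to match a larger teacher's output distribution. A minimal PyTorch sketch (the tensors below are placeholders, not any specific model):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-label distillation: push the student's distribution toward the teacher's."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between teacher and student, scaled by T^2 (Hinton et al., 2015).
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Toy usage: logits over a 32k-token vocabulary for a batch of 4 positions.
student = torch.randn(4, 32000)
teacher = torch.randn(4, 32000)
print(distillation_loss(student, teacher))
```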
Everyone is racing to AGI/SGI. The models along the way are there to capture market share and to gather data for training and evaluations. Once someone hits AGI/SGI, the consumer market is nice to have, but the real value is in novel developments in science, engineering, and every other aspect of the world.
[0] https://www.anthropic.com/research/persona-vectors > We demonstrate these applications on two open-source models, Qwen 2.5-7B-Instruct and Llama-3.1-8B-Instruct.