Agribusiness absolutely makes money off of those. In fact, they had a hilariously easy time adapting to the consumer trend, because all they had to do to label a cow "free range" or "grass fed" was change the finishing stage to a lower-density configuration instead of those abominable feedlots you see along highways. The first two stages, rearing and pasturing, didn't change because they were already "free range" and "grass fed". Half of the farmland in the US is pastureland, and leaving animals in the field to eat grass was always the cheapest way to rear and grow them. They only really get fed corn and other feed at the end to fatten them up for human consumption.
The dirty not-so-secret is that free range/grass fed cows eat almost exactly the same diet as regular cows; they just eat a little more grass because they're in the field more during finishing. They're still walking up to troughs of feed, because otherwise the beef would be unpalatable and would grow quite a bit slower.
True grass fed beef is generally called "grass finished" beef, and it's unregulated, so you won't find it at a supermarket. It tastes gamier and usually has a metallic tang that I quite honestly doubt would ever be very popular. The marbling is also noticeably different and less consistent. Grain finished beef became popular in the 1800s, and consumers in the West have strongly preferred it since.
I’m not sure you can even find a cow in the entire world that isn’t “grass fed”. Calves need the grass for their gut microbiomes to develop properly.
> This is a fascinating shift in economics and suggests there could be a runaway power concentrating moment for AI system developers who have the largest number of paying customers. Those customers are footing the bill to create new high quality data … which improves the model … which becomes better and more preferred by users … you get the idea.
While I think this is an interesting hypothesis, I'm skeptical. You might be lowering the cost of your training corpus by a few million dollars, but I highly doubt you are getting novel, high-quality data.
We are currently in a world where the SOTA base model seems to be capped at around GPT-4o levels. I have no doubt that in 2-3 years our base models will compete with o1 or even o3... it just remains to be seen what innovations/optimizations get us there.
The most promising idea is to use reasoning models to generate data, and then train our non-reasoning models on the reasoning-embedded data. But... it remains to be seen how much of the chain-of-thought reasoning you can really capture in model weights. I'm guessing some, but I wonder if there is a cap imposed by the multi-head attention architecture. If reasoning can be transferred from reasoning models to base models, OpenAI should have already trained a new model with o3 training data, right?
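To make the idea concrete, here's a minimal sketch of that distillation pipeline. Everything here is hypothetical: `reasoning_model` is a stand-in for an o1/o3-style API call, not a real endpoint, and the dataset shape is just one plausible convention for supervised fine-tuning pairs.

```python
# Sketch of reasoning-to-base-model distillation (all names hypothetical).
# Step 1: a reasoning model produces chain-of-thought traces for prompts.
# Step 2: those traces become ordinary supervised fine-tuning examples
#         for a non-reasoning base model.

def reasoning_model(prompt: str) -> dict:
    """Stand-in for an o1/o3-style model: returns a chain of thought
    plus a final answer. A real system would call a model API here."""
    return {
        "chain_of_thought": f"Let me think step by step about: {prompt}",
        "answer": f"Answer to: {prompt}",
    }

def build_distillation_dataset(prompts):
    """Turn reasoning traces into plain (input, target) SFT pairs.
    Embedding the chain of thought in the target text is the part
    that might transfer some reasoning behavior into the weights."""
    dataset = []
    for prompt in prompts:
        trace = reasoning_model(prompt)
        target = trace["chain_of_thought"] + "\n" + trace["answer"]
        dataset.append({"input": prompt, "target": target})
    return dataset

pairs = build_distillation_dataset(["Why is the sky blue?"])
# A base model would then be fine-tuned on `pairs` with a standard
# next-token-prediction loss; nothing about the training loop changes.
```

The open question in the comment above is exactly how much of step 1's reasoning survives this compression into plain training text.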
Another thought is maybe we don't need to improve our base models much. It's sufficient to have them be generalists, and to improve reasoning models (lowering price, improving quality) going forward.