If we produced ASI, things would become truly unpredictable. Some obvious things are on the table: fusion, synthetic meat, actual VR, immortality, an end to hunger, global warming, or war, etc. We probably get these if they can be gotten at all. And then it's into unknown unknowns.
It's perfectly reasonable to believe ASI is impossible, or that LLMs don't lead to AGI, but there isn't much room to question how impactful these would be if they arrived.
AI will make a lot of things obsolete but I think that is just the inherent nature of such a disruptive technology.
It makes labor much cheaper for many things. How the economy reorganizes itself around that seems unclear, but I don't really share this fear of the world imploding. How could cheap labor be bad?
Robotics for physical labor lags way behind e.g. coding, but only because we haven't figured out the data flywheel and/or how to transfer knowledge sufficiently and efficiently (though people are trying).
LLMs were always going to bottleneck on one of those two: compute demand grows extremely quickly with the amount of data, and data is necessarily limited. It turns out people threw enormous amounts of compute at it, so we hit the other limit.
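To make the compute/data tradeoff concrete, here's a minimal sketch using the Chinchilla rule of thumb (not stated in the thread, so treat it as my assumption): training compute is roughly C ≈ 6·N·D FLOPs for N parameters and D tokens, with compute-optimal training at D ≈ 20·N. The numbers are illustrative only.

```python
# Rough scaling-law arithmetic (Chinchilla-style heuristics; assumption,
# not something claimed in the thread).

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: C ~= 6 * N * D FLOPs."""
    return 6 * params * tokens

def optimal_tokens(params: float) -> float:
    """Rule-of-thumb compute-optimal token count: D ~= 20 * N."""
    return 20 * params

n = 70e9                      # a 70B-parameter model
d = optimal_tokens(n)         # ~1.4e12 tokens
c = training_flops(n, d)      # ~5.9e23 FLOPs
print(f"tokens: {d:.2e}, FLOPs: {c:.2e}")
```

The point the arithmetic makes: compute grows with the product of model size and data, so doubling the data (at matched model size) doubles the FLOPs, and running out of either one stalls you.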
https://epoch.ai/blog/can-ai-scaling-continue-through-2030
There is plenty of data left; we don't just train on crawled text. Power constraints may turn out to be the real bottleneck, but we're something like 4 orders of magnitude away from that.
The fact that "scaling laws" didn't scale? Go open your favorite LLM in a hex editor: oftentimes half of the larger tensors are just null bytes.
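A quick-and-dirty version of that hex-editor check is just counting what fraction of a weight file's bytes are zero. This is a sketch, not a verification of the claim; the synthetic buffer below stands in for a real model file, whose path you'd supply yourself.

```python
# Count the fraction of null (0x00) bytes in a blob of weight data.
# To try it on a real model, replace `buf` with open(path, "rb").read().

def null_byte_fraction(data: bytes) -> float:
    """Fraction of bytes in `data` that are 0x00."""
    if not data:
        return 0.0
    return data.count(0) / len(data)

# Synthetic example: half zeros, half ones.
buf = bytes(512) + bytes([1]) * 512
print(null_byte_fraction(buf))  # 0.5
```

Note that a high zero fraction in a raw dump doesn't by itself prove wasted capacity: low-precision formats, padding, and sparse or quantized layouts can all produce legitimate runs of zeros.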
I'd much rather live in a world of tolerable good and bad opposing each other in moderate ways.