Where are the results, tell me? What insanely great products have been shipped by people leveraging/building on top of LLMs...?
Yeah, silence. As usual.
Do we still think we'll have a soft takeoff?
https://news.ycombinator.com/item?id=31686140
https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
https://github.com/karpathy/llm.c
The proof is in the pudding. Let's see your code.
A fun property of S-curves is that they are nearly indistinguishable from exponential curves until the midpoint. Projecting exponentials forward indefinitely is absurd by definition, because exponential growth is impossible in the long run. It is far more important to study the carrying-capacity limits that curtail exponential growth.
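A quick way to see this: a logistic curve L / (1 + e^{-k(t - t0)}) reduces to the exponential L * e^{k(t - t0)} whenever t is well below the midpoint t0. Here is a minimal Python sketch (the parameter names L, k, t0 are just the standard textbook logistic parameters, not anything from this thread):

    import math

    # Logistic (S-curve) with carrying capacity L, growth rate k, midpoint t0.
    def logistic(t, L=1.0, k=1.0, t0=0.0):
        return L / (1.0 + math.exp(-k * (t - t0)))

    # The exponential that the logistic tracks far below its midpoint:
    # when exp(-k(t - t0)) >> 1, logistic(t) ~= L * exp(k * (t - t0)).
    def exponential(t, L=1.0, k=1.0, t0=0.0):
        return L * math.exp(k * (t - t0))

    for t in [-6, -4, -2, 0, 2]:
        s, e = logistic(t), exponential(t)
        print(f"t={t:+d}  logistic={s:.4f}  exponential={e:.4f}  ratio={s/e:.3f}")

    # The ratio is ~1.00 at t=-6 and t=-4, then collapses past the midpoint
    # as the logistic saturates at L while the exponential keeps growing.

If you are sitting at t=-4 with only noisy observations, nothing in the data tells you which curve you are on; only the carrying capacity does.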
- Prediction of exponential AI research feedback loops (AI coding speeding up AI R&D), which Amodei says are already starting today
- AI being a race between democracies and autocracies with winner-takes-all dynamics, with compute being crucial in this race and a global slowdown being infeasible
- Mention of bioweapons, and mirror life in particular, as a big concern
- The belief that the AI takeoff will be fast and broad enough to cause permanent job losses rather than a repeat of past disruptions (although this essay seems markedly more pessimistic than AI 2027 about inequality after those job losses)
- Powerful AI arriving in the next few years, perhaps as early as 2027
I wonder whether either work influenced the other in any way, or whether this is just a case of "great minds think alike"?
The early "rationalist" community was concerned with AI in this way 20 years ago. Eliezer inspired the founders of Google DeepMind and introduced them to Peter Thiel to get their funding. Altman acknowledged how influential Eliezer was by saying he is the person most deserving of a Nobel Peace Prize if AGI goes well (because lesswrong / "rationalist" discussion prompted OpenAI). Anthropic was a more x-risk-concerned fork of OpenAI. Paul Christiano, the inventor of RLHF, was a major lesswrong member. AI 2027 was written by an ex-OpenAI lesswrong contributor and Scott Alexander, a centerpiece of lesswrong / "rationalism". The sister of Dario, Anthropic's CEO, is married to Holden Karnofsky, a centerpiece of effective altruism, itself a branch of lesswrong / "rationalism". The origin of all this was directionally correct, but there was enough power, money, and "it's inevitable" thinking to temporarily blind smart people for long enough.
Now, if LLMs could modify their own nets and improve themselves, that would be immensely valuable for the world.
But as of now, it's a billionaire's wet dream to threaten all workers as a way to replace labor.