Readit News
reducesuffering commented on The AI boom is causing shortages everywhere else   washingtonpost.com/techno... · Posted by u/1vuio0pswjnm7
mystraline · 2 days ago
The current LLMs are not constantly learning. They're static models that require megatons of coal to retrain.

Now if the LLMs could modify their own nets, and improve themselves, then that would be immensely valuable for the world.

But as of now, it's a billionaire's wet dream to threaten all workers as a way to replace labor.

reducesuffering · 2 days ago
Think bigger: think of the entire system outside of just the single LLM, the interplay of capital, human engineering, and continual improvement of LLMs. First-gen LLMs used 100% human coding. The previous gen used ~50% human coding. The current gen uses ~10% human coding (practically all OpenAI/Anthropic engineers admit they're entirely using Claude Code / Codex for code). What happens when we're at 1% human coding, then 0%? The recursive self-improvement is happening; it's just indirect for now.
reducesuffering commented on Amazon plunge continues $1T wipeout as AI bubble fears ignite sell-off   cnbc.com/2026/02/06/ai-se... · Posted by u/truegoric
sdf2erf · 3 days ago
I'm still waiting to read about macro-level mass layoffs or insane productivity leaps.

Where are the results, tell me? What insanely great products have been shipped by people leveraging/building on top of LLMs...?

Yeah, silence. As usual.

reducesuffering · 3 days ago
There are literally a billion ChatGPT users already, the world's fastest-growing product. Do you think they're all just playing around in the sand? Ask anyone in education; it has completely upended every student's workflow.
reducesuffering commented on GPT-5.3-Codex   openai.com/index/introduc... · Posted by u/meetpateltech
mirsadm · 4 days ago
I can't tell if this is a serious conversation anymore.
reducesuffering · 4 days ago
“Best start believing in science fiction stories. You're in one.”

https://x.com/TheZvi/status/2017310187309113781

reducesuffering commented on GPT-5.3-Codex   openai.com/index/introduc... · Posted by u/meetpateltech
aurareturn · 4 days ago
More importantly, these are the early steps of a model improving itself.

Do we still think we'll have a soft takeoff?

reducesuffering · 4 days ago
This has already been going on for years; it's just that they were using GPT 4.5 to work on GPT 5. All this announcement means is that they're confident enough in early GPT 5.3 output to further refine GPT 5.3 based on the initial 5.3. But yes, takeoff will still happen because this recursive self-improvement works; it's just that we're already past the inception point.
reducesuffering commented on Y Combinator will let founders receive funds in stablecoins   fortune.com/2026/02/03/fa... · Posted by u/shscs911
reducesuffering · 6 days ago
Remember when YC funded (and boosted the reach of) ~50 crypto scam-like companies during the heyday of the craze? Like the Stablegains scam fiasco:

https://news.ycombinator.com/item?id=31686140

https://news.ycombinator.com/item?id=31431224

https://news.ycombinator.com/item?id=31461634

reducesuffering commented on Outsourcing thinking   erikjohannes.no/posts/202... · Posted by u/todsacerdoti
reducesuffering · 9 days ago
See Scott Alexander’s The Whispering Earring (2012):

https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...

reducesuffering commented on The Adolescence of Technology   darioamodei.com/essay/the... · Posted by u/jasondavies
zesterer · 13 days ago
(they are all wrong)

A fun property of S-curves is that they look exactly like exponential curves until the midpoint. Projecting exponentials is definitionally absurd because exponential growth is impossible in the long term. It is far more important to study the carrying capacity limits that curtail exponential growth.
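
A minimal worked example of why that is, assuming the standard logistic form for the S-curve (an illustration, not a claim about any particular AI metric):

    f(t) = K / (1 + exp(-r * (t - t0)))

For t well below the midpoint t0, the exp(-r * (t - t0)) term dominates the denominator, so f(t) ≈ K * exp(r * (t - t0)): pure exponential growth at rate r. The carrying capacity K only becomes visible near and after t0, which is why early data alone cannot distinguish the two curves.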

reducesuffering · 13 days ago
Covid was also an S-curve...
reducesuffering commented on The Adolescence of Technology   darioamodei.com/essay/the... · Posted by u/jasondavies
2001zhaozhao · 14 days ago
It's interesting just how many opinions Amodei shares with AI 2027's authors despite coming from a pretty different context.

- Prediction of exponential AI research feedback loops (AI coding speeding up AI R&D) which Amodei says is already starting today

- AI being a race between democracies and autocracies with winner-takes-all dynamics, with compute being crucial in this race and global slowdown being infeasible

- Mention of bioweapons and mirror life in particular being a big concern

- The belief that AI takeoff would be fast and broad enough to cause irreplaceable job losses rather than being a repeat of past disruptions (although this essay seems to be markedly more pessimistic than AI 2027 with regard to inequality after said job losses)

- Powerful AI in next few years, perhaps as early as 2027

I wonder if either work influenced the other in any way, or if this is just a case of "great minds think alike"?

reducesuffering · 14 days ago
It's because few realize how downstream most of this AI industry is of Thiel, Eliezer Yudkowsky and LessWrong.com.

The early "rationalist" community was concerned with AI in this way 20 years ago. Eliezer inspired the founders of Google DeepMind and introduced them to Peter Thiel to get their funding. Altman acknowledged how influential Eliezer was by saying he is the person most deserving of a Nobel Peace Prize when AGI goes well (LessWrong / "rationalist" discussion prompted OpenAI). Anthropic was a more X-risk-concerned fork of OpenAI. Paul Christiano, inventor of RLHF, was a big LessWrong member. AI 2027 was written by an ex-OpenAI LessWrong contributor and Scott Alexander, a centerpiece of LessWrong / "rationalism". The sister of Dario, Anthropic's CEO, is married to Holden Karnofsky, a centerpiece of effective altruism, itself a branch of LessWrong / "rationalism". The origin of all this was directionally correct, but there was enough power, $, and "it's inevitable" to temporarily blind smart people for long enough.

u/reducesuffering

Karma: 4487 · Cake day: October 27, 2016
About
https://exoroad.com

Software engineer

eric@exoroad.com
