grafmax · 2 months ago
> Scientific progress is the biggest driver of overall progress

> There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before

Real wages haven’t risen since 1980. Wealth inequality has. Most people have much less political power than they used to as wealth - and thus power - have become concentrated. Today we have smartphones, but also algorithm-driven polarization and a worldwide rise in authoritarian leaders. Depression and anxiety affect roughly 30% of our population.

The rise of wealth inequality and the stagnation of wages corresponds to the collapse of the labor movement under globalization. Without a counterbalancing force from workers, wealth accrues to the business class. Technological advances have improved our lives in some ways but not on balance.

So if we look at people’s well-being, society as a whole hasn’t progressed since the 1980s; in many ways it’s gotten worse. Thus the trajectory of progress described in the blog post is make-believe. The utopia Altman describes won’t appear. Mass layoffs, if they happen, will further concentrate wealth. AI technology will be used more and more for mass surveillance, algorithmic decision-making (of a kind that would make Kafka blush), and cost cutting.

What we can realistically expect is a lowering of quality of life, an increased shift to precarious work, further concentration of wealth and power, and rising rates of suffering.

What we need instead of science fiction is to rebuild the labor movement. Otherwise “value creation” and technology’s benefits will continue to accrue to a dwindling fraction of society. And more and more it will be at everyone else’s expense.

atleastoptimal · 2 months ago
Sure, people’s well-being as a whole hasn’t gotten better since the 1980s, except for

>Air quality (no more leaded gasoline)

>Life expectancy

>Cancer survival rates

>Access to information

>Infant mortality

>Violent crime rates across the western world

>Access to education

>Clean water and food for 4+ billion people

>HIV treatment

>etc

The negativity on this site is insane. They will deny the greatest scientific achievements if it lets them dunk on AI or whoever is the enemy of the week.

grafmax · 2 months ago
My position is more nuanced than you present it to be.

I’m arguing against Altman’s notion of progress as being driven by scientific and technological achievements. Looking through your list, it’s social policies that either drive or are necessary components of most of the improvements you mention. Even the medical advances you mention depend on a society’s ability to offer healthcare to its members - a notable deficiency of the US system. I emphasized the importance of the labor movement in my comment, but I don’t want to deny the importance of governmental policy changes. It’s just that in the current political arena it’s not clear to me that politicians are interested in much more than serving corporate and billionaire interests, so I don’t have much hope for our ability to continue to make positive policy changes. Therefore the main avenue people have to take control of their futures seems to me to be organized labor. This is a tool that has proven effective in the past at achieving all sorts of improvements in the mass of people’s lives.

Besides that, I advocate taking a sober look at our current situation. Painting society as a picture of progress by pointing to our achievements while denying our societal shortcomings is naive. Examples: the climate crisis, rising authoritarianism, warmongering among nuclear powers, high rates of depression and anxiety, concentration of wealth and power in a few hands. Science and technology may plausibly play some role in addressing these issues, sure, but progress rarely occurs without struggle. Altman’s version is utopian and unrealistic, and it will likely make society worse unless the working class can successfully struggle to reap the benefits of “value-creation” for itself.

yencabulator · 2 months ago
> The utopia Altman describes won’t appear.

Sure it will, as far as Altman is concerned. To make the whole post make sense, add "... for the rich" where appropriate.

sjducb · 2 months ago
The problem is that housing and health insurance are too expensive. Tech isn’t responsible for either of those problems.
BriggyDwiggs42 · 2 months ago
Parent didn’t claim tech was responsible for every problem? Housing prices are likely an inequality issue; as a greater portion of money in the economy is held by rich people, more money is invested and less is spent on goods/services, hence a scarce asset like land sees an increase in value.
grafmax · 2 months ago
I didn’t say it was responsible. My argument is against Altman’s picture of progress. He argues that improvements in science and technology drive progress. My argument is that technology brings both positive and negative changes, and the degree to which the working class sees a net benefit largely depends on its ability to struggle against the business class.
kjkjadksj · 2 months ago
In a way it is. Why are housing costs so high in Redmond, WA? An influx of high-income tech workers shifted local housing prices so much that it eventually diluted the utility of those high salaries to begin with. People in the area without a hook on that whale are, of course, left high and dry.
_DeadFred_ · 2 months ago
I mean, home prices went up insanely in California due to tech. Many people cashed out and bought homes in cheaper locations... driving up housing prices there beyond what locals could afford.

How did Hacker News already forget these things?

nradov · 2 months ago
Real wages have risen a lot since 1980 when you include employer contributions to employee health insurance.
Rooster61 · 2 months ago
It's difficult for me to call those wages "real" when medical costs have been gouged so absurdly as to eat up those contributions. Those increases have had no real impact on the average consumer, and the situation is profoundly awful for those without access to employment that provides such insurance.
boole1854 · 2 months ago
Even without including employer health insurance costs, real wages are up 67% since 1980.

Source: https://fred.stlouisfed.org/graph/?g=1JxBn

Details: uses the "Wage and salary accruals per full-time-equivalent employee" time series, which is the broadest wage measure for FTE employees, and adjusts for inflation using the PCE price index, which is the most economically meaningful measure of "how much did prices change for consumers" (and is the inflation index that the Fed targets)
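
For anyone who wants to sanity-check the method rather than the data, the adjustment is just deflating nominal wages by the price index. A minimal sketch in Python, with placeholder numbers standing in for the actual FRED series values:

```python
# Placeholder values for illustration only; substitute the actual
# FRED series (nominal wage accruals per FTE, PCE price index).
nominal_wage_1980 = 15_000   # hypothetical nominal $/year, 1980
nominal_wage_2024 = 80_000   # hypothetical nominal $/year, 2024
pce_index_1980 = 40.0        # hypothetical PCE price index, 1980
pce_index_2024 = 125.0       # hypothetical PCE price index, 2024

# Express the 1980 wage in 2024 dollars, then compare.
real_wage_1980 = nominal_wage_1980 * (pce_index_2024 / pce_index_1980)
growth = nominal_wage_2024 / real_wage_1980 - 1
print(f"Real wage growth since 1980: {growth:.0%}")
```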

xboxnolifes · 2 months ago
No, that's not a real wage increase, that's nominal wage. If I make $20k more, but health insurance costs also went up $20k, my real wage did not change. I am no richer.
insane_dreamer · 2 months ago
Not when you account for the insane rise in cost of health care.
ImHereToVote · 2 months ago
Does it correct for housing costs?
tim333 · 2 months ago
>Real wages haven’t risen since 1980

is a very US thing. In China they've probably 10x'd over that time.

hollerith · 2 months ago
Even after that 10x growth, median Chinese household income is only 13-16% of US median household income:

China: ~$10,000 – $12,000

US: ~$74,580 (U.S. Census Bureau, 2022)

grafmax · 2 months ago
That is because of China and US positions in the global system over this time. The wage/labor/inequality story is broadly true across the global north; China can credit forward thinking central planning, social programs, and industrialization for its economic progress (yet it continues to live under authoritarian rule).
azan_ · 2 months ago
> Real wages haven’t risen since 1980.

Do people really believe that? I think people either have too rosy a view of the '80s or think that real wages should also adjust for lifestyle inflation.

naryJane · 2 months ago
Yes. It’s even a part of Ray Dalio’s speeches on the topic. Here is one example where he mentions it: https://www.linkedin.com/pulse/why-how-capitalism-needs-refo...
Herring · 2 months ago
Lots of people don't care about "progress" in an absolute sense, e.g. longer, healthier lifespans for all. They only care about it in a relative sense: e.g. if cop violence against minorities goes down, they feel anxiety and resentment. They really, really want to be the biggest fish in the little pond. That's how a caste system works; it "makes a captive of everyone within it". Equality feels like a demotion [1].

That's why we have a whole thing about immigration going on. It's the one issue that the president is not underwater on right now [2]. You can't get much of a labor movement like this.

[1] https://www.texasobserver.org/white-people-rural-arkansas-re...

[2] https://www.natesilver.net/p/trump-approval-ratings-nate-sil...

flessner · 2 months ago
> Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel;

It was probably around 7 years ago when I first got interested in machine learning. Back then I followed a crude YouTube tutorial which consisted of downloading a Reddit comment dump and training an ML model on it to predict the next character for a given input. It was magical.
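
(For anyone curious, the shape of that tutorial can be sketched in a few lines. This is a minimal stand-in, using a bigram frequency table instead of the neural net and a repeated sentence instead of the Reddit dump, but the interface is the same: context in, next character out.)

```python
from collections import Counter, defaultdict
import random

def train(text):
    # For each character, count how often each character follows it.
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(model, seed, length=80):
    out = seed
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

corpus = "the quick brown fox jumps over the lazy dog. " * 50  # stand-in corpus
print(generate(train(corpus), "th"))
```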

I always see LLMs as an evolution of that. Instead of the next character, it's now the next token. Instead of GBs of Reddit comments, it's now TBs of "everything". Instead of millions of parameters, it's now billions of parameters.

Over the years, the magic was never lost on me. However, I can never see LLMs as more than a "token prediction machine". Maybe throwing more compute and data at it will at some point make it so great that it's worthy of being called "AGI" anyway? I don't know.

Well anyway, thanks for the nostalgia trip on my birthday! I don't entirely share the same optimism - but I guess optimism is a necessary trait for a CEO, isn't it?

helloplanets · 2 months ago
What's your take on Anthropic's 'Tracing the thoughts of a large language model'? [0]

> To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with "grab it"), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.

> Instead, we found that Claude plans ahead. Before starting the second line, it began "thinking" of potential on-topic words that would rhyme with "grab it". Then, with these plans in mind, it writes a line to end with the planned word.

This is an older model (Claude 3.5 Haiku) with no test time compute.

[0]: https://www.anthropic.com/news/tracing-thoughts-language-mod...

Sammi · 2 months ago
What is called "planning" or "thinking" here doesn't seem conceptually much different to me than going from naive breadth-first-search-based Dijkstra shortest-path search to adding a heuristic that makes it search in a particular direction first and calling it A*. In both cases you're adding another layer to an existing algorithm in order to make it more effective. Doesn't make either AGI.
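
To make the analogy concrete, here's a minimal sketch where the only difference between the two algorithms is a single term in the queue priority:

```python
import heapq

def shortest_path(graph, start, goal, heuristic=lambda n: 0):
    # With the default zero heuristic this is Dijkstra; pass a real
    # distance-to-goal estimate and the exact same code is A*.
    frontier = [(heuristic(start), 0, start)]  # (priority, cost_so_far, node)
    best = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        for neighbor, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                # The one change vs. plain Dijkstra: fold the heuristic
                # into the priority so the search leans toward the goal.
                heapq.heappush(frontier, (new_cost + heuristic(neighbor),
                                          new_cost, neighbor))
    return None
```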

I'm really no expert in neural nets or LLMs, so my thinking here is not an expert opinion, but as a CS major reading that blog from Anthropic, I just cannot see how they provided any evidence for "thinking". To me it's pretty aggressive marketing to call this "thinking".

yencabulator · 2 months ago
Generalize the concept from next-token prediction to predicting the coming tokens and the rest still applies. LLMs are still incredibly poor at symbolic thought and at following multi-step algorithms, and as a non-ML person I don't really see what in the LLM mechanism would provide such power. Or maybe we're still just another 1000x of scale away and symbolic thought will emerge at some point.

Personally, I expect LLMs to be a mere part of whatever is invented later.

iNic · 2 months ago
The "mere token prediction" comment is wrong, but I don't think any of the other comments really explained why. Next-token prediction is not what the AI does, but its goal. It's like saying soccer is a boring sport having only ever seen the final scores. The important thing about LLMs is that they can internally represent many different complex ideas efficiently and coherently! This makes them an incredible starting point for further training. Nowadays no LLM you interact with is a pure next-token predictor anymore; they have all gone through various stages of RL so that they actually do what we want them to do. I really feel the magic looking at the "circuit" work by Anthropic. It shows that these models have some internal processing / thinking that is complex and clever.
quonn · 2 months ago
> that they can internally represent many different complex ideas efficiently and coherently

The transformer circuits work[0] suggests that this representation is not coherent at all.

[0] https://transformer-circuits.pub

trashtester · 2 months ago
The "next token prediction" is a distraction. That's not where the interesting part of an AI model happens.

If you think of the tokenization near the end as a serializer, something like turning an object model into JSON, you get a better understanding. The interesting part of an OOP program is not the JSON, but what happens in memory before the JSON is created.

Likewise, the interesting parts of a neural net model, whether it's LLMs, AlphaProteo, or some diffusion-based video model, happen in the steps that operate in their latent space, which is in many ways similar to our subconscious thinking.

In those layers, the AI models detect deeper and deeper patterns of reality. Much deeper than the surface pattern of the text, images, video etc used to train them. Also, many of these patterns generalize when different modalities are combined.

From this latent space, you can "serialize" outputs in several different ways. Text is one, image/video another. For now, the latent spaces are not general enough to do all of these equally well; instead, models are created that specialize in one modality.

I think the step to AGI does not require throwing a lot more compute at the models, but rather having them straddle multiple modalities better, in particular these:

- Physical world modelling at the level of Veo3 (possibly with some lessons from self-driving or robotics models for elements like object permanence and perception)
- Symbolic processing of the best LLMs
- Ability to be goal-oriented and iterate towards a goal, similar to the Alpha* family of systems
- Optionally: optimized for the use of a few specific tools, including a humanoid robot

Once all of these are integrated into the same latent space, I think we basically have what it takes to replace most human thought.

sgt101 · 2 months ago
>which is in many ways similar to our subconscious thinking

this is just made up.

- We don't have any useful insight into human subconscious thinking.
- We don't have any useful insight into the structures that support human subconscious thinking.
- The mechanisms that support human cognition that we do know about are radically different from the mechanisms that current models use. For example, we know that biological neurons and synapses are structurally diverse, that suppression and control signals are used to change the behaviour of the networks, and that chemical control layers (hormones) transform the state of the system.

We also know that biological neural systems continuously learn and adapt, for example in the face of injury. Large models just don't do these things.

Also, this thing about deeper and deeper realities? C'mon, it's surface-level association all the way down!

phorkyas82 · 2 months ago
As far as I understand, any AI model is just a linear combination of its training data. Even if that corpus is as large as the entire web... it's still just a sophisticated compression of other people's expressions.

It has not had its own experiences, nor interacted with the outer world. Dunno, I won't rule out that something operating solely on language artifacts could develop intelligence or consciousness, whatever that is... but so far there are also enough humans we could care about and invest in.

andsoitis · 2 months ago
> the AI models detect deeper and deeper patterns of reality. Much deeper than the surface pattern of the text

What are you talking about?

klipt · 2 months ago
If you wish to make an apple pie from scratch

You must first invent the universe

If you wish to predict the next token really well

You must first model the universe

Aeolun · 2 months ago
> wondering when it can generate a beautifully-written novel

Not quite yet, but I’m working on it. It’s ~~hard~~ impossible to get original ideas out of an LLM, so it’ll probably always be a human assisted effort.

agumonkey · 2 months ago
The TB of everything with transformers makes a difference. Maybe I'm just too uneducated, but the amount of semantic context that can be taken into account when generating the next token is really disruptive.
marsten · 2 months ago
> Over the years, the magic was never lost on me. However, I can never see LLMs as more than a "token prediction machine".

The "mere token prediction machine" criticism, like Pearl's "deep learning amounts to just curve fitting", is true but it also misses the point. AI in the end turns a mirror on humanity and will force us to accept that intelligence and consciousness can emerge from some pretty simple building blocks. That in some deep sense, all we are is curve fitting.

It reminds me of the lines from T.S. Eliot, “...And the end of all our exploring, Will be to arrive where we started, And know the place for the first time."

daxfohl · 2 months ago
> although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly

If the "mistake" is that of concentrating too much power in too few hands, there's no recovery. Those with the willingness to adapt will not have the power to do so, and those with the power to adapt will not have the willingness. And it feels like we're halfway there. How do we establish a system of checks and balances to avoid this?

rcarmo · 2 months ago
This read like a Philip K. Dick, Ubik-style advertisement for a dystopian future, and I’m pretty amazed it is an actual blog post by a corporate leader in 2025. Maybe Sam and Dario should be nominated for Hugos or something…
crossroadsguy · 2 months ago
I have read his A Scanner Darkly and part of another book. Not sure whether you are overrating this post or insulting his writing style.


wolecki · 2 months ago
Some reasoning tokens on this post:

>Intelligence too cheap to meter is well within grasp

And also:

>cost of intelligence should eventually converge to near the cost of electricity.

Which is a meter-worthy resource. So intelligence's effect on people's lives is on the order of magnitude of one second of toaster use each day, in present value. This raises the question: what could you do with a toaster-second, say, 5 years from today?
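
(Back-of-the-envelope, with an assumed 1kW toaster and an assumed $0.15/kWh retail price, neither of which is from the post:)

```python
toaster_watts = 1000          # assumed typical toaster
price_per_kwh = 0.15          # assumed retail electricity price, $/kWh
kwh_per_toaster_second = toaster_watts / 1000 / 3600
yearly_cost = kwh_per_toaster_second * price_per_kwh * 365
print(f"one toaster-second = {kwh_per_toaster_second * 1000:.2f} Wh; "
      f"one per day costs ~${yearly_cost:.3f}/year")
```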

GarnetFloride · 2 months ago
That's what they said about electricity when nuclear power plants were proposed. What's your electricity bill like today?
AnthonyMouse · 2 months ago
The primary operating cost of traditional power plants is fuel, i.e. coal and natural gas. The fuel cost for nuclear power plants is negligible because the energy content is higher by more than a factor of a million. So if you build enough nuclear plants to power the grid, charging per kWh for the electricity is pointless because the marginal cost of the fuel is so low. Meanwhile the construction cost should be on par with a coal plant, since the operating mechanism (heat -> steam -> electricity) is basically the same.

Unsurprisingly, this scared the crap out of the fossil fuel industry in the US and countries like Russia that are net exporters of fossil fuels, so they've spent decades lobbying to bind nuclear plant construction up in red tape to prevent them being built and funding anti-nuclear propaganda.

You can see a lot of the same attempts being made with AI, proposals to ban it or regulate it etc., or make sure only large organizations have it.

But the difference is that power plants are inherently local. If the US makes it uneconomical to build new nuclear plants, US utility customers can't easily get electricity from power plants in China. That isn't really how it works with AI.

danw1979 · 2 months ago
The thing is, nuclear was never on such a steep learning curve as solar and batteries are today.

It’ll never be too cheap to meter, but electricity will get much cheaper over the coming decades, and so will synthetic hydrocarbons on the back of it.

tim333 · 2 months ago
My and/or my family's electricity bills have never been near zero. On the other hand, my AI bill is zero. I think different economics apply.

(that excludes a brief period when I camped with a solar panel)

TheOtherHobbes · 2 months ago
Your electricity bill is set by the grift of archaic fossil energy industries. And nuclear qualifies as a fossil industry because it's still essentially digging ancient stuff out of the ground, moving it around the world, and burning it in huge dirty machines constructed at vast expense.

There are better options, and at scale they're capable of producing electricity that literally is too cheap to meter.

The reasons they haven't been built at scale are purely political.

Today's AI is computing's equivalent of nuclear energy - clumsy, centralised, crude, industrial, extractive, and massively overhyped and overpriced.

Real AI would be the step after that - distributed, decentralised, reliable, collaborative, free in all senses of the word.

joshjob42 · 2 months ago
Well, Altman is also investing in Helion, which projects to get the price of electricity to ~$10/MWh, but for which, much like solar, wind, and actual nuclear, the cost structure is overwhelmingly dominated by capital and other non-varying costs (the cost of uranium or Helion's fuel will be negligible vs. capital and manpower). So there's actually a pretty good reason to think that long term, electricity will be so cheap at the margin that it isn't metered, but instead is basically bought in chunks of capacity or availability.

Another way for intelligence to get too cheap to meter is for the cost to fall so low that it becomes hyperabundant. If you were, for instance, to take AI2027 as a benchmark and think we'll ultimately achieve something like the equivalent of John von Neumann in a box with a 2T dense-equivalent parameter model, and that it will match such a person's productivity when running inference at say 15 tokens a second (as fast as people can read), then you only need in principle 60 teraflops of AI inference compute, which is roughly 2x the current Apple Neural Engine. So plausibly by the time we get to the 2030s, every laptop, smartphone, etc. will easily be able to run models as powerful as the smartest people.
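
The 60-teraflop figure falls out of the usual rule of thumb of ~2 FLOPs per parameter per generated token for dense inference; a quick sketch:

```python
params = 2e12            # 2T dense-equivalent parameters
tokens_per_second = 15   # roughly as fast as people can read
flops_per_second = 2 * params * tokens_per_second  # ~2 FLOPs/param/token
print(f"{flops_per_second / 1e12:.0f} TFLOP/s")    # -> 60 TFLOP/s
```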

Somewhat longer term, I'm sure Altman expects the entire process to be automated and the computational efficiency to rise significantly. If you take recent estimates from various players in the reversible-computing space, you'd guesstimate that you ought to be able to do 60 TFLOPs by the late 2030s using under 0.1W, or ~1 kWh/yr, which Helion could produce for ~1¢. I do feel like one year of cognitive labor from the smartest person for a penny or two renders intelligence too cheap to meter out on a per-hour basis.

greenie_beans · 2 months ago
watch the cost of electricity go up because of the demand created by data centers. i'm building an off-grid solar system right now and it ain't cheap! thinking about a future where consumers are competing with data centers for electricity makes me think it might feel cheap in the future, though.
antihero · 2 months ago
Datacentre operators want to keep energy costs down and also have capital to make it happen.
unstablediffusi · 2 months ago
bro, how much did your electricity cost go up because of millions of people playing games on their 500W+ GPUs? by a billion people watching youtube? by hundreds of millions of women and children scrolling instagram and tiktok 8 hours a day?
nhdjd · 2 months ago
Fusion will show up soon, don't worry. AI will accelerate its arrival.
jes5199 · 2 months ago
I’m not sure we’ll be metering electricity if Wright’s Law continues
TheAceOfHearts · 2 months ago
Do you think we will get AI models capable of learning in real time, using a small number of examples similar to humans, in the next few years? This seems like a key barrier to AGI.

More broadly, I wonder how many key insights he thinks are actually missing for AGI or ASI. This article suggests that we've already cleared the major hurdles, but I think there are still some major keys missing. Overall his predictions seem like fairly safe bets, but they don't necessarily suggest superintelligence as I expect most people would define the term.

paradox242 · 2 months ago
This is a condensed version of Altman's greatest hits when it comes to his pitch for the promise of AI as he (allegedly) conceives it, and in that sense it is nothing new. What is conspicuous is a not-so-subtle reframing. No longer is AGI just around the corner; instead one gets the sense that OpenAI has already looked around that corner and seen nothing there. No, this is one of what I expect will be many more public statements intended to cool things down a bit, and to reframe (investor) expectations that the timelines are going to be longer than previously implied.
throw310822 · 2 months ago
Cool things down a bit? That's what you call "we're already in the accelerating part of the singularity, past the event horizon, the future progress curve looks vertical, the past one looks flat"? :D
woopsn · 2 months ago
Artificial intelligence is a nourished and well-educated population. Plus some Adderall, maybe. Those are the key insights which represent the only scientific basis for that term.

The crazy thing is that a well-crafted language model is a great product. A man should be content to say "my company did something akin to compressing the whole internet behind a single API" and take his just rewards. Why sully that reputation by boasting to have invented a singularity that solves every (economically registerable) problem on Earth?

ixtli · 2 months ago
Because they promised the latter to the investors and due to the nature of capitalism they can never admit they’ve done all they can.
hoseja · 2 months ago
Such is the nature of the scorpion.
thegeomaster · 2 months ago
I hate to enter this discussion, but learning from a small number of examples is called few-shot learning, and it is something GPT-3 could already do. It was considered a major breakthrough at the time. The fact that we call gradient descent "learning" doesn't mean that what happens with a well-placed prompt is not "learning" in the colloquial sense. Try it: you can teach today's frontier reasoning models to do fairly complex domain-specific tasks with light guidance and a few examples. That's what prompt engineering is about. I think you might be making a distinction based on the complexity of the tasks, which is totally fine, but it needs to be spelled out more precisely IMO.
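
For the avoidance of doubt about what "a few examples" means here, a minimal few-shot prompt looks like this (the task and labels are invented for illustration; no weights are updated, the "learning" lives entirely in the context window):

```python
examples = [
    ("schedule a sync with the data team tomorrow at 3pm", "MEETING"),
    ("pay invoice #4821 by Friday", "FINANCE"),
    ("fix the flaky login test", "ENGINEERING"),
]
query = "renew the TLS cert before it expires"

prompt = "Classify each task into a department.\n\n"
for text, label in examples:
    prompt += f"Task: {text}\nDepartment: {label}\n\n"
prompt += f"Task: {query}\nDepartment:"

print(prompt)  # send to any completion-style model endpoint
```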
hdjdbdirbrbtv · 2 months ago
Are you talking about teaching in the context window or fine tuning?

If it is the context window, then you are limited to the size of said window and everything is lost on the next run.

Learning is memory; what you are describing is an LLM being the main character in the movie Memento, i.e. no long-term memories past what was trained in the last training run.


tim333 · 2 months ago
AlphaZero learned various board games from scratch up to better than human levels. I guess in principle that sort of algorithm could be generalized to other things?
crazylogger · 2 months ago
What you described can be (and is being) achieved by agentic systems like Claude Code. When you give it a task, it knows to learn best practices on the web, find out what other devs are doing in your codebase, and it adapts. And it condenses + persists its learnings in CLAUDE.md files.

Which underlying LLM powers your agent system doesn't matter. In fact you can swap them for any state-of-the-art model you like, or even point Cursor at your self-hosted LLM API.

So in a sense every advanced model today is AGI. We were already past the AGI "singularity" back in 2023 with GPT4. What we're going through now is a maybe-decades-long process of integrating AGI into each corner of society.

It's purely an interface problem. Coding agent products hook the LLM to the real world with [web_search, exec_command, read_file, write_file, delegate_subtask, ...] tools. Other professions may require vastly more complicated interfaces (such as "attend_meeting"); that takes more engineering effort, sure, but those interfaces will 100% be built at some point in the coming years.
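
The whole interface can be sketched as a loop over tools. A toy version, with the tool names mirroring the list above and `model_step` standing in for whichever LLM API you point the agent at (it is not a real library call):

```python
import subprocess

TOOLS = {
    "exec_command": lambda arg: subprocess.run(
        arg, shell=True, capture_output=True, text=True).stdout,
    "read_file":  lambda arg: open(arg).read(),
    "write_file": lambda arg: open(arg["path"], "w").write(arg["content"]),
}

def run_agent(task, model_step, max_steps=20):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model_step(history)  # returns a tool call or a final answer
        if action["type"] == "final":
            return action["content"]
        result = TOOLS[action["tool"]](action["arg"])
        history.append({"role": "tool", "content": str(result)})
    return None  # gave up after max_steps
```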

Amekedl · 2 months ago
This level of conceit can hardly be measured anymore; it's on a new scale. Big corps will build and label whatever as a "superintelligent" system, even if it has plain if-conditions placed within it to suit their owners' interests.

It'll govern our choices, shape our realities, and enforce its creators' priorities under the guise of objective, superior intelligence. This 'superintelligence' won't be a benevolent oracle, but a sophisticated puppet – its strings hidden behind layers of complexity and marketing hype. Decisions impacting lives, resources, and freedoms will be made by algorithms fundamentally skewed by corporate agendas, dressed up as inevitable, logical conclusions.

The danger isn't just any bias; it's the institutionalization of bias on a massive scale, presented as progress.

We'll be told the system 'optimized' for efficiency or profit, mistaking corporate self-interest for genuine intelligence, while dissent gets labeled as irrationality against the machine's 'perfect' logic. The conceit lies in believing their engineered tool is truly autonomous wisdom, when it's merely power automated and legitimized by a buzzword. AI LETS GOOOOOOOOOOOOO

physix · 2 months ago
I started quickly reading the article without reading who actually wrote it. As I scanned over the things being said, I started to ask myself: Who wrote this? It's probably some AI proponent, someone who has a vested interest. I had to smile when I saw who it was.
thrwwy_jhdkqsdj · 2 months ago
I did the same thing, I thought "This post looks like a posthumous letter".

I hope LLM use will drive up testing efforts and overall quality processes. If such a thing as AGI ever exists, we'll still need output testing.

To me it does not matter if the person doing something for you is smarter than you; if it's not well specified and tested, it is as good as a guess.

Can't wait for the AI that is almost unusable for someone without a defined problem.