psb217 commented on Fei-Fei Li: Spatial intelligence is the next frontier in AI [video]   youtube.com/watch?v=_PioN... · Posted by u/sandslash
coldtea · 2 months ago
>there is really only one usable dataset: the world itself, which cannot be compacted or fed into a computer at high speed.

Why wouldn't it be? If the world is ingested via video and lidar sensors, what's the hangup in recording that input and then replaying it faster?

psb217 · 2 months ago
I think there's an implicit assumption here that interaction with the world is critical for effective learning. In that case, you're bottlenecked by the speed of the world... when learning with a single agent. One neat thing about artificial computational agents, in contrast to natural biological agents, is that they can share the same brain and share lived experience, so the "speed of reality" bottleneck is much less of an issue.
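
As a minimal sketch of that "shared brain" idea (my own illustration; the policy and environment here are placeholder stand-ins, not anything from the comment): N actors interact with N copies of a world in parallel while writing to one shared experience buffer, so experience accumulates N times faster than any single agent could live it.

    import random
    from collections import deque

    shared_buffer = deque(maxlen=100_000)   # one buffer shared by every actor

    def policy(state):
        # placeholder "shared brain": every actor queries this same policy
        return random.choice([-1, 1])

    def actor_step(state):
        # placeholder environment: one interaction step in one world copy
        action = policy(state)
        next_state, reward = state + action, random.random()
        shared_buffer.append((state, action, reward, next_state))
        return next_state

    states = [0] * 32                 # 32 actors living "separate lives"
    for _ in range(1_000):            # each tick yields 32 transitions, not 1
        states = [actor_step(s) for s in states]

    print(len(shared_buffer))         # 32,000 transitions from 1,000 ticks
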
psb217 commented on The Death of the Middle-Class Musician   thewalrus.ca/the-death-of... · Posted by u/pseudolus
Kinrany · 2 months ago
You can of course create wealth in such a way that inequality stays the same. Not all types of wealth are finite for practical purposes.
psb217 · 2 months ago
But if our current system for net wealth creation empirically tends to also produce wealth concentration, it makes sense to consider ways of modifying the system to mitigate some of that concentration while preserving as much of the wealth creation as possible.

psb217 commented on Sam Altman says Meta offered OpenAI staffers $100M bonuses   bloomberg.com/news/articl... · Posted by u/EvgeniyZh
namblooc · 2 months ago
I was never involved in doing ML myself, even through my CS studies. However, from the outside it looks... not that complicated? How do they justify these salaries? Where do they see it coming back to them in terms of revenue?
psb217 · 2 months ago
Most of the people pursued in these "AI talent wars" are folks deeply involved in training, or building infrastructure for training, LLMs at whatever level is currently state-of-the-art. Because of the resources required for projects that can provide this sort of experience, the pool of folks who have it is limited to those with significant clout in orgs with money to burn on LLM projects. These people are expensive to hire, and can keep hopping from company to company in an upward compensation spiral.

Ie, the skills aren't particularly complicated in principle, but the conditions needed to acquire them aren't widely available, so the pool of people with the skills is limited.

psb217 commented on Meta invests $14.3B in Scale AI to kick-start superintelligence lab   nytimes.com/2025/06/12/te... · Posted by u/RyanShook
CamperBob2 · 2 months ago
I don't know about "useful" but this answer from o3-pro was nicely-inspired, I thought: https://chatgpt.com/share/684c805d-ef08-800b-b725-970561aaf5...

I wonder if the comparison is actually original.

psb217 · 2 months ago
Comparing the process of research to tending a garden or raising children is fairly common; this is an iteration on that theme. One thing I find interesting about this analogy is that the model's autoregressiveness really shows: it commits early to the gardening analogy and then finds a way to make it work (more or less).

The sorts of useful analogies I was mostly talking about are those that appear in scientific research involving actionable technical details. Eg, diffusion models came about when folks with a background in statistical physics saw some connections between the math for variational autoencoders and the math for non-equilibrium thermodynamics. Guided by this connection, they decided to train models to generate data by learning to invert a diffusion process that gradually transforms complexly structured data into a much simpler distribution -- in this case, a basic multidimensional Gaussian.
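
To make that forward-diffusion idea concrete, here's a minimal sketch in PyTorch (my own toy illustration with a stand-in "denoiser" network, not code from any of the original papers). It jumps straight to an arbitrary step t of the noising process and trains the network to predict the injected noise:

    import torch

    T = 1000                                    # number of diffusion steps
    betas = torch.linspace(1e-4, 0.02, T)       # noise schedule
    alphas_bar = torch.cumprod(1.0 - betas, 0)  # cumulative signal retention

    def forward_diffuse(x0, t):
        # q(x_t | x_0) = N(sqrt(a_bar_t) * x0, (1 - a_bar_t) * I)
        noise = torch.randn_like(x0)
        a_bar = alphas_bar[t].view(-1, 1)
        xt = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
        return xt, noise

    # toy noise-prediction net; a real model would be a U-Net or transformer
    denoiser = torch.nn.Sequential(
        torch.nn.Linear(2 + 1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
    )

    x0 = torch.randn(128, 2)                    # stand-in for structured data
    t = torch.randint(0, T, (128,))
    xt, noise = forward_diffuse(x0, t)
    inp = torch.cat([xt, t.float().unsqueeze(1) / T], dim=1)
    loss = torch.nn.functional.mse_loss(denoiser(inp), noise)
    loss.backward()                             # learn to invert the diffusion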

I feel like these sorts of technical analogies are harder to stumble on than the more common "linguistic" analogies. The latter can be useful tools for thinking, but tend to require post-hoc interpretation and hand-waving before they produce any actionable insight. The former are direct bridges between domains, allowing knowledge about one class of problems to transfer straight to another.

psb217 commented on Meta invests $14.3B in Scale AI to kick-start superintelligence lab   nytimes.com/2025/06/12/te... · Posted by u/RyanShook
zozbot234 · 2 months ago
> It's a high bar, but I think that's fair for declaring superintelligence.

I have to disagree, because the distinction between "superficial similarities" and genuinely "useful" analogies is pretty clearly one of degree. Spend enough time and effort asking even a low-intelligence AI about "dumb" similarities, and it'll eventually hit on a new and perhaps "useful" analogy simply as a matter of luck. This becomes even easier if you can provide the AI with a lot of "context" input, something models have been improving at. But either way, that's not superintelligent or superhuman, just part of the general 'wild' weirdness of AIs as a whole.

psb217 · 2 months ago
I think you misunderstood what I meant about setting a high bar. First, passing the bar is a necessary but not sufficient condition for superintelligence. Second, by "fair for" I meant it's fair to set a high bar, not that this particular bar is the one fair bar for measuring intelligence. It's obvious that the usefulness of an analogy generator is a matter of degree. Eg, a uniform random string generator will eventually produce every possible insightful analogy, but no one would call it useful or intelligent.

I think you're basically agreeing with me. Ie, current models are not superintelligent. Even though they can "think" super fast, they don't pass a minimum bar of producing novel and useful connections between domains without significant human intervention. And, our evaluation of their abilities is clouded by the way in which their intelligence differs from our own.

psb217 commented on Meta invests $14.3B in Scale AI to kick-start superintelligence lab   nytimes.com/2025/06/12/te... · Posted by u/RyanShook
zozbot234 · 2 months ago
If anything, "abstract links across domains" is the one area where even very low-intelligence AIs will still have an edge, simply because any AI trained on general text has "learned" a whole lot of random knowledge across lots of different domains; more than any human could easily acquire. But again, this is true of AIs no matter how "smart" they are, and isn't related to "superintelligence" specifically.

Similarly, "deeper insight" may be surfaced occasionally simply by making a low-intelligence AI 'think' for longer, but this is not something you can count on under any circumstances, which is what you may well expect from something that's claimed to be "super intelligent".

psb217 · 2 months ago
I don't think current models are capable of making abstract links across domains. They can latch onto superficial similarities, but I have yet to see an instance of a model making an unexpected and useful analogy. It's a high bar, but I think that's fair for declaring superintelligence.

In general, I agree that these models are in some sense extremely knowledgeable, which suggests they are ripe for producing productive analogies if only we can figure out what they're missing compared to human-style thinking. Part of what makes it difficult to evaluate the abilities of these models is that they are wildly superhuman in some ways and quite dumb in others.

psb217 commented on Meta invests $14.3B in Scale AI to kick-start superintelligence lab   nytimes.com/2025/06/12/te... · Posted by u/RyanShook
Fraterkes · 2 months ago
I think this is kind of a philosophical distinction to a lot of people: the assumption is that a computer that can reason like a smart person but still runs at the speed of a computer would appear superintelligent to us. Speed is already how we distinguish supercomputers from normal ones.
psb217 · 2 months ago
I'd say superintelligence is more about producing deeper insight, making more abstract links across domains, and advancing the frontiers of knowledge than about doing stuff faster. Thinking speed correlates with intelligence to some extent, but at the higher end the distinction between speed and quality becomes clear.

psb217 commented on Why Bell Labs Worked   1517.substack.com/p/why-b... · Posted by u/speckx
A_Duck · 3 months ago
I mean absolutely, but I think 5 years is a good amount of time to let people noodle!
psb217 · 3 months ago
You wouldn't get 5 years to noodle -- maybe 1 or 2 at best. You're competing for your next thing against other smart folks who are going hard on maximizing publication rate and grant winning in their current thing. To continue with your riskier, bigger thinking you'd have to be ready to bet that: (i) you'll produce a highly impactful result before you start applying for your next thing and (ii) the high impactfulness of that result will be recognized in time to support your applications.

The most successful folks tend to mix talent and hard work with a bit of luck, striking gold early to gain a quick boost of credibility that helps them draw other people into their fold (eg, grad students in a big lab) who can handle a lot of the metric maxxing, freeing up some (still not enough) time for more ambitious thinking.

u/psb217

Karma: 317 · Cake day: September 2, 2008