calf commented on The contrarian physics podcast subculture   timothynguyen.org/2025/08... · Posted by u/Emerson1
seanhunter · 3 days ago
There is history here, and Sabine is being particularly dishonest in saying that professional physicists failed to engage with Weinstein. Tim Nguyen specifically, along with a couple of others, made a detailed analysis of the paper [1] and responded very thoughtfully. He got involved because his research area touches on gauge theory (which is the source for some of Weinstein's Geometric Unity thing).

Here's a page giving some of his side of the picture; he includes the original Weinstein paper etc. if you want to read it: https://timothynguyen.org/geometric-unity/

[1] https://files.timothynguyen.org/geometric_unity.pdf

calf · 3 days ago
But that's the issue: Nguyen is not the institution as a whole, so their concerns just end up talking past each other. (And, perhaps typically of a professional physicist, Nguyen's complaints miss this point.)
calf commented on The contrarian physics podcast subculture   timothynguyen.org/2025/08... · Posted by u/Emerson1
calf · 4 days ago
The author's accusations about Sabine are buried in the middle, and I could not follow the main point. If anyone actually reads this carefully, perhaps they could paraphrase a summary of the claims for the rest of us.

(Actually, come to think of it, Sabine saying at one time that Weinstein's work is bad, and at another time that professional physicists failed to engage with Weinstein properly, is not a contradictory position: the former is a personal opinion, and the latter is akin to an Enlightenment principle about how an institution ought to behave even towards dissenters and outsiders. Disappointing that the blogger doesn't seem to understand this and uses it simplistically as an example of Sabine being a dishonest science communicator.)

calf commented on The contrarian physics podcast subculture   timothynguyen.org/2025/08... · Posted by u/Emerson1
jordanpg · 4 days ago
She has gone way beyond this. She is actively undermining the entire academic scientific enterprise, even as she makes money popularizing it. It's unclear why she does this. She portrays herself as speaking truth to power, but -- much like certain actors in US public life these days -- is simply doing the easy work of tearing things down, without doing the hard work of building things.
calf · 4 days ago
She has done nothing of the sort, and this kind of narrative is exactly the self-victimization that the science-academic industry tells itself to insulate its own thinking. Sabine does not have that much power or influence.
calf commented on Why LLMs Can Think (as per new fundamental theory)   claude.ai/public/artifact... · Posted by u/bobsh
allears · 9 days ago
Wow. I didn't make it all the way through, but the garbled jargon and the breathtaking claims are way over the top. I'm not sure who would fall for this -- anyone who understands the big words they're using is probably too bright to be taken in.
calf · 8 days ago
People are ending up in hospitals from bad ChatGPT advice; I'm not sure how to get through to a person who has fallen for this bullshit.
calf commented on Steve Wozniak: Life to me was never about accomplishment, but about happiness   yro.slashdot.org/comments... · Posted by u/MilnerRoute
atonse · 10 days ago
Maybe I'm not creative enough but I've tried this thought exercise with friends and it's a fun one.

The question is, try to spend $1bn on stuff. Go.

So then you start with big-ticket items (like maybe a yacht or a house). That gets you to your first $500m. After that, stuff gets WAY "cheaper", and you generally just run out of things before even hitting $1bn.

And then at the end of it we try to imagine what it's like having stuff worth $250bn. And there's just no way to make that tangible.

I did try this with my son and he said he'd buy an A-list soccer team. But I feel that starts to get into "buying companies that make you MORE money" territory.

At a much smaller scale, it seems that $10mn is so much that you could live in a $2m house (good by any standard in any location), have a stable of cars, have full-time help, fly first class or even private everywhere, and vacation as much as you want. Or am I off by a lot given inflation?

calf · 10 days ago
I assume people with $1bn are playing Civilization IRL, they aren't "spending" the way consumers think of goods.
calf commented on LLMs aren't world models   yosefk.com/blog/llms-aren... · Posted by u/ingve
teleforce · 12 days ago
>LLMs are not "compelled" by the training algorithms to learn symbolic logic.

I think being "compelled" is such a uniquely human trait that machines will never replicate it to a T.

The article specifically mentions this very issue:

"And of course people can be like that, too - eg much better at the big O notation and complexity analysis in interviews than on the job. But I guarantee you that if you put a gun to their head or offer them a million dollar bonus for getting it right, they will do well enough on the job, too. And with 200 billion thrown at LLM hardware last year, the thing can't complain that it wasn't incentivized to perform."

If it's not already evident, the LLM is in itself, by definition, a limited stochastic AI tool, and its distant cousins are deterministic logic, optimization, and constraint programming [1],[2],[3]; a minimal sketch of that deterministic side follows the references below. Perhaps one of the two breakthroughs the author was predicting will be in this deterministic domain, assisting the LLM, and it will be a hybrid approach rather than purely LLM.

[1] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:

https://www.youtube.com/live/TknN8fCQvRk

[2] "We Really Don't Know How to Compute!" - Gerald Sussman - MIT (2011) [video]:

https://youtube.com/watch?v=HB5TrK7A4pI

[3] Google OR-Tools:

https://developers.google.com/optimization

[4] MiniZinc:

https://www.minizinc.org/
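
To make the contrast concrete, here is a minimal sketch of that deterministic side using Google OR-Tools' CP-SAT solver from [3]. The variables and constraints are made up purely for illustration, but the point stands: unlike a sampled LLM answer, anything the solver returns provably satisfies every constraint.

    # Purely illustrative toy model: find digits x, y, z subject to a few
    # hard constraints.  Unlike a sampled LLM answer, anything CP-SAT
    # returns provably satisfies every constraint.
    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    x = model.NewIntVar(0, 9, "x")
    y = model.NewIntVar(0, 9, "y")
    z = model.NewIntVar(0, 9, "z")

    model.Add(x + 2 * y + 3 * z == 20)  # linear equality constraint
    model.Add(x < y)                    # ordering constraint
    model.AddAllDifferent([x, y, z])    # global constraint

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print(solver.Value(x), solver.Value(y), solver.Value(z))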

calf · 11 days ago
And yet there are two camps on the matter. Experts like Hinton disagree; others agree.
calf commented on LLMs aren't world models   yosefk.com/blog/llms-aren... · Posted by u/ingve
yosefk · 14 days ago
I'm not saying that LLMs can't learn about the world - I even mention how they obviously do it, even at the learned embeddings level. I'm saying that they're not compelled by their training objective to learn about the world and in many cases they clearly don't, and I don't see how to characterize the opposite cases in a more useful way than "happy accidents."

I don't really know how they are made "good at math," and I'm not that good at math myself. With code I have a better gut feeling of the limitations. I do think that you could throw them off terribly with unusual math questions to show that what they learned isn't math, but I'm not the guy to do it; my examples are about chess and programming, where I am more qualified to do it. (You could say that my question about the associativity of blending and how caching works sort of shows that it can't use the concept of associativity in novel situations; not sure if this can be called an illustration of its weakness at math.)
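
As a concrete illustration of the associativity point (assuming the blending in question is standard premultiplied-alpha "over" compositing, which may not be the exact setup in the post): the operator is associative, and that property is exactly what lets a compositor cache a partial blend like "b over c" and reuse it under any top layer.

    # Sketch assuming premultiplied-alpha "over" compositing; each layer is
    # (premultiplied_color, alpha) for a single channel.
    def over(top, bottom):
        c_t, a_t = top
        c_b, a_b = bottom
        return (c_t + (1.0 - a_t) * c_b, a_t + (1.0 - a_t) * a_b)

    a, b, c = (0.30, 0.5), (0.20, 0.4), (0.10, 0.8)

    # Associativity is what lets a compositor cache (b over c) once and
    # reuse it under any top layer instead of re-blending from scratch.
    print(over(over(a, b), c))   # (0.43, 0.94)
    print(over(a, over(b, c)))   # (0.43, 0.94), equal up to rounding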

calf · 12 days ago
But this is parallel to saying LLMs are not "compelled" by the training algorithms to learn symbolic logic.

Which says to me that there are two camps on this, and the jury is still out on this and all related questions.

calf commented on Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens   arstechnica.com/ai/2025/0... · Posted by u/blueridge
Insanity · 13 days ago
Link to the talk?
calf · 12 days ago
It was a Royal Institution public lecture, "Will AI outsmart human intelligence? - with 'Godfather of AI' Geoffrey Hinton", https://www.youtube.com/watch?v=IkdziSLYzHw

Ultimately I somewhat disagreed with some of Hinton's points in this talk, and after some thought I came up with specific reasons/doubts; yet at the same time, his intuitive explanations helped shift my views somewhat as well.

calf commented on Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens   arstechnica.com/ai/2025/0... · Posted by u/blueridge
nyrikki · 13 days ago
A) Hinton is quite vocal about desiring to be an outsider/outlier as he says it is what lets him innovate.

B) He is also famous for his Doomerism, which often depends on machines doing "reasoning".

So...it's complicated, and we all suffer from confirmation bias.

calf · 12 days ago
This is sloppy; I was asking about scientific consensus from the perspective of the prior commenter as a conference-goer. I am not asking for opinions bordering on ad hominem attacks on Hinton or any other scientist, so please refrain from that style of misinformation.
