Readit News
JBits commented on Double-slit experiment holds up when stripped to its quantum essentials   news.mit.edu/2025/famous-... · Posted by u/ColinWright
ziofill · 22 days ago
Quantum physicist here. I can only say that reality down there at the quantum level is really really weird. You can get used to it, but forget making sense of it.

A delayed-choice setup is not too dissimilar to a Bell inequality violation experiment. The weirdness there is that you can set things up such that no signal can travel between the systems being measured, and yet the outcomes are more correlated than any classical joint state allows.

So the conclusion is that either locality fails (i.e. it’s not true that outcomes on one side are independent of how you measure the other side) or realism fails (i.e. you can’t assign values to properties before the measurement, or in other words a measurement doesn’t merely “reveal” a pre-existing value: the values pop into existence in a coordinated fashion). Both of these options are crazy, and yet at least one of them must be true.
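The correlation gap can be made concrete with the CHSH quantity. Here is a minimal sketch (my own illustration, not from the comment) using the textbook singlet-state correlation E(a, b) = -cos(a - b): any local-realistic model obeys |S| ≤ 2, while the quantum value reaches 2√2.

```python
import math

def E(a, b):
    # Singlet-state correlation between spin measurements
    # along directions a and b: E(a, b) = -cos(a - b)
    return -math.cos(a - b)

# Standard CHSH measurement angles for Alice (a, a2) and Bob (b, b2)
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, above the classical (local-realist) bound of 2
```

The violation of |S| ≤ 2 is exactly the point where "locality or realism must fail": no assignment of pre-existing ±1 outcomes can reproduce these correlations.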

JBits · 22 days ago
My impression of reality is the opposite: the quantum world makes perfect sense, while it's the emergence of the classical world that is unfathomable. The crazy "pop into existence" part is still incomprehensible, so I guess it's essentially the same.
JBits commented on Hierarchical Reasoning Model   arxiv.org/abs/2506.21734... · Posted by u/hansmayer
liamnorm · a month ago
The Universal Approximation Theorem.
JBits · a month ago
I don't see how that changes anything. By this logic, there's no need for CoT reasoning at all, as a single pass should be sufficient. I don't see how that proves that CoT increases capabilities.
JBits commented on Hierarchical Reasoning Model   arxiv.org/abs/2506.21734... · Posted by u/hansmayer
Oras · a month ago
and the training was only on Sudoku. Which means they need to train a small model for every problem that currently exists.

Back to ML models?

JBits · a month ago
I would assume that training an LLM would be unfeasible for a small research lab, so isn't tackling small problems like this unavoidable? Given that current LLMs have clear limitations, I can't think of anything better than developing better architectures on small test cases; a company can then try scaling them later.
JBits commented on Hierarchical Reasoning Model   arxiv.org/abs/2506.21734... · Posted by u/hansmayer
malcontented · a month ago
I appreciate the connections with neurology, and the paper itself doesn't ring any alarm bells. I don't think I'd reject it if it fell to me to peer review.

However, I have extreme skepticism when it comes to the applicability of this finding. Based on what they have written, they seem to have created a universal (maybe; adaptable at the very least) constraint-satisfaction solver that learns the rules of the constraint-satisfaction problem from a small number of examples. If true (I have not yet had the leisure to replicate their examples and try them on something else), this is pretty cool, but I do not understand the comparison with CoT models.

CoT models can, in principle, solve _any_ complex task. This needs to be trained on a specific puzzle, which it can then solve: it makes no pretense to universality. It isn't even clear that it is meant to be capable of adapting to any given puzzle. I suspect it is not, just based on what I have read in the paper and on the indicative choice of examples they tested it against.

This is kind of like claiming that Stockfish is way smarter than current state of the art LLMs because it can beat the stuffing out of them in chess.

I feel the authors have a good idea here, but that they have marketed it a bit too... generously.

JBits · a month ago
> CoT models can, in principle, solve _any_ complex task.

What is the justification for this? Is there a mathematical proof? To me, CoT seems like a hack to work around the severe limitations of current LLMs.

JBits commented on The Rise of Whatever   eev.ee/blog/2025/07/03/th... · Posted by u/cratermoon
sunnybeetroot · 2 months ago
It is a nice read, but something tells me it's AI-generated due to the frequent em dashes, so I wouldn't place all bets on it being entirely human-written.

Edit: I apologise; the author has pre-GPT posts that use em dashes, so it's likely part of their writing style.

JBits · 2 months ago
Some people just like em dashes—myself included. You can find em dashes in articles written by the author before LLMs became a thing.
JBits commented on Hilbert's sixth problem: derivation of fluid equations via Boltzmann's theory   arxiv.org/abs/2503.01800... · Posted by u/nsoonhui
andyfilms1 · 2 months ago
Interesting; her videos have never struck me as contrarian for the sake of it. She seems genuinely frustrated at the lack of substantial progress in physics and the plethora of garbage papers. I imagine it must be annoying to be a physicist and have someone constantly telling you you're not good enough, but that itself is kind of part of the scientific process too.
JBits · 2 months ago
The issue is that many of her videos argue that funding for particle physics should instead go to foundations and interpretations of quantum mechanics, specifically the very research she works on.

This is not helped by the fact that she pushes an interpretation of quantum mechanics viewed as fringe at best. Her takes on modern physics typically seem disingenuous or biased.

JBits commented on Quantum Computation Lecture Notes (2022)   math.mit.edu/~shor/435-LN... · Posted by u/ibobev
rvz · 2 months ago
Well, right now I am very skeptical. We have given quantum computing plenty of time (decades), so unless someone can convince me otherwise, it looks like a scam.

Right now it hasn't amounted to anything useful, other than Shor's algorithm, 'experiments', promises, and applications done no better than on a GPU rack today.

JBits · 2 months ago
There are multiple competing quantum computing hardware platforms, so you haven't given all of them the same length of time.
JBits commented on Inigo Quilez: computer graphics, mathematics, shaders, fractals, demoscene   iquilezles.org/articles/... · Posted by u/federicoponzi
ykl · 3 months ago
I had the incredible good fortune to cross paths with iq at Pixar; I was an intern while he was developing the Wondermoss procedural vegetation system for Brave. A bunch of us interns were already fans of his work from the demoscene world and upon learning this, he was kind enough to put together a special lecture for the interns on procedural graphics and the work he was doing for Wondermoss. That was one of the best and most mind-blowing lectures I've ever seen: for every concept he would discuss in the lecture, he would live-code a demo in front of us (this was before ShaderToy was a thing, so live-coding was something nobody had ever really seen before), and halfway through the lecture he revealed that the text editor he was using was built on top of his realtime live editing graphics system and therefore could be live-coded as well. One of the things he showed us was an early version of what eventually became the BeautyPi tech demo [0]; keep in mind that this still looks incredible today and iq was demoing this for us interns in realtime 14 years ago.

Wondermoss was a spectacular piece of tech. Every single forest scene and every single piece of vegetation in Brave is made using Wondermoss, and it was all procedural: when you'd open up a shot from Brave in Menv30, you'd see just the characters and groundplane and very little else, and then you'd fire up the renderer and a huge vast lush forest would appear at rendertime. The even cooler thing was that since Brave was still using REYES RenderMan, iq took advantage of the REYES algorithm's streaming behavior to make Wondermoss not only generate but also discard vegetation on-the-fly, meaning that Wondermoss used vanishingly little memory. If I remember correctly, Wondermoss only added like a few dozen MB of memory usage at most to each render, which was insane since it was responsible for like 95% of the visual complexity of each frame. One fun quirk of Wondermoss was that the default random seed was iq's phone number, and that remained for quite a number of years, meaning his phone number is forever immortalized in pretty much all of Pixar's films from the 2010s.

iq is one of the smartest and most inspiring people I've ever met.

[0] https://www.youtube.com/watch?v=_9CZ9UgrcZU

JBits · 3 months ago
What sort of tech/techniques did Wondermoss use? Was it generating polygons?
JBits commented on Matt Godbolt sold me on Rust by showing me C++   collabora.com/news-and-bl... · Posted by u/LorenDB
steveklabnik · 4 months ago
The problem is, everything should have been there since day 1. It’s still unclear which API Rust should end up with, even today, which is why it isn’t stable yet.
JBits · 4 months ago
Looking forward to the API when it's stabilised. Have there been any updates on the progress of allocators, or this general area of Rust, over the past year?
JBits commented on Fujitsu and RIKEN develop world-leading 256-qubit superconducting quantum computer   fujitsu.com/global/about/... · Posted by u/donutloop
rtrgrd · 4 months ago
To people who do quantum computing: are qubits (after error correction) functionally equivalent, and hence directly comparable across quantum computers, and is that a useful objective measure of progress? Or is it more an easy-to-mediatise stat?
JBits · 4 months ago
It matters, but qubits are not functionally equivalent between different architectures.

Since no one has many qubits, people typically compare physical qubits rather than virtual qubits (the error-corrected ones).

The other key figures of merit are the 1-qubit and 2-qubit gate fidelities (basically the success rates). The 2-qubit gate is typically more difficult and has a lower fidelity, so people often compare qubits by looking only at the 2-qubit gate fidelity. Every 9 added to the 2-qubit gate fidelity is expected to roughly decrease the ratio of physical to virtual qubits by an order of magnitude.
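As a rough illustration of why each extra 9 matters (a toy model of my own, not from the comment): if every 2-qubit gate independently succeeds with probability f, a circuit of N such gates succeeds with probability about f^N, so each added 9 buys roughly 10x more gates before errors dominate.

```python
import math

def gate_budget(fidelity, success_target=0.5):
    # Toy model: a circuit of N gates succeeds with probability
    # fidelity**N; return the largest N keeping success >= target.
    return int(math.log(success_target) / math.log(fidelity))

for f in (0.99, 0.999, 0.9999):
    print(f, gate_budget(f))  # each extra 9 gives roughly 10x more gates
```

The same "more nines, dramatically less overhead" intuition is behind the claim about the physical-to-virtual qubit ratio: better raw gates mean less error correction machinery per logical qubit.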

In architectures where qubits are fixed in place and can only talk to their nearest neighbours, moving information around requires swap gates, which are made up of the elementary 1- and 2-qubit gates. Some architectures have mobile qubits and all-to-all connectivity, so their proponents hope to avoid swap gates, considerably reducing the number of 2-qubit gates required to run an algorithm and thus leaving fewer errors to deal with.
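A back-of-the-envelope illustration of that routing overhead (my own toy cost model, not from the comment): on a 1-D nearest-neighbour chain, each SWAP decomposes into 3 CNOTs, so a single long-range 2-qubit gate gets expensive quickly.

```python
def cnot_cost_on_line(i, j):
    # Toy model: to apply a CNOT between qubits i and j on a
    # nearest-neighbour chain, swap one qubit until it is adjacent
    # to the other (|i - j| - 1 SWAPs), apply the CNOT, then swap
    # back. Each SWAP costs 3 CNOTs; all-to-all connectivity costs 1.
    d = abs(i - j)
    return 1 if d <= 1 else 1 + 2 * 3 * (d - 1)

print(cnot_cost_on_line(0, 1))   # adjacent qubits: 1 CNOT
print(cnot_cost_on_line(0, 10))  # 55 CNOTs once routing is included
```

Real compilers route more cleverly than this (and need not swap back), but the linear-in-distance blow-up is why connectivity features so heavily in architecture comparisons.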

Some companies, particularly ones on younger architectures, but perhaps with much better gate fidelities, argue that their scheme is better by virtue of being more "scalable" (having more potential in future).

It is expected that in the future the overall clock speed of the quantum computer will matter, as the circuits we ultimately want to run are expected to be massively long. Since we're far from the point where this matters, clock speed is rarely brought up.

In general, different architectures have different advantages. With different proponents having different beliefs of what matters, it was once described to me as each architecture having their own religion.

TL;DR: the two key stats are number of qubits and 2-qubit gate fidelity.

u/JBits · karma: 167 · cake day: November 19, 2021