The title of this post changed as I was reading it. "It looks like the 'JVG algorithm' only wins on tiny numbers" is a charitable description. The article is Scott Aaronson lambasting the paper and shaming its authors as intellectual hooligans.
Agree. Scott is exactly correct when he just straight calls it crap.
It's inaccurate to say it wins on small numbers because on small numbers you would use classical computers. By the time you get to numbers that take more than a minute to factor classically, and start dreaming of quantum computers, you're well beyond the size where you could tractably do the proposed state preparation.
What do you mean? The original 2019 supremacy experiment was eventually simulated, as better classical methods were found, but the followups are still holding strong (for example [4] and [5]).
There was recently a series of blog posts by Dominik Hangleiter summarizing the situation: [1][2][3].
The reason people pay attention to him is that he does a good job publicizing both positive and negative results, and accurately categorizing which are bullshit.
While the idea that one can "precompute the x^r mod N's on a classical computer" sounds impractical, there is a subset of problems where this might be valid. According to computational complexity theory, there's a class of algorithms called BQP (bounded-error quantum polynomial time).
Shor's algorithm is part of BQP. Is the JVG algorithm part of BQP, even though it utilizes classical components? I think so.
I believe the precomputation step is the leading factor in the algorithm's time complexity, so it isn't technically of lower complexity than Shor's. If I had to speculate, there will eventually be another class in quantum computational complexity theory that accommodates precomputation using classical computing.
I welcome the work, and after a quick scroll through the original paper, I think there is a great amount of additional research that could be done in this computational complexity class.
There is a genuinely interesting complexity class called BQP/poly, which is pronounced something like “bounded-error quantum polynomial time with classical advice” (add some more syllables for a complete pronunciation).
The JVG algorithm is not a high quality example of this or really anything else. If you think of it as “classical advice”, then it fails, because the advice depends on the input and not just the size of the input. If you think of it as precomputation, it’s useless, because the precomputation involved already fully solves the discrete log problem. And the JVG paper doesn’t even explain how to run their circuit at respectable sizes without the sheer size of the circuit making the algorithm fail.
It’s a bit like saying that one could optimize Stockfish to run 1000x faster by giving it an endgame table covering all 16-or-fewer-piece positions. Sure, maybe you could, but you also already solved chess by the time you finish making that table.
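To make that "the precomputation already solves it" point concrete, here's a toy sketch (my own Python, nothing resembling the JVG construction; the function name is made up) of why an input-dependent table of modular powers trivializes discrete log: once the table exists, the remaining "quantum" work is a lookup, but building the table is itself an exponential-time brute-force solve.

    # Toy illustration, not the paper's method: tabulating the powers of g mod p
    # solves discrete log outright, at a cost exponential in the bit length of p.
    def discrete_log_via_full_table(g, h, p):
        # assumes g generates the multiplicative group mod p
        table = {}
        x = 1
        for k in range(p - 1):        # exponential in the bit length of p; this IS the hard part
            table[x] = k
            x = (x * g) % p
        return table[h]               # the "easy" step left over after the precomputation

    # 3 is a generator mod 17, and 3^4 = 81 = 13 (mod 17)
    assert discrete_log_via_full_table(3, 13, 17) == 4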
JVG isn't in BQP; it's exponential time (i.e. worse than factoring without a quantum computer at all). It takes the only step of Shor's algorithm that is faster to run on a quantum computer and moves it to a classical computer.
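For anyone who hasn't seen it spelled out: in textbook Shor, everything except the period finding is cheap classical work. A rough sketch (my own toy Python; the names are made up, and the brute-force period finder below stands in for the quantum subroutine):

    from math import gcd
    from random import randrange

    def find_period_classically(a, n):
        # Brute-force order finding: exponential in the bit length of n.
        # This is the one step Shor's algorithm speeds up on a quantum computer.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_skeleton(n):
        while True:
            a = randrange(2, n)
            d = gcd(a, n)
            if d > 1:
                return d                       # lucky guess shares a factor with n
            r = find_period_classically(a, n)  # <-- the quantum step in real Shor
            if r % 2 == 0:
                f = gcd(pow(a, r // 2, n) - 1, n)
                if 1 < f < n:
                    return f                   # classical post-processing

    print(shor_skeleton(15))  # 3 or 5

Move find_period_classically off the quantum computer and you've kept all the classical parts of Shor while throwing away the only speedup.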
I didn't get the quantum hype last year. At least with AI, you can see it do some impressive things with caveats, and there are bull and bear cases that are both reasonable. The quantum hype train is promising the world, but compared to AI, it's at the linear regression stage.
Quantum computing is cool, but a lot of the people who were hyping it last year were absolute charlatans. They were promising things that quantum computers couldn't even do in theory, let alone next year. Even the more down-to-earth claims were things we are still 10-40 years away from, presented as if they're going to happen next month.
Quantum computers are still cool and worthy of research. It's going to be a very long road, though. Where we are with quantum computers is roughly equivalent to where we were with regular computers in the 1800s.
The hype people just make everything suck and should be ignored.
People get taken by the theoretical coolness and ultimate utility of the idea, and assume it's just a matter of clever ideas and engineering to make it a reality. At some point it becomes mandatory to work on it, because the win would be so big it would make them famous and win them all sorts of prizes and adulation.
QC is far earlier than the "linear regression" stage, because linear regression worked right away when it was invented (and reinvented multiple times, I think). With QC, instead, we have an amazing theory based on our current understanding of physics, the ability to build lab machines that exploit that theory, and some immediate applications if a powerful enough quantum computer were ever built. On the other side, making one that beats a real computer at anything other than toy challenges is a huge engineering challenge, and every time somebody comes up with a QC that does something interesting, it spurs the classical computing folks to improve their results, which can be applied immediately on any number of off-the-shelf systems.
The only things I'm aware of that I consider actual problems it solves are "it breaks classical encryption" and "you may be able to use it to directly model other quantum systems like for protein folding and such".
Everything else I consider pretty silly. "It can improve logistics" - I'm fairly sure computers are already as good as they can be there; what dominates logistics calculations isn't an inability to optimize but the fact that the real world can only conform so closely to any model you build. "It can improve finance" - same deal, really. All the other examples I see cited are problems where we've probably already got running code that is at the noise floor imposed by reality and its stubborn unwillingness to completely conform to plans.
If I had $1 to invest between AI and quantum computing I'd end up rounding the fraction of a cent that should rationally go to quantum computing and put the whole dollar in AI.
By far the most exciting possibility is one that Scott Aaronson has cited, which is: what if quantum computers fail somehow? To put it in simple and unsophisticated terms, what if we could prove that you can't entangle more than 1024 qubits and do a certain amount of calculation with them? What if the universe actually refuses to factor a thousand-digit number? The way in which it fails would inevitably be incredibly interesting.
My understanding is that they factored 15 using a modular exponentiation circuit that presumes knowledge of the factor 3. Factoring 15 with knowledge of 3 is not so impressive. Shor's algorithm has never been run with a full modular exponentiation circuit.
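For comparison, here's roughly what's left to do once the period is baked into the circuit instead of discovered by the hardware (a toy sketch of my own, not the circuit from any of the experiments): two gcds.

    from math import gcd

    N, a = 15, 7
    r = 4  # the period of 7 mod 15, hard-wired rather than found by the machine
    print(gcd(pow(a, r // 2, N) - 1, N),   # 3
          gcd(pow(a, r // 2, N) + 1, N))   # 5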
> (yes, the authors named it after themselves)
The same way the AVL tree is named after its inventors, Georgy Adelson-Velsky and Evgenii Landis... Nothing peculiar about this imho.
This might not be entirely obvious to people outside of academia, but the vast majority (I only weaken the claim from "totality" to guard against unknown instances) of entities that bear the name of humans in the sciences do so because other people decided to call them by that name.
From another view, Adelson-Velsky and Landis called their tree algorithm "an algorithm for the organization of information" (or, rather, they did so in Russian --- that's the English translation). RSA was called "a method" by Rivest, Shamir, and Adleman. Methods/algorithms/numbers/theorems/etc. generally are not given overly specific names in research papers, in part for practical reasons: researchers will develop many algorithms or theorems, but a very small proportion of these are actually relevant or interesting. Naming all of them would be a waste of time, so the names tend to be attached well after publication.
To name something after oneself requires a degree of hubris that is looked down upon in the general academic community; the reason for this is that there is at least a facade (if not an actual belief) that one's involvement in the sciences should be for the pursuit of truth, not for the pursuit of fame. Naming something after yourself is, intrinsically, an action taken in the seeking of fame.
Same with RSA and other things. I think the author's point is that slapping your name on an algorithm is a pretty big move (since, practically, you can only do it a few times max in your life before it would get too confusing), and so it's a gaudy thing to do, especially for something illegitimate.
But also note that naming an algorithm, in and of itself, is fine; it's naming it after yoursel(f,ves) in the initial paper that's a sign of crackpottery.
* Named by: Probably fine but heavily weighted on the grandiosity of the title.
* Named after: Almost certainly fine (unless it's something like "X's Absolute Drivel Faced Garbage That Never Works Because X Kidnapped My Dog And Is A Moral Degenerate Algorithm", obvs.)
* Named by yoursel(f,ves) after yoursel(f,ves): In the initial paper? Heavy likelihood of crackpottery. Years later? Egotistical but strong likelihood of being a useful algorithm.
[1]: https://quantumfrontiers.com/2026/01/06/has-quantum-advantag...
[2]: https://quantumfrontiers.com/2026/01/25/has-quantum-advantag...
[3]: https://quantumfrontiers.com/2026/02/28/what-is-next-in-quan...
[4]: https://arxiv.org/abs/2303.04792
[5]: https://arxiv.org/abs/2406.02501
https://news.ycombinator.com/item?id=47246295
https://news.ycombinator.com/item?id=44608622 - Replication of Quantum Factorisation Records with a VIC-20, an Abacus, and a Dog (2025-07-18, 25 comments)
In my "crackpot index", item 20 says:
20 points for naming something after yourself. (E.g., talking about the "The Evans Field Equation" when your name happens to be Evans.)
> By doing so, we aim to provide a novel paradigm [...]
also made me think of item 19 on your list:
> 10 points for claiming that your work is on the cutting edge of a "paradigm shift".
I'm sad though that you didn't call it the "Baez crackpot index"...