The big unsolved problem in QC is the lifetime of the quantum state before decoherence. Imagine a computer where you can execute a maximum of 1,800 commands (IBM Heron) and then it is broken. That is the status of QC at the moment. A QC might store terabytes of data and do searches in O(1), but there is no way (at the moment) to upload a terabyte database to the QC. What we need is a quantum processor that stays coherent for hours or days; what we have is milliseconds.
Just my two cents.
edit: IBM's roadmap shows a QC with 1 billion commands (gates) after 2033. With that machine they could (in principle) upload a 100 MB database and do searches.
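A rough sanity check on that figure (a back-of-envelope sketch, assuming on the order of one gate per classical bit loaded, which is a loose optimistic assumption for state preparation without qRAM):

    # Back-of-envelope: gates to load a 100 MB database,
    # assuming ~1 gate per classical bit (a big assumption).
    db_bytes = 100 * 10**6
    bits = db_bytes * 8
    print(f"~{bits:.0e} gates")  # ~8e+08, i.e. about a billion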
The bigger question is exactly how many qubits you can keep arbitrarily entangled on your preferred time scale. A linear increase in the number of qubits gives you an exponential increase in capability (compared to a classical computer) for problems amenable to the quantum approach. So if adding qubits turns out to be exponentially difficult, QC will not amount to much, since you could've done the same thing in a classical simulation. If it can be done more or less arbitrarily at non-exponential cost, it's a true asymptotic change in the kinds of problems QC can address.
It's not a big mystery how this will get accomplished, however, which is why participants in the field seem hopeful (if modest and tempered). The theory of quantum error correction is rich and pretty well developed. It's just that engineering a system with enough runway to be error-corrected requires a lot of development and innovation in a lot of directions: qubit design and fabrication, quantum compilers, rapid experimental procedures, etc.
Edit to edit: Thinking of gate depths as permitting such-and-such-megabyte databases to be "uploaded" isn't really a good or accurate metric, in my opinion.
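To put a number on the exponential claim above: simulating n qubits classically means storing 2^n complex amplitudes, so every added qubit doubles the memory. A minimal sketch:

    # Classical memory for an n-qubit state vector:
    # 2**n amplitudes at 16 bytes each (complex128).
    for n in (30, 40, 50):
        gib = (2**n * 16) / 2**30
        print(f"{n} qubits -> {gib:,.0f} GiB")
    # 30 -> 16 GiB, 40 -> 16,384 GiB, 50 -> ~16.8 million GiB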
I'm a bit biased, but non-superconducting modalities have considerably better coherence times. Neutral atoms, for example, are on the order of seconds. They have other constraining issues, but in theory the coherence times are better.
The roadmap of IonQ has them producing 64 logical qubits with error correction within 2025. Whether or not they succeed, I think time will tell, as they've stopped publishing and a cofounder stepped down to return to teaching and research.
Quantinuum, the Honeywell merger, expects billions in revenue in 2026. Honeywell gave up on transmons and merged its effort, along with a $260M investment, into Cambridge Quantum.
Speaking of transmons, Rigetti is facing delisting.
IBM is still going hard on transmons and selling mainly to universities.
Google had Sycamore, but we're still waiting to see whether they've solved the cascading errors inherent to scaling.
Microsoft invested heavily in Majorana fermions for compute, which might not even exist. They are not an authority on the physics.
IBM's quantum roadmap[0] also looks... curious. There's some modest scaling until 2028. Then there are sudden orders-of-magnitude jumps in the following years. Perhaps that's backed by technology they have in the labs and plan to have ready by then, but without details it looks too good to be true.
It means that they are willing to claim to the public that there is a business case for their technology providing billions in value in that year. They do not have the bookings.
I don't think it's just skeptics. AFAIK most researchers (other than those trying to attract investment) have been very clear that we are still a ways off from practical quantum computing, and even then the applications of QC are limited to specific domains.
I agree with this. Unimaginable hype was mostly from those with an obvious incentive to raise money, to put themselves on a shortlist for a Nobel, and all that. Most of the ground-floor researchers and experimentalists have been extraordinarily realistic, though sometimes passive or quiet since there's an incentive to not bite the hand that feeds them.
It also doesn't help that the giant PR machines of IBM, Google, IonQ, et al. have been commanding the narrative. At this point you'd think transmons, ions, and atoms are the only commercially/at-scale interesting options. :)
We already have options that are quantum-resistant (just not as well tested as we would like). There will probably be a short intermediate period where QC can break, say, 1024-bit RSA keys but not 4096-bit ones. The moment that happens, everyone will switch to other algorithms very quickly.
It's not like it's all that different from the situation with MD5 or the current situation with SHA-1. The world survived. A few people got hacked, but that was very much the exception, and mostly software migrated to new algorithms.
Personally, I think quantum simulation is a much more interesting application than factoring.
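On the key-size point above: known circuits for Shor scale only linearly in qubit count with modulus size (e.g. Beauregard's construction uses about 2n+3 logical qubits for an n-bit modulus), so a sketch of the gap between key sizes, ignoring the large error-correction overhead:

    # Logical qubits for Shor on an n-bit RSA modulus,
    # using the ~2n+3 figure from Beauregard's construction.
    for n in (1024, 2048, 4096):
        print(f"RSA-{n}: ~{2 * n + 3} logical qubits")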
General purpose QC isn't going to happen all that soon, and may always be faced with scaling issues.
But we will see QC in a very specific application before the decade is out.
It's kind of a perfect marriage between ML and QC. If you want stochastic outputs and aren't going to know what's going on at each step in the network anyway, the issues of error correction around measurement aren't nearly as crippling as they are for general-purpose QC.
While initially the work will be converting from classical to optoelectronic hardware to match current operations, such as MIT's work this year, once optoelectronic hardware is more widely available I suspect we'll see ML algorithms developed that would only work in photonic networks and fully exploit the quantum properties therein.
I am a researcher in quantum information and this seems highly unlikely to me.
Quantum computers, as far as I can tell, get significant improvements over classical computers only for quite specific, highly mathematically structured problems, like the discrete log problem or simulating other quantum systems.
I don't think it's likely that there will be substantial "quantum advantage" in the machine learning/AI area.
Don't greyed out comments mean someone is downvoting you? What's going on in this thread, this is the second comment of yours that's factually far better than the preceding one but greyed out.
I agree with you; quantum AI has only ever been people shouting "quantum" at black-box neural nets, as if that had ever produced a viable algorithm yet...
I'm not a researcher here, but I'm pretty sure Bayesian neural networks are realizable with quantum computers, which is a substantial "quantum advantage".
It's not really any different from the current state of models, it's just that moving the same constraints to a different hardware foundation opens up significant performance gains.
"While initially the work will be converting from classical to optoelectronic hardware for matching current operations, such as MIT's work this year,"
I'm pretty sure you mean Dirk Englund's photonic matrix multiplication. Also, you should do better at explaining what you mean, because if I weren't already very familiar with this I would have no clue what you're talking about.
I got curious about quantum computers so I ordered and read (most of) Quantum Computation and Quantum Information: 10th Anniversary Edition.
It pretty much proved to me that quantum computing is at best 100+ years away or at worst a pipe dream fantasy. Until decoherence at scale is solved there is no way quantum computing will be useful beyond current computing abilities. There is not even a hint that this problem will be solved anytime soon.
I co-authored that book (written in the late 1990s). I don't know when quantum computing will happen - I only follow developments very, very loosely now - but think you're being much too pessimistic. It is a very (very!) challenging problem, and it's still early days, but there's also been steady and impressive progress for many years.
How do people know that real-world QC is even possible?
I understand enough nuclear physics and quantum physics to see that fusion is mostly a technical/engineering problem, while QC is wildly speculative.
I don’t see how you can read and understand Nielsen and Chuang in one sitting, unless you are already a quantum computation theorist. I also don’t see how reading what is essentially an algorithms textbook can lead you to develop an informed opinion about the state of quantum computer engineering…
It’s like saying, “I was curious about how computer software works, so I ordered and read CLRS, and I don’t think faster computers are anywhere on the horizon in 100 years…”
It was not one sitting... That's a textbook; it took a while, probably over a year. I was also learning some of the math involved in another course along the way. Also, my SO is a physicist, so I had some help.
The theory is great. The problem is that it all hinges on a scientific breakthrough that has not happened yet, and I don't see it happening soon. Just my not-totally-uneducated opinion. I have no horse in the race; I think the people claiming it will work "soon" are being a bit dishonest with themselves as well as everyone else. For all we know, it will take several other scientific breakthroughs to get all the parts needed. I personally think that is the case, which is why I say it will not happen in our lifetime.
Except that published research demonstrates continual improvements in coherence times and implementations of error-correction protocols. So "not even a hint" is at best hyperbolic, or at worst just wrong.
But it is solved theoretically, by quantum error correction. I'm not denying there are plenty of problems, but can you justify that decoherence is the limiting problem for architectures other than superconducting qubits?
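The core idea is old and simple: redundancy plus a majority vote suppresses errors quadratically for small error rates. A classical toy version of the 3-bit repetition code (real QEC measures syndromes without reading out the data, but the counting works the same way):

    import random

    def repetition_decode(bit, p_flip=0.1):
        # Send three copies through a noisy channel, flipping each
        # independently with probability p_flip, then majority-vote.
        copies = [bit ^ (random.random() < p_flip) for _ in range(3)]
        return int(sum(copies) >= 2)

    trials = 100_000
    errors = sum(repetition_decode(0) != 0 for _ in range(trials))
    print(errors / trials)  # ~0.028 = 3p^2 - 2p^3, vs 0.1 uncoded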
To expand on this - I went into my first AI class in college thinking it'd be amazing. I left with a very bland taste, having solved n-queens a bunch of different ways and written genetic algorithms to "evolve" images.
I read Kurzweil before this. I had thought we were decades away from digital immortality. Taking the AI course and algorithm analysis was quite disappointing. Reality set in. Things are harder than we hand wave away.
I then started taking bio, read papers on neuroscientists decoding visual signals from mammalian LGN, and went deep down the biology rabbit hole. That only further convinced me that Kurzweil was wishfully wrong. Here are systems more complex than anything previously described to me in my entire life.
But now we're confronted with a pace of innovation that is frankly quite humbling. Things I had written off no longer seem impossible. I'm intensely excited for the future.
For all the people saying this is a scam/waste of money/always 10 years away, I’m curious what you envision the funding model for this kind of speculative tech to be. The government ceased to be the bankroll for this kind of thing at the end of the Cold War, and for better or for worse, VC is where the money is right now. The marketing BS with stuff like quantum, fusion, carbon capture, etc. is simply a cost of doing business in this environment.
If it’s such a travesty that VCs are making bets on tech like this, how then do you fund long term R&D projects with a high risk of failure? Isn’t that the point?
There is general public funding for all kinds of science, including physics.
I never got the intense interest in quantum computing. My pet theory is that (CS-educated) rich VCs were taken in by the word 'computing', but in reality QC is a theoretical sub-branch of quantum physics. I should test my theory by publishing my tomato-computing theory.
Anything being done in the public sector (e.g. academia or government) is pocket change by comparison, and so bound up with red tape these days that most truly high risk/reward bets are passed over. That, and you’re at the mercy of congressional budget cycles and the usual grant writing hunger games. If innovation emerges from that environment, it’s in spite of it, not because of it.
If private capital wants to fund deep tech, more power to them. If they want to forego due diligence and fund tomato computers, that’s (literally) their business.
Moore's law was enabled by lithography, which gave a pretty clear path forward on how to shrink transistors over time. AFAIK there is no similar enabling technology for qubits. Maybe that will change, but until then I don't see much hope for the future of quantum computers.
I get the impression that cavity-QED is a similar path for QC that lithography was for classical compute. But I'm not at all an expert, just a curious and interested observer.
You only need a linear increase in the number of error-corrected qubits to get exponential gains on the limited subset of problems they can improve, whereas Moore's law has provided an exponential increase in the number of elements in classical computers.
So Moore's law isn't needed for something like Moore's-law gains in quantum computing; a linear "Moore's law" will do, provided quantum error correction doesn't hit scaling limits for the whole ensemble.
That might mean they only keep pace with classical computing if Moore's law continues to hold there, but eventually that hits the Landauer limit without new physics (or possibly reversible computing).
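For reference, the Landauer limit is concrete: erasing one bit at temperature T dissipates at least kT ln 2. A quick sketch of the number at room temperature:

    import math

    k_B = 1.380649e-23             # Boltzmann constant, J/K
    T = 300                        # room temperature, K
    e_min = k_B * T * math.log(2)  # minimum energy to erase one bit
    print(f"{e_min:.2e} J per bit")  # ~2.87e-21 J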
IMO this is very wrong. Most real applications of quantum computers will need thousands of error-corrected qubits (i.e., millions of physical qubits). At the current growth rate of <100 qubits per year, if growth stayed linear it would be about 10,000 years before we had quantum computers solving real problems.
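The arithmetic behind that estimate, taking the comment's own numbers at face value:

    # Millions of physical qubits needed, at a linear
    # growth rate of ~100 qubits per year (per the parent).
    target_physical = 1_000_000
    per_year = 100
    print(target_physical / per_year, "years")  # 10000.0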
The IBM roadmap numbers (source: https://www.tomshardware.com/tech-industry/quantum-computing...):
2025: 156 qubits, 5K gates
2028: 156 qubits, 15K gates
2029: 200 qubits, 100M gates
2033: 2,000 qubits, 1B gates
The jump from 15K to 100M gates looks fishy to me. Maybe I'm wrong, but I doubt it will work that way.
Nope. Grover's algorithm allows only O(sqrt(n)) search.
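Concretely, Grover's algorithm finds a marked item among N with about (pi/4)*sqrt(N) oracle queries, versus ~N/2 classically on average: a quadratic speedup, not a constant-time lookup. A small sketch:

    import math

    # Oracle queries: classical average vs Grover's optimum.
    for n in (10**6, 10**12):
        grover = math.ceil(math.pi / 4 * math.sqrt(n))
        print(f"N={n:.0e}: classical ~{n // 2:.1e}, Grover ~{grover}")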
Edit: QuEra is having its moment with the DARPA announcement and the 48-logical-qubit accomplishment in the lab.
Or a few seconds, but with better error correction.
https://www.nature.com/articles/s41586-022-05434-1
[0] https://newsroom.ibm.com/2023-12-04-IBM-Debuts-Next-Generati...
Like who? I think we're making up a person.
What other options are there?
Breaking crypto alone would make it worth its weight in gold.
You have a way with words, I'm already more at ease with the perspective of a schizophrenic AGI ...
I'm talking about this one: https://dl.acm.org/doi/10.1145/3603269.3604821
National labs in the US are at the forefront of quantum research in general and quantum computing research in particular.