Readit News
vessenes · a year ago
I’m a quantum dabbler so I’ll throw out an armchair reaction: this is a significant announcement.

My memory is that 256-bit keys in non-quantum-resistant algorithms need something like 2500 qubits or so, and by that I mean generally useful, programmable qubits. Showing a bit over 100 qubits with stability (meaning the information survives long enough to be read, and is general enough to run some benchmarks on) is something many people thought might never come.

There’s a sort of religious reaction people have to quantum computing: it breaks so many things that I think a lot of people just like to assume it won’t happen: too much in computing and data security will change -> let’s not worry about it.

Combined with the slow pace of physical research progress (Shor's algorithm for quantum factoring dates to the mid-90s) and companies selling snake oil, it's easy to ignore.

Anyway seems like the clock might be ticking; AI and data security will be unalterably different if so. Worth spending a little time doing some long tail strategizing I’d say.

qnleigh · a year ago
You need to distinguish between "physical qubits" and "logical qubits." This paper creates a single "first-of-a-kind" logical qubit with about 100 physical qubits (using Surface Code quantum error correction). A paper from Google in 2019 estimates needing ~20 million physical qubits ("How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits" - https://arxiv.org/abs/1905.09749), though recent advances probably brought this number down a bit. That's because to run Shor's algorithm at a useful scale, you need a few thousand very high quality logical qubits.
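
To make the overhead concrete, here's a back-of-the-envelope sketch in Python of the textbook surface-code scaling (the threshold p_th and prefactor A below are illustrative assumptions, not numbers from the paper):

```python
# Rough surface-code overhead estimate (illustrative constants, not from the paper).
# Standard scaling: logical error rate p_L ~ A * (p / p_th)^((d + 1) / 2),
# with roughly 2 * d^2 physical qubits (data + measure) per logical qubit.

def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Approximate logical error rate of a distance-d surface code."""
    return A * (p / p_th) ** ((d + 1) / 2)

def physical_qubits(d):
    """Approximate physical qubits for one distance-d logical qubit."""
    return 2 * d ** 2

p = 1e-3  # assumed physical error rate, ~10x below threshold
for d in (3, 7, 15, 25):
    print(f"d={d:2d}: ~{physical_qubits(d):4d} physical qubits, "
          f"p_L ~ {logical_error_rate(p, d):.1e}")
```

A few thousand logical qubits at the ~1e-12 error rates Shor needs then lands you in the millions of physical qubits before routing and magic-state overheads, which is the ballpark of the 2019 estimate.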

So despite this significant progress, it's probably still a while until RSA is put out of a job. That being said, quantum computers would be able to retroactively break any public keys that were stored, so there's a case to be made for switching to quantum-resistant cryptography (like lattice-based cryptography) sooner rather than later.

rhaps0dy · a year ago
Thank you for the explanation. It's still an upwards update on the qubit timelines of https://arxiv.org/pdf/2009.05045 (see Fig. 7), but not by an insane amount. We've realized their 95% expectation of qubit progress (1 logical qubit) for 2026, in 2024.92 instead.

Which to be clear is quite a bit faster than expected in 2020, but still within the realm of plausible stuff.

prmoustache · a year ago
> so there's a case to be made for switching to quantum-resistant cryptography (like lattice-based cryptography) sooner rather than later.

This.

People seem to think that because something is end-to-end encrypted it is secure. They don't seem to grasp that traffic and communication dumped/recorded now in encrypted form could be used against them decades later.

rhubarbtree · a year ago
This is correct. I worked in quantum research a little.
kortilla · a year ago
> quantum computers would be able to retroactively break any public keys that were stored

Use a key exchange that offers perfect forward secrecy (e.g. Diffie-Hellman) and you don't need to worry about your RSA private key eventually being discovered.
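
A toy sketch of the ephemeral-key idea (tiny insecure parameters, purely illustrative; and note that Shor's algorithm breaks discrete logs too, so classical (EC)DH only protects against the long-term RSA key leaking, not against a quantum adversary):

```python
# Toy finite-field Diffie-Hellman with ephemeral secrets.
# Numbers are illustrative only; real DH uses ~2048-bit groups.
import secrets

p, g = 4294967291, 5               # small prime (2^32 - 5) and toy generator

a = secrets.randbelow(p - 2) + 1   # Alice's per-session secret, then discarded
b = secrets.randbelow(p - 2) + 1   # Bob's per-session secret, then discarded

A, B = pow(g, a, p), pow(g, b, p)    # public values, sent in the clear
assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same session key
```

Because a and b are thrown away after the session, a later compromise of a long-term signing key reveals nothing about past session keys.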

JanisErdmanis · a year ago
The required number of qubits to execute Shor’s algorithm is way larger than 2500 qubits as the error ceiling for logical qubits must decrease exponentially with every logical qubit added to produce meaningful results. Hence, repeated applications of error correction or an increase in the surface code would be required. That would significantly blow up the number of physical qubits needed.
adastra22 · a year ago
He’s quoting the number of logical qubits (which is 1024 IIRC, not 2500), after error correction.

ETA: Wikipedia says 2330 qubits, but I'm not sure it's citing the most recent work: https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#ci...

JanisErdmanis · a year ago
Well, this is embarrassing. I just realised I had wrongly interpreted the result in [1]. I made an error about how Shor's algorithm encodes the numbers, wrongly assuming that numbers are encoded into the quantum state space, which has size 2^(2^n), whereas instead one bit is encoded into one qubit, which is also more practical.

The result should be interpreted directly, with the error rate for logical qubits decreasing as ~n^(-1/3). This, in turn, means that factorisation of a 10000-bit number would only require a logical error rate 1/10th of that needed for a 10-bit number. This is practical, given that one can make a quantum computer with around 100k qubits and correct errors on them.

On the other hand, a sibling comment already mentioned the limited connectivity that these quantum computers currently have. That requires repeated application of SWAP gates to get the interactions one needs. I guess this would add a linear overhead to the noise; hence the scaling of the error rate for logical qubits is around ~n^(-4/3). This makes 10000-bit factorisation require a logical error rate 1/10000 that of 10-bit factorisation. Assuming that 10 physical qubits are used to reduce the error by an order of magnitude, that works out to around 400k physical qubits.
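
A quick sanity check of that arithmetic, using only the scalings assumed above:

```python
# Sanity check of the scalings above (exponents are the assumptions
# from this comment, not established results).
n_small, n_large = 10, 10_000

# ~n^(-1/3): required logical error rate shrinks by (n_large / n_small)^(1/3)
print((n_large / n_small) ** (1 / 3))   # 10.0    -> "1/10th"

# ~n^(-4/3) once SWAP overhead is included
print((n_large / n_small) ** (4 / 3))   # 10000.0 -> "1/10000"

# 10 physical qubits per order of magnitude of error suppression,
# 4 orders needed, times 10k logical qubits:
print(10 * 4 * n_large)                 # 400000 physical qubits
```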

[1]: https://link.springer.com/article/10.1007/s11432-023-3961-3

The5thElephant · a year ago
Isn't that what they are claiming is true now? That the errors do decrease exponentially with each qubit added?
thrance · a year ago
The error rates given are still horrendous and nowhere near low enough for the Quantum Fourier Transform used by Shor's algorithm. Taking qubit connectivity into account, a single CX between 2 qubits that are 10 edges away gives an error rate of 1.5%.
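
For intuition, that figure is roughly what you get by compounding per-gate errors along the SWAP chain needed to route the CX (the 0.05% per-gate error below is an assumed number for illustration):

```python
# Compounding two-qubit gate errors along a SWAP chain (illustrative numbers).
p_cx = 0.0005        # assumed error per physical CX gate
edges = 10           # graph distance between the two qubits
cx_per_swap = 3      # one SWAP decomposes into three CX gates

total_cx = edges * cx_per_swap
error = 1 - (1 - p_cx) ** total_cx
print(f"{error:.2%}")  # ~1.5%
```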

Also, the more qubits you have/the more instructions are in your program, the faster the quantum state collapses. Exponentially so. Qubit connectivity is still ridiculously low (~3) and does not seem to be improving at all.

About AI, what algorithm(s) do you think might have an edge over classical supercomputers in the next 30 years? I'm really curious, because to me it's all (quantum) snake oil.

AlexCoventry · a year ago
In addition to that, the absolutely enormous domains that the Fourier Transform sums over (essentially, one term in the sum for each possible answer), and the cancellations which would have to occur for that sum to be informative, mean that a theoretically-capable Quantum Computer will be testing the predictions of Quantum Mechanics to a degree of precision hundreds of orders of magnitude greater than any physics experiment to date. (Or at least dozens of orders of magnitude, in the case of breaking Discrete Log on an Elliptic Curve.) It demands higher accuracy in the probability distributions predicted by QM than could be confirmed by naive frequency tests using the entire lifetime of the universe as their laboratory!

Imagine a device conceived in the 17th century whose intended functionality would require a physical sphere matching a perfect, ideal, geometric sphere in Euclidean space to thousands of digits of precision. We now know that the concept of such a perfect physical sphere is incoherent with modern physics in a variety of ways (e.g., the atomic basis of matter, background gravitational waves). I strongly suspect that the cancellations required for the Fourier Transform in Shor's algorithm to be cryptographically relevant will turn out to be the moral equivalent of that perfect sphere.

We'll probably learn some new physics in the process of trying to build a Quantum Computer, but I highly doubt that we'll learn each others' secrets.

LeftHandPath · a year ago
Re: AI, it's a long way off still. The big limitation to anything quantum is always going to be decoherence and T-time [0]. To do anything with ML, you'll need a whole circuit (more complex than Shor's) just to initialize the data on the quantum device; the algorithms to do this are complex (exponential) [1]. So, you have to run a very expensive data-initialization circuit, and only then can you start to run your ML circuit. All of this needs to be done within the machine's T-time limit. If you exceed that limit, then the measured state of a qubit will have more to do with outside-world interactions than with interactions with your quantum gates.

Google's Willow chip has T-times of about 60-100 µs. That's not an impressive figure -- in 2022, IBM announced their Eagle chip with T-times of around 400 µs [2]. Google's angle here would be error correction (EC).
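
As a rough model, the probability that a qubit still holds its excitation decays exponentially in T1 (a simplification that ignores T2, gate errors, and readout errors):

```python
import math

def survival(t_us, t1_us):
    """Probability a qubit retains its excitation after t microseconds."""
    return math.exp(-t_us / t1_us)

for t1 in (100, 400):  # ~Willow vs. ~Eagle T1 figures from above, in microseconds
    print(f"T1={t1} us: half-life ~{t1 * math.log(2):.0f} us, "
          f"survival after 50 us = {survival(50, t1):.2f}")
```

Every microsecond of circuit depth eats into that budget, which is why the data-initialization cost matters so much.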

The following portion from Google's announcement seems most important:

> With 105 qubits, Willow now has best-in-class performance across the two system benchmarks discussed above: quantum error correction and random circuit sampling. Such algorithmic benchmarks are the best way to measure overall chip performance. Other more specific performance metrics are also important; for example, our T1 times, which measure how long qubits can retain an excitation — the key quantum computational resource — are now approaching 100 µs (microseconds). This is an impressive ~5x improvement over our previous generation of chips.

Again, as they lead with, their focus here is on error correction. I'm not sure how their results compare to competitors, but it sounds like they consider that to be the biggest win of the project. The RCS metric is interesting, but RCS has no (known) practical applications (though it is a common benchmark). Their T-times are an improvement over older Google chips, but not industry-leading.

I'm curious if EC can mitigate the sub-par decoherence times.

[0]: https://www.science.org/doi/abs/10.1126/science.270.5242.163...

[1]: https://dl.acm.org/doi/abs/10.5555/3511065.3511068

[2]: https://www.ibm.com/quantum/blog/eagle-quantum-processor-per...

sesm · a year ago
> Also, the more qubits you have/the more instructions are in your program, the faster the quantum state collapses.

Was this actually measured and published somewhere?

goatking · a year ago
How can I, a regular software engineer, learn about quantum computing without having to learn quantum theory?

> Worth spending a little time doing some long tail strategizing I’d say

any tips for starters?

billti · a year ago
If you're a software engineer, then the Quantum Katas might fit your learning style. The exercises use Q#, which is a quantum-specific programming language.

https://quantum.microsoft.com/en-us/tools/quantum-katas

The first few lessons do cover complex numbers and linear algebra, so skip ahead if you want to get straight to the 'quantum' coding, but there's no escaping the math if you really want to learn quantum.

Disclaimer: I work in the Azure Quantum team on our Quantum Development Kit (https://github.com/microsoft/qsharp) - including Q#, the Katas, and our VS Code extension. Happy to answer any other questions on it.

potsandpans · a year ago
Start here: https://youtu.be/F_Riqjdh2oM

You don't need to know quantum theory necessarily, but you will need to know some maths. Specifically linear algebra.

There are a few youtube courses on linear algebra

For a casual set of videos:

- https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFit...

For a more formal approach:

- https://youtube.com/playlist?list=PL49CF3715CB9EF31D

And the corresponding open courseware

- https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010...

Linear Algebra Done Right comes highly recommended

- https://linear.axler.net/

kvathupo · a year ago
The bar for entry is surprisingly low; you just need to brush up on intro abstract algebra. I recommend the following:

1. Kaye, LaFlamme, and Mosca - An Introduction to Quantum Computing

2. Nielsen and Chuang - Quantum Computation and Quantum Information (The Standard reference source)

3. Andrew Childs's notes here [1]. Closest to the state-of-the-art, at least circa ~3 years ago.

[1] - https://www.cs.umd.edu/~amchilds/qa/

currymj · a year ago
specifically avoid resources written by and for physicists.

the model of quantum mechanics, if you can afford to ignore any real-world physical system and just deal with abstract |0>, |1> qubits, is relatively easy. (this is really funny given how incredibly difficult actual quantum physics can be.)

you have to learn basic linear algebra with complex numbers (can safely ignore anything really gnarly).

then you learn how to express Boolean circuits in terms of different matrix multiplications, to capture classical computation in this model. This should be pretty easy if you have a software engineer's grasp of Boolean logic.
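
a tiny numpy sketch of that idea, with NOT and CNOT as matrices acting on basis-state vectors:

```python
import numpy as np

zero = np.array([1, 0])          # |0>
one = np.array([0, 1])           # |1>

X = np.array([[0, 1],            # NOT gate as a matrix
              [1, 0]])

CNOT = np.array([[1, 0, 0, 0],   # CNOT on |control, target>
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

print(X @ zero)                   # [0 1] -> |1>
print(CNOT @ np.kron(one, zero))  # |10> -> |11>
```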

Then you can learn basic ideas about entanglement, and a few of the weird quantum tricks that make algorithms like Shor and Grover search work. Shor's algorithm may be a little mathematically tough.

realistically you probably will never need to know how to program a quantum computer even if they become practical and successful. applications are powerful but very limited.

"What You Shouldn't Know About Quantum Computers" is a good non-mathematical read.

https://arxiv.org/abs/2405.15838

zitterbewegung · a year ago
I recommend this book; I studied it in undergrad and never took a quantum theory course. https://www.amazon.com/Quantum-Computing-Computer-Scientists...
ajb · a year ago
The simplest algorithm to understand is probably Grover's algorithm. Knowing that shows you how to get a sqrt(N) speedup on many classical algorithms. Then have a look at Shor's algorithm, which is the classic factoring algorithm.
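
A minimal numpy sketch of the Grover iteration, just the textbook math with nothing hardware-specific:

```python
import numpy as np

N = 16                      # search space of 16 items (4 qubits)
target = 11                 # the marked item

state = np.full(N, 1 / np.sqrt(N))        # uniform superposition

iterations = int(np.pi / 4 * np.sqrt(N))  # ~sqrt(N) oracle queries
for _ in range(iterations):
    state[target] *= -1                   # oracle: flip the marked amplitude
    state = 2 * state.mean() - state      # diffusion: reflect about the mean

print(iterations)                         # 3 queries vs ~8 classically on average
print(np.argmax(state ** 2))              # 11
print(round((state ** 2)[target], 3))     # ~0.961 probability of the right answer
```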

I would not worry about hardware at first. But if you are interested and like physics, the simplest to understand are linear optical quantum circuits. These use components which may be familiar from high school or undergraduate physics. The catch is that the space (and component count) is exponential in the number of qubits, hence the need for more exotic designs.

cevi · a year ago
I always recommend Watrous's lecture notes: https://cs.uwaterloo.ca/~watrous/QC-notes/QC-notes.pdf

I prefer his explanation to most other explanations because he starts, right away, with an analogy to ordinary probabilities. It's easy to understand how linear algebra is related to probability (a random combination of two outcomes is described by linearly combining them), so the fact that we represent random states by vectors is not surprising at all. His explanation of the Dirac bra-ket notation is also extremely well executed. My only quibble is that he doesn't introduce density matrices (which in my mind are the correct way to understand quantum states) until halfway through the notes.
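
To make the analogy concrete, a small numpy version of it (my own sketch, not from the notes):

```python
import numpy as np

# Classical: a stochastic matrix maps probability vectors to probability vectors
p = np.array([0.5, 0.5])
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])           # columns sum to 1
print(S @ p, (S @ p).sum())          # entries still sum to 1

# Quantum: a unitary maps amplitude vectors to amplitude vectors
a = np.array([1, 0], dtype=complex)  # |0>
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
print(H @ a, np.linalg.norm(H @ a))  # 2-norm still 1
```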

jvanderbot · a year ago
There is a course mentioned in the article, but I'm not clear on how theory-heavy it is.

https://coursera.org/learn/quantum-error-correction

sshb · a year ago
Might be worth checking out: https://quantum.country/
tsimionescu · a year ago
If you want to learn about what theoretical quantum computers might be able to do faster than classical ones and what they might not, you can try to read about quantum complexity theory, or some of what Scott Aaronson puts out on his blog if you don't want to go that in depth.

But the key thing to know about quantum computing is that it is all about the mathematical properties of quantum physics, such as the way complex probability amplitudes work.

neoden · a year ago
These lessons might be of help: https://youtu.be/3-c4xJa7Flk?si=krrpXMKh3X5ktrzT
carabiner · a year ago
First learn about eigenvalues.
sizzle · a year ago
How long until this can derive a private key from its public key in the cryptocurrency space? Is this an existential threat to crypto?
bawolff · a year ago
Long enough you don't need to panic or worry.

Short enough that it's reasonable to start R&D efforts on post-quantum crypto.

ryankshaw · a year ago
I had the same question and this article was really helpful in explaining the threat models

https://www.deloitte.com/nl/en/services/risk-advisory/perspe...

valval · a year ago
This is the same as climate change. Something might happen sometime in the future.
isoprophlex · a year ago
Data security okay. But AI? How will that change?
numpad0 · a year ago
Aren't quantum computers expected to be like digitally-read analog computers for high-dimensional optimization problems? And isn't AI basically a massive high-dimensional optimization problem?
adastra22 · a year ago
AI is essentially search. Quantum computers are really good at search.
inasio · a year ago
They showed a logical qubit that can stay coherent for an hour, but to do that they had to combine their hundred or so physical qubits into a single logical one. So in some sense they have, right now, a single (logical) qubit.

cherryteastain · a year ago
> AI and data security will be unalterably different if so

Definitely agree with the latter, but do you have any sources on how quantum computers make "AI" (i.e. matrix multiplication) faster?

meta_x_ai · a year ago
Exploring via Search can become O(1) instead of M^N

weatherlite · a year ago
> AI and data security will be unalterably different if so

So what are the implications, if so?

r33b33 · a year ago
> Worth spending a little time doing some long tail strategizing I’d say.

What do you mean by this?

tsimionescu · a year ago
"long tail" typically refers to the tail of a normal distribution - basically it's a sciencey, but common, way of saying "very unlikely event". So, the OP was saying that it's worth spending some time strategizing about the unlikely event that a practical RSA-breaking QC appears in the near future, even though it's still a "long tail" (very unlikely) event.

Honestly, there's not that much to discuss here though. The only things you can do from this strategizing are to consider even encrypted data as unsafe to store unless you're using quantum-resistant encryption such as AES, and to budget time for switching to PQC as it becomes available.

unethical_ban · a year ago
The only thing I know or understand about quantum computing is its ability to "crack" traditional encryption algorithms.

So the commenter is saying that cybersecurity needs to plan for a near-future world where traditional cryptography, including lots of existing data at rest, is suddenly as insecure as plaintext.

bee_rider · a year ago
I think some element of it might be: Shor’s algorithm has been known of for 30 years, and hypothetically could be used to decrypt captured communications, right? So, retroactively I will have been dumb for not having switched to a quantum-resistant scheme. And, dumb in a way that a bunch of academic nerds have been pointing out for decades.

That level of embarrassment is frankly difficult to face. And it would be devastating to the self-image of a bunch of “practical” security gurus.

Therefore any progress must be an illusion. In the real world, the threats are predictable and mistakes don’t slowly snowball into a crisis. See also, infrastructure.

adastra22 · a year ago
What would you switch to? There haven't been post-quantum systems to use until very, very recently.
bawolff · a year ago
All your encrypted communication from the 90s (SSL, anyway) can probably be decrypted by classical means. 90s SSL was pretty bad.
winwang · a year ago
Edit after skimming arxiv preprint[1]:

Yeah, this is pretty huge. They achieved the result with surface codes, which are general ECCs. The repetition code was used to further probe the quantum ECC error floor. "Just a POC" likely doesn't do it justice.

(Original comment):

Also a quantum dabbler (coincidentally dabbled in bit-flip quantum error correction research). Skimmed the post/research blog. I believe the key point is the scaling of error correction via repetition codes; would love someone else's viewpoint.

Slightly concerning quote[2]:

"""

By running experiments with repetition codes and ignoring other error types, we achieve lower encoded error rates while employing many of the same error correction principles as the surface code. The repetition code acts as an advance scout for checking whether error correction will work all the way down to the near-perfect encoded error rates we’ll ultimately need.

"""

I'm getting the feeling that this is more about proof-of-concept, rather than near-practicality, but this is certainly one fantastic POC if true.

[1]: https://arxiv.org/abs/2408.13687

[2]: https://research.google/blog/making-quantum-error-correction...

Relevant quote from preprint (end of section 1, sorry for copy-paste artifacts):

"""

In this work, we realize surface codes operating below threshold on two superconducting processors. Using a 72-qubit processor, we implement a distance-5 surface code operating with an integrated real-time decoder. In addition, using a 105-qubit processor with similar performance, we realize a distance-7 surface code. These processors demonstrate Λ > 2 up to distance-5 and distance-7, respectively. Our distance-5 quantum memories are beyond break-even, with distance-7 preserving quantum information for more than twice as long as its best constituent physical qubit. To identify possible logical error floors, we also implement high-distance repetition codes on the 72-qubit processor, with error rates that are dominated by correlated error events occurring once an hour. These errors, whose origins are not yet understood, set a current error floor of 10^-10. Finally, we show that we can maintain below-threshold operation on the 72-qubit processor even when decoding in real time, meeting the strict timing requirements imposed by the processor's fast 1.1 µs cycle duration.

"""

wasabi991011 · a year ago
You got the main idea: it's a proof of concept that a class of error-correcting codes on real physical quantum chips obeys the threshold theorem, as expected from theory and simulations.

However the main scaling of error correction is via surface codes, not repetition codes. It's an important point as surface codes correct all Pauli errors, not just either bit-flips or phase-flips.

They use repetition codes as a diagnostic method in this paper more than anything; they are not the main result.
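
For anyone who hasn't seen one, the classical skeleton of a repetition code fits in a few lines of Python (an analogy only: the quantum version measures parity checks between neighboring qubits instead of reading data qubits directly):

```python
import random

def noisy(codeword, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote."""
    return int(sum(codeword) > len(codeword) / 2)

p, trials = 0.05, 100_000
for d in (1, 3, 5, 7):  # encode logical 0 as d copies, count decode failures
    fails = sum(decode(noisy([0] * d, p)) != 0 for _ in range(trials))
    print(f"d={d}: logical error rate ~{fails / trials:.4f}")
```

Below threshold, each step up in distance suppresses the logical error rate; that suppression factor is the Λ Google reports for its surface codes.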

In particular, I interpret the quote you used as: "We want to scale surface codes even more, and if we were able to do the same scaling with surface codes as we are able to do with repetition codes, then this is the behaviour we would expect."

Edit: Welp, saw your edit, you came to the same conclusion yourself in the time it took me to write my comment.

echelon · a year ago
Google could put themselves and everyone else out of business if the algorithms that underpin our ability to do e-commerce and financial transactions can be defeated.

Goodbye not just to Bitcoin, but also Visa, Stripe, Amazon shopping, ...

npalli · a year ago
> Worth spending a little time doing some long tail strategizing I’d say.

Yup, like Bitcoin going to zero.

vessenes · a year ago
I'm a little more in my wheelhouse here -- without an algo change, Grover's algorithm would privilege quantum miners significantly, but not any more than the industry has seen in the last 13 years (C code on CPU -> GPU -> Large Geometry ASIC -> Small Geometry ASIC are similarly large shifts in economics for miners probably).
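
Back-of-the-envelope on the Grover side (the 2^76 figure is an assumed order of magnitude for work per block, not a current difficulty reading):

```python
import math

# Grover gives a quadratic speedup on hash preimage search.
work = 2 ** 76             # assumed hashes per block, order of magnitude only
grover = math.isqrt(work)  # ~sqrt(N) oracle calls
print(f"classical: 2^{work.bit_length() - 1} hashes")
print(f"quantum:   ~2^{grover.bit_length() - 1} Grover iterations")
```

A quadratic edge is huge but, as with the ASIC transitions, it shifts mining economics rather than breaking the protocol.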

As to faking signatures and, e.g., stealing Satoshi's coins or just fucking up the network with fake transactions that verify, there is some concern and there are some attack vectors that work well if you have a large, fast quantum computer and want to ninja in. Essentially you need something that can invert a 256-bit ECDSA public key in the window between when it is first revealed and when the block that includes it is confirmed. That's definitely out of the reach of anyone right now, much less persistent threat actors, much less hacker hobbyists.

But it won't always be. The current state of the art plan would be to transition to a quantum-resistant UTXO format, and I would imagine, knowing how Bitcoin has managed itself so far, that will be a well-considered, very safe, multi-year process, and it will happen with plenty of time.

K0balt · a year ago
I think you're going to need about 10,000,000 qubits to divert a transaction, but that's still within foreseeable scale. I think it's extremely likely that the foundation will have finished its quantum-resistance planning before we get to 10MM coherent qubits, but still, it's a potential scenario.

More likely, other critical infrastructure failures will happen within trad-finance, which has a much larger vulnerability footprint, and being able to trivially reverse-engineer every logged SSL session is likely to be a much more impactful turn of events. I'd venture that there are significant ear-on-the-wire efforts going on right now in anticipation of a workable bulk SSL de-cloaking solution. Right now we think it doesn't matter who can see our "secure" traffic. I think that is going to change, retroactively, in a big way.

sekai · a year ago
> Yup, like Bitcoin going to zero.

If the encryption on Bitcoin is broken, say goodbye to the banking system.

m101 · a year ago
Bitcoin will just fork to a quantum-proof signature scheme and there will be something called "bitcoin classic" that is the old protocol (which few would care about)
drcode · a year ago
eh, they will add a quantum-resistant signature scheme (already a well-understood thing) then people can transfer their funds to the new addresses before it is viable to crack the existing addresses
beams_of_light · a year ago
https://en.wikipedia.org/wiki/BGP_hijacking#Public_incidents

A long-term tactic of our adversaries is to capture network traffic for later decryption. The secrets in the mass of packets China presumably has in storage, waiting for quantum tech, are a treasure trove that could lead to crucial state, corporate, and financial secrets being used against us or made public.

AI being able to leverage quantum processing power is a threat we can't even fathom right now.

Our world is going to change.

codeulike · a year ago
They opened the API for it and I'm sending requests but the response always comes back 300ms before I send the request, is there a way of handling that with try{} predestined{} blocks? Or do I need to use the Bootstrap Paradox library?
handfuloflight · a year ago
Have you tried using the Schrödinger Exception Handler? It catches errors both before and after they occur simultaneously, until you observe the stack trace.
oblio · a year ago
I swear I can't tell which of these comments are sarcastic/parody and which are actual answers.

A sort of quantum commenting conundrum, I guess.

nobrains · a year ago
What happens when you don't send the request after receiving the response? Please try and report back.
jmcqk6 · a year ago
No matter what we've tried so far, the request ends up being sent.

The first time I was just watching, determined not to press the button, but when I received the response, I was startled into pressing it.

The second time, I just stepped back from my keyboard, and my cat came flying out of the back room and walked on the keyboard, triggering the request.

The third time, I was holding my cat, and a train rumbled by outside, rattling my desk and apparently triggering the switch to send the request.

The fourth time, I checked the tracks, was holding my cat, and stepped back from my keyboard. Next thing I heard was a POP from my ceiling, and the request was triggered. There was a small hole burned through my keyboard when I examined it. Best I can figure, what was left of a meteorite managed to hit at exactly the right time.

I'm not going to try for a fifth time.

r3trohack3r · a year ago
You unlock the "You've met a terrible fate." achievement [1]

[1] https://outerwilds.fandom.com/wiki/Achievements

ukuina · a year ago
Please report back and try.*
tealpod · a year ago
Looks like we don't have a choice.
wk_end · a year ago
Finally, INTERCAL’s COME FROM statement has a practical use.
dtquad · a year ago
>They opened the API for it and I'm sending requests but the response always comes back 300ms before I send the request

For a brief moment I thought this was some quantum-magical side effect you were describing and not some API error.

pinkmuffinere · a year ago
Isn't that.... the joke?
vangamoZX · a year ago
Write the catch clause before the try block

Nifty3929 · a year ago
Try using inverse promises. You get back the result you wanted, but if you don't then send the request the response is useless.

It's a bit like Jeopardy, really.

yu3zhou4 · a year ago
Did you try staring at your IP packets while sending the requests?
sebastiennight · a year ago
You are getting that response 300ms beforehand because your request is denied.

If you auth with the bearer token "And There Are No Friends At Dusk." then the API will call you and tell you which request you wanted to send.

yonatan8070 · a year ago
Pretty sure you just need to use await-async (as opposed to async-await)
cloudking · a year ago
The answer is yes and no, simultaneously
nick3443 · a year ago
Help! Every time I receive the response, an equal number of bits elsewhere in memory are reported as corrupt by my ECC RAM.
codeulike · a year ago
Update: I tried installing the current Bootstrap Paradox library but it says I have to uninstall next year's version first.
robomartin · a year ago
> I'm sending requests but the response always comes back 300ms before I send the request

Ah. Newbie mistake. You need to turn OFF your computer and disconnect from the network BEFORE sending the request. Without this step you will always receive a response before the request is issued.

qingcharles · a year ago
I'm trying to write a new version of Snake game in Microsoft Q# but it keeps eating its own tail.
timcobb · a year ago
What does Gemini say?
KTibow · a year ago
It responds with 4500 characters: https://hst.sh/olahososos.md
GuB-42 · a year ago
I think you are supposed to use a "past" object to get your results before calling the API.
tiborsaas · a year ago
Try setting up a beam-splitter router and report back with the interference pattern. If you don't see a wave pattern it might be because someone is spying on you.
jawns · a year ago
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse

I see the evidence, and I see the conclusion, but there's a lot of ellipses between the evidence and the conclusion.

Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?

wasabi991011 · a year ago
I was also really taken aback by this quote.

I have no idea who put it there, but I can assure you the actual paper contains no such nonsense.

I would have thought whoever writes the Google tech blogs is more competent than bottom-tier science journalists. But in this case I think it is more reasonable to assume malice, as the post is authored by the Google Quantum AI lead, and it makes more sense as hype-boosting buzzword bullshit than as an honest misunderstanding that was not caught during editing.

movpasd · a year ago
There are compelling arguments to believe in the many-worlds interpretation.

No sign of a Heisenberg cut has been observed so far, even as experiments involving entanglement of larger and larger molecules are performed, which makes objective-collapse theories hard to consider seriously.

Bohmian theories are nice, but require awkward adjustments to reconcile them with relativity. But more importantly, they are philosophically uneconomical, requiring many unobservable — even theoretically — entities [0].

That leaves either many-worlds or quantum logic/quantum Bayesian interpretations as serious contenders [1]. These interpretations aren't crank fringe nonsense. They are almost inevitable outcomes of seriously considering the implications of the theory.

I will say that personally, I find many-worlds to focus excessively on the Schrödinger-picture pure state formulation of quantum mechanics. (At least to the level that I understood it — I expect there is literature on the connection with algebraic formulations, but I haven't taken the time to understand it.) So I would lean towards quantum logic–type interpretations myself.

The point of this comment was to say that many-worlds (or "multiverses", though I dislike the term) isn't nonsense. But it also isn't exactly the kind of sci-fi thing non-physicists might picture. Given how easy it is to misinterpret the term, however, I must agree with you that a self-aware science communicator would think twice about whether the term should be included, and that there may be not-so-scrupulous intentions at play here.

Quick edit: I realise the comment I've written is very technical. I'm happy to try to answer any questions. I should preface it by stating that I'm not a professional in the field, but I studied quantum information theory at a Masters level, and always found the philosophical questions of interest.

---

[0] Many people seem to believe that many-worlds also postulates the existence of unobservable parallel universes, but this isn't true. We observe the interaction of these universes every time we observe quantum interference.

While we're here, we can clear up the misconception about "branching" — there is no branching in many-worlds, just the coherent evolution of the universal wave function. The many worlds are projections out of that wave function. They don't discretely separate from one another, either — it depends on your choice of basis. That choice is where decoherence comes in.

[1] And of course, there is the Copenhagen "interpretation" — preferred among physicists who would rather not think about philosophy. (A respectable choice.)

hshshshshsh · a year ago
Quantum computation being done in multiple universes is the explanation given by David Deutsch, the father of quantum computing. He invented the idea of a quantum computer to test the idea of parallel universes.

If you are okay with a single universe coming into existence out of nothing, you should be able to handle parallel universes just fine.

Also, your comment does not have any useful information. You assumed hype as the reason they mentioned parallel universes. That's just a bias you have in looking at the world. Hype does help explain a lot of things, so it can be tempting to use it as a placeholder for anything you don't accept based on your current set of beliefs.

vixen99 · a year ago
Presumably the 'nonsense' is the supposed link between the chip and MW theory.

Let me add a recommendation for David Wallace's book The Emergent Multiverse - a highly persuasive account of 'quantum theory according to the Everett Interpretation'. Aside from the technical chapters, much of it is comprehensible to non-physicists. It seems that adherents to MW do 'not know how to refute an incredulous stare'. (From a quotation)

killerstorm · a year ago
Everett interpretation simply asserts that quantum wavefunctions are real and there's no such thing as "wavefunction collapse". It's the simplest interpretation.

People call it "many worlds" because we can interact only with a tiny fraction of the wavefunction at a time, i.e. other "branches" which are practically out of reach might be considered "parallel universes".

But it would be more correct to say that it's just one universe which is much more complex than what it looks like to our eyes. Quantum computers are able to tap into this complexity. They make a more complete use of the universe we are in.

tobias2014 · a year ago
This might turn into a debate over defining "simplest", but I think the ensemble/statistical interpretation is really the most minimal in terms of fancy ideas or concepts like "wavefunction collapse" or "multiverses". It needs neither a wavefunction collapse nor multiverses.
gaze · a year ago
I'm upset they put this in because this is absolutely not the view of most quantum foundations researchers.
justinpombrio · a year ago
From Wikipedia[1]:

A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true".[85]

Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations."[86]

In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory",[87] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[88]

A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing University of Waterloo found "Many Worlds (and decoherence)" to be the least favored.[89]

A 2011 poll of 33 participants at an Austrian conference on quantum foundations found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[90] the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.[90]

[1] https://en.wikipedia.org/wiki/Many-worlds_interpretation#Pol...

klipt · a year ago
Soon: "are alien universes slowing down your internet? Click here to learn more!"

Reminds me of the Aorist Rods from Hitchhikers' Guide to the Galaxy.

hshshshshsh · a year ago
Science is not based on consensus seeking.

Science is about coming up with the best explanations irrespective of whether or not a large chunk does not believe it.

And the best explanations are the ones that are hard to vary, not the ones that are most widely accepted or easiest to accept based on the current worldview.

ColinHayhurst · a year ago
The credibility of the article plummeted when I got to that sentence, especially given the name-dropping.
ferfumarma · a year ago
One of the biggest problems with such an assertion is that it's not falsifiable.

It could be that we are borrowing qubit processing power from Russell's quantum teapot.

whimsicalism · a year ago
the everettian view is absolutely not the view? i am not so sure.

or you mean specifically the parallel computation view?

johnfn · a year ago
You don't even have to get to the point where you're reading a post off Scott Aaronson's blog[1] at all; his headline says "If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel."

[1]: https://scottaaronson.blog/

aithrowawaycomm · a year ago
In the same way people believe P != NP, most quantum computing people believe BQP != NP, and NP-complete problems will still take exponential time on quantum computers. But if we had access to arbitrary parallel universes then presumably that shouldn't be an issue.

The success on the random (quantum) circuit problem is really a validation of Feynman's idea, not Deutsch's: classical computers need 2^n bits to simulate n qubits, so we will need quantum computers to efficiently simulate quantum phenomena.
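
The 2^n blow-up is easy to make concrete, assuming a straightforward state-vector simulation at 16 bytes per complex amplitude:

```python
# Memory for a full n-qubit state vector (16 bytes per complex128 amplitude).
for n in (10, 30, 50, 100):
    gib = 2 ** n * 16 / 2 ** 30
    print(f"{n:3d} qubits: {gib:.3e} GiB")
# 30 qubits ~ 16 GiB, 50 qubits ~ 16 million GiB; 100 qubits is hopeless
# classically, which was Feynman's point.
```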

jumping_frog · a year ago
Does access to arbitrary parallel universes imply that they divide up the computation and the correct answer is distributed to all of the universes, or will there be, in such a collection, sucker universes which always receive wrong answers?
paxys · a year ago
I don't understand the jump from: classical algorithm takes time A -> quantum algorithm takes time B -> (A - B) must be borrowed from a parallel universe.

Maybe A wasn't the most efficient algorithm for this universe to begin with?

kelnos · a year ago
Right, and that's part of the argument against quantum computing being a proof (or disproof) of the many-worlds interpretation. Sure, "(A-B) was borrowed from parallel universes" is a possible explanation for why quantum computing can be so fast, but it's by far not the only possible explanation.
rdtsc · a year ago
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

That's in line with a religious belief. One camp believes one thing, other believes something else, others refuse to participate and say "shut up and calculate". Nothing wrong with religious beliefs of course, it's just important to know that is what it is.

layer8 · a year ago
The Schrödinger equation inherently contains a multiverse. The disagreement is about whether the wave function described by the equation collapses to a single universe upon measurement (i.e. whether the equation stops holding upon measurement), or whether the different branches continue to exist (i.e. the equation continues to hold at all times), each with a different measurement outcome. Regardless, between measurements the different branches exist in parallel. It’s what allows quantum computation to be a thing.
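
In symbols, the branches are just the terms that unitary evolution produces when a system entangles with a measuring device (the standard von Neumann measurement sketch, not anything specific to this paper):

$$ (\alpha|0\rangle + \beta|1\rangle)\,|\mathrm{ready}\rangle \;\longrightarrow\; \alpha\,|0\rangle|\mathrm{saw\ 0}\rangle + \beta\,|1\rangle|\mathrm{saw\ 1}\rangle $$

The interpretations disagree about the status of the term you don't end up in, not about this equation.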
wasabi991011 · a year ago
> The Schrödinger equation inherently contains a multiverse.

A simple counterexample is superdeterminism, in which the different measurement outcomes are an illusion and instead there is always a single pre-determined measurement outcome. Note that this does not violate Bell's inequality for hidden-variable theories of quantum mechanics, as Bell's inequality only applies to hidden variables uncorrelated with the choice of measurement: in superdeterminism, both are predetermined and so perfectly correlated.

creata · a year ago
> The Schrödinger equation inherently contains a multiverse.

Just to be clear, where in the Schrödinger equation (iħψ̇ = Hψ) is the "multiverse"?

Lionga · a year ago
None of that hanky-panky is needed if you just give up locality: a hard deterministic explanation like de Broglie-Bohm gives all the same correct measurements and conclusions as the Copenhagen interpretation, without multiverses and "wave function collapses".

The Copenhagen interpretation is just "easier" (like: oops, all our calculations about the universe don't seem to fit, let's invent "dark matter"), whereas the correct explanation makes any real-world calculation practically impossible (thus ending most further study in physics), as any atom depends on every other atom at any time.

aithrowawaycomm · a year ago
I suspect the real issue is that Big Tech investors and executives (including Sundar Pichai) are utterly hopped up on sci-fi, and this sort of stuff convinces them to dedicate resources to quantum computing.
kridsdale1 · a year ago
That explains metaverse funding at least.
GenerWork · a year ago
>Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?

Doesn't this also mean that other universes have civilizations that could potentially borrow capacity from our universe, and if so, what would that look like?

ko27 · a year ago
It's a perfectly legit interpretation of what's happening, and many physicists share the same opinion. Of course, the big caveat is that you need to interfere those worlds so that they cancel out, which imposes a lower algorithmic bound that prevents you from doing an infinite amount of computation in an instant.
korkybuchek · a year ago
> Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?

Tangentially related, but there's a great Asimov book about this called The Gods Themselves (fiction).

vessenes · a year ago
I’m partial to Anathem by Stephenson on this topic as well
qnleigh · a year ago
This is a viable interpretation of quantum mechanics, but currently there is no way to scientifically falsify or confirm any particular interpretation. The boundary between philosophy and science is fuzzy at times, but this question is solidly on the side of philosophy.

That being said, I think the two most commonly preferred interpretations of quantum mechanics among physicists are 'Many Worlds' and 'I try not to think about it too hard.'

ComputerGuru · a year ago
It doesn’t make sense to me because if we can borrow capacity to perform calculations then we can “borrow” an infinite amount of energy.
griomnib · a year ago
Climate change solved: steal energy from adjacent universes, pipe our carbon waste into theirs.
whoitwas · a year ago
Well, if you study quantum physics and the folks who founded it, like Max Planck, they believed in "a conscious and intelligent non-visible living energy force ... the matrix mind of all matter".

I don't know much about the multiverse, but we need something external to explain the magic we uncover.

Energy and quantum mechanics are really cool but dense to get into. Like Planck, I suspect there's a link between consciousness and matter. I also think our energy doesn't cease to exist when our human carcass expires.

varjag · a year ago
Yes, this is a deeply unserious tangent in a supposedly landmark technology announcement.
athesyn · a year ago
It's just marketing.
hshshshshsh · a year ago
The quantum computer idea was literally invented by David Deutsch to test the many-worlds theory of quantum physics.
wasabi991011 · a year ago
You've mentioned this in another comment. I have to point out, even if this is his opinion, and he has been influential in the field, it does not mean that this specific idea of his has been influential.
Vegenoid · a year ago
It is not useful to spam this comment repeatedly under different people who question or disagree with many worlds. Pick one place to make your case.
m3kw9 · a year ago
So are we now concerned with the environment of another universe? Like climate activists, but for multiverses?
melvinmelih · a year ago
> It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years.

If it's not, what would be your explanation for this significant improvement then?

shawabawa3 · a year ago
Quantum computing can perform certain calculations much faster than classical computing in the same way classical computing can perform certain calculations much faster than an abacus

Ar-Curunir · a year ago
I mean, that's like saying GPUs operate in parallel universes because they can do certain things thousands of times faster than CPUs.
GilKalai · a year ago
In the past five years I participated in a project (with Yosi Rinott and Tomer Shoham) to carefully examine Google's 2019 "supremacy" claim. A short introduction to our work is described here: https://gilkalai.wordpress.com/2024/12/09/the-case-against-g.... We found in that experiment statistically unreasonable predictions (predictions that were "too good to be true"), indicating methodological flaws. We also found evidence of undocumented global optimization in the calibration process.

In view of these and other findings my conclusion is that Google Quantum AI’s claims (including published ones) should be approached with caution, particularly those of an extraordinary nature. These claims may stem from significant methodological errors and, as such, may reflect the researchers’ expectations more than objective scientific reality.

janpmz · a year ago
I've heard claims before that quantum computers are not real, but I didn't understand them. Can anybody explain the reasoning behind the criticism? Are they just simulated?
sgt101 · a year ago
I think that this is now a very fringe position - the accumulation of reported evidence is such that if folks were/are fooling themselves it would now be a case of an entire community conspiring to keep things quiet. I mean, it's not impossible that could happen, but I think it would be extraordinary.

On the other hand, the question is what "real QC" means. Current QCs perform very limited and small computations; they lack things like quantum memory. The large versions are extremely impractical to use, in the sense that they run for thousandths of a second and take many hours to set up for a run. But that doesn't mean that the physical effects they use/capture aren't real.

Just a long long way from practical.

Rhapso · a year ago
So the basics:

- quantum physics is real; this isn't about debating that. The theory underpinning quantum computing is real.

- quantum annealing is theoretically real, but not the same "breakthrough" that a quantum computer would be. D-Wave and Google have made these.

- All benchmark computations have been about simulating a smaller quantum computer or annealer, which these systems can do faster than a brute-force classical search. These are literally the only situations where "quantum supremacy" exists.

- There is literally no claim of "productive" computation being made by a quantum computer. Only simulations of our assumptions about quantum systems.

- The critical gap is "quantum error correction": proof that they can use many error-prone physical qubits to simulate a smaller system with lower error. There isn't proof yet that this is actually possible.

The result they are claiming, that they have critical error correction, would be the single most groundbreaking result we could have in quantum computing. Their evidence does not satisfy the burden of proof. They also only claim to have 1 qubit, which is intrinsically useless on its own, and don't examine the costs of simulating multiple interacting qubits.

whoitwas · a year ago
I'm skeptical only because they have AI in the name.
qnleigh · a year ago
What experimental data would you need to see to change your mind? Either from Google or from another player?
djoldman · a year ago
I wonder if anyone else will be forced to wait on https://scottaaronson.blog/ to tell us if this is significant.
EvgeniyZh · a year ago
He weighed in when the preprint was published:

https://scottaaronson.blog/?p=8310

qnleigh · a year ago
He's already blogged about it a bit here

https://scottaaronson.blog/?p=8310#comments

and here

https://scottaaronson.blog/?p=8329

though I bet he will have more to say now that the paper is officially out.

gloriousduke · a year ago
I was about to add a similar comment. Definitely interested to read his evaluation and whether there is more hype than substance here, though I'm guessing it may take some time.
fidotron · a year ago
The slightly mind-blowing bit is detailed here: https://research.google/blog/making-quantum-error-correction...

“the first quantum processor where error-corrected qubits get exponentially better as they get bigger”

Achieving this turns the normal problem of scaling quantum computation upside down.

bgnn · a year ago
Hmm, why? I thought the whole idea was that this would work eventually. The physical-vs-logical qubit distinction has been around for a long time.

The scaling problem is multifaceted. IMHO the physical qubits are the biggest barrier to scaling.

fidotron · a year ago
> Hmm why?

In theory, theory and practice are the same.

thrance · a year ago
It also breaks a fundamental law of quantum theory, that the bigger a system in a quantum state is, the faster it collapses, exponentially so. Which should at least tell you to take Google's announcement with a grain of salt.
wasabi991011 · a year ago
This is not a "fundamental law of quantum theory", as evidenced by the field of quantum error correcting codes.

Google's announcement is legit, and is in line with what theory and simulations expect.

readyplayernull · a year ago
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

Processing in the multiverse. Would that mean we are injecting entropy into those other verses? Could we calculate how many there are from the time it takes to do a given calculation? We need to cool the quantum chip in our universe; how are the (n-1) verses cooling on their end?

deanCommie · a year ago
What if we are? And by injecting entropy into it, we are actually hurrying (in small, insignificant ways) the heat death of those universes? What if we keep going and scale out, and in the future it causes a meaningful impact to that universe, in a way that its residents would be extremely unhappy with and would want to take revenge for?

What if it's already happening to our universe? And that is what black holes are? Or other cosmology concepts we don't understand?

Maybe a great filter is your inability to protect your universe from quantum technology from elsewhere in the multiverse ripping yours up?

Maybe the future of sentience isn't fighting for resources on a finite planet, or consuming the energy of stars, but fighting against other multiverses.

Maybe The Dark Forest Defence is a decision to isolate your universe from the multiverse - destroying its ability to participate in quantum computation, but also extending its lifespan.

(I don't believe ANY of this, but I'm just noting the fascinating science fiction storylines available)

navaati · a year ago
Getting strong vibes of Asimov's novel "The Gods Themselves" here! For those who haven't read it, I recommend it. It's a nice little self-contained book, not a grandiose series and universe, but I love it.
kridsdale1 · a year ago
I’d say it’s more akin to Dark Energy than anything Black Hole related.

DE is some sort of entropy that is being added to our cosmos in an exponential way over cosmic time. It began at a point a few billion years into our history.

Willish42 · a year ago
Thanks for throwing in references like https://en.wikipedia.org/wiki/Dark_forest_hypothesis even though this was a silly response to the science fiction implications.

I found it an interesting read and hadn't heard the term before, but it's exactly the kind of nerdy serendipity I come to this site for!

jsvlrtmred · a year ago
AFAIK a fundamental step in any quantum computing algorithm is bringing the qubits back to a state with a nonrandom outcome (specifically, the answer to the problem being solved). Thus a "good" quantum computer does not bifurcate the wavefunction at a macro level, i.e. there is no splitting of the "multiverse" after the calculation.
akira2501 · a year ago
Isn't it really just processing with matrices of complex numbers? Which is why you need error correction in the first place: the higher the temperature of the system, the less coherent the phases all become. Thus having absolutely nothing to do with multiverse theory?

I think string theory's ideas about extra curled-up dimensions are far more likely places to look. You've already got an infinite instantaneous-energy problem with multiverses, let alone entropy-transfer considerations.

thrance · a year ago
The many-worlds interpretation of quantum theory [1] is widely considered unfalsifiable and therefore mostly pseudoscientific. This article is way in over its head in claiming such nonsense.

[1] https://en.wikipedia.org/wiki/Many-worlds_interpretation

zarzavat · a year ago
That depends on whether you think that MWI is making claims beyond what QM claims, or if you think that MWI is just a restatement of QM from another perspective. If the latter then the falsifiability of MWI is exactly the same as the falsifiability of QM itself.
newsbinator · a year ago
Unfalsifiable does not mean pseudoscientific. Plenty of things might be unfalsifiable for humans (e.g. humans lack the cognitive capacity to conceive of ways to falsify them), but then easily falsifiable for whatever comes after humans.

codeyperson · a year ago
If we are in a simulation, this seems like a good path to getting our process terminated for consuming too much compute.
prettyStandard · a year ago
AI too.

I've followed Many worlds & Simulation theory a bit too far and I ended up back where I started.

I feel like the most likely scenario is we are in an AI (kinder)garten being grown for future purposes.

So God is real, heaven is real, and your intentions matter.

Obviously I have no proof...

swores · a year ago
> "and your intentions matter."

How do you reach that conclusion?

Characters in The Sims games technically have us human players as gods, but it doesn't mean that when we uninstall the game those characters get to come into our earthly (to them) heaven, or face any consequences for actions performed during the simulation.

a2128 · a year ago
I doubt we would even register as a blip. The universe is absolutely massive and there are celestial events that are unthinkably massive and complex. Black hole mergers, supernovae, galaxies merging. Hell, think of what chaos happens inside our own sun, and multiply that by 100 billion stars in a galaxy, and multiply that by 100 billion galaxies. Humanity is ultimately inconsequential.
swores · a year ago
Surely it would depend on what the simulation actually was?

If you imagine simulations we can build ourselves, such as video games, it's not hard to add something at the edge of the map that users are prevented from reaching and have the code send "this thing is massive and powerful" data to the players. Who's to say that the simulation isn't actually focussed on earth, and everything including the sun is actually just a fiction designed to fool us?