Imagine a ball falling on the ground.
Simulating the O(10^23) atoms in each of them with a classical computer would take (say) 10^23 times as much work as simulating a single atom. Depending on the level of detail, that could easily take many, many years...
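For a rough sense of scale, here's the back-of-envelope version; every number in it (FLOPs per atom per step, step count, machine speed) is an assumption I'm plugging in, not something from the comment above:

```python
# Back-of-envelope only: all of these constants are assumed round numbers.
atoms = 1e23            # ~atoms in a small ball
flops_per_atom_step = 1e3
steps = 1e6             # a short MD-style trajectory
machine_flops = 1e18    # an exascale supercomputer

seconds = atoms * flops_per_atom_step * steps / machine_flops
print(f"~{seconds:.1e} s ~ {seconds / 3.15e7:.1e} years")  # ~3e6 years
```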
We don't call the ball a supercomputer or a quantum computer just because it's so much more efficient than a classical computer here.
I presume that's because it can't do arbitrary computation this quickly, right?
So in what way are these quantum computers different? Can they do arbitrary computations?
A very simple argument is that there are strong reasons to believe that energy is required to represent any information in the physical universe. You can't have "states" without mass/energy storing that state somewhere.
2^n is clearly super-linear in 'n', so as you scale to many particles, the equations suggest that you'd need a ludicrously huge state space, which requires a matching amount of energy to store. Clearly, this is not what happens: a system with 10 particles doesn't carry 2^10 = 1024 times the mass/energy of a single particle. You get roughly 10x, plus or minus a correction for binding energy, GR, or whatever.
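For concreteness, this is what the 2^n bookkeeping looks like if you try to write the state vector down classically (the qubit counts are arbitrary examples):

```python
# Memory needed to store the full state vector of an n-qubit system as
# 2**n complex amplitudes (complex128 = 16 bytes each). Naive estimate,
# not a claim about any particular simulator.
for n in (10, 30, 50, 300):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:>3} parts -> 2^{n} = {amplitudes:.3e} amplitudes ~ {gib:.3e} GiB")
```

A few hundred parts already needs more amplitudes than there are atoms in the observable universe, which is exactly the tension being argued about here.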
Quantum Computing is firmly based on pretending that this isn't how it is, that somehow you can squeeze 2^n bits of information out of a system with 'n' parts to it.
The ever-increasing difficulties with noise, etc. indicate that no, there's no free lunch here, no matter how long we stand in the queue with an empty tray.
You simply do not need to believe this. The universe doesn't need to be "stored" somewhere.
> Quantum Computing is firmly based on pretending that this isn't how it is, that somehow you can squeeze 2^n bits of information out of a system with 'n' parts to it.
Quantum computing does not claim this. It is a theorem (Holevo's bound) that you can extract at most n classical bits from n qubits, and quantum speedups do not rely on doing otherwise.
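To make the distinction concrete, here's a minimal NumPy sketch (the 3-qubit uniform superposition is just my example): the *description* of the state uses 2^n amplitudes, but a measurement only ever hands you n classical bits.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
dim = 2 ** n

# The description of a uniform superposition over all 2^n basis states
# takes 2^n amplitudes...
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)

# ...but measuring the n qubits yields one n-bit outcome, sampled with
# Born-rule probabilities |amplitude|^2. That's all you can ever read out.
probs = np.abs(state) ** 2
outcome = int(rng.choice(dim, p=probs))
print(f"measured {n} qubits, got {n} bits: {outcome:0{n}b}")
```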
Noise is hard, but error correction is a mathematically sound response.
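As a toy illustration of why, here's the 3-qubit bit-flip repetition code simulated with a plain state vector (an assumed example, not anyone's production code): a single bit-flip error is located from two parity syndromes and undone, without ever reading the encoded amplitudes.

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_x(state, qubit):
    """Bit-flip (Pauli-X) on `qubit` of a 3-qubit state vector."""
    out = np.empty_like(state)
    for i in range(len(state)):
        out[i ^ (1 << qubit)] = state[i]
    return out

# Logical qubit a|0> + b|1>, encoded as a|000> + b|111>.
a, b = 0.6, 0.8j
encoded = np.zeros(8, dtype=complex)
encoded[0b000], encoded[0b111] = a, b

# Noise: a bit flip hits one physical qubit (unknown to the decoder).
flipped = int(rng.integers(3))
noisy = apply_x(encoded, flipped)

# Syndrome: parities of qubits (0,1) and (1,2). For this code and this
# error model they are deterministic and reveal *which* qubit flipped,
# without revealing (or disturbing) the amplitudes a and b.
support = int(np.flatnonzero(noisy)[0])        # any basis index in the support
s01 = (support >> 0 & 1) ^ (support >> 1 & 1)
s12 = (support >> 1 & 1) ^ (support >> 2 & 1)
guess = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]

corrected = noisy if guess is None else apply_x(noisy, guess)
print("error on qubit", flipped, "-> syndrome", (s01, s12),
      "-> recovered:", np.allclose(corrected, encoded))
```

Real codes (surface codes etc.) also handle phase flips and come with threshold theorems, but the basic move is the same.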