Say I have a wooden stick and I break it in half in less than a second. Assume a computer would need several minutes to simulate everything that would've happened in the stick. I clearly got the output faster than a computer (and with more precision), so does this imply I'm doing anything particularly fascinating?
I assume a similar scenario could be concocted for a quantum computer, and that it wouldn't be particularly interesting either. So what are the criteria for distinguishing those scenarios from the "interesting" cases? And how do we know which case this one is like?
There is a great video by the mathematician Richard Borcherds on this exact objection to current examples of quantum supremacy.
Our goal was to make an intuitive, fully featured poker application with a clean UI that captures the fun, social vibe of live home games.
We have lots of advanced game settings, including PLO, straddles, flipping cards heads up, time banks, and more.
We will be refining the application and developing more features as we get feedback from users like you! So please comment with questions and/or feedback, or reach out to us directly through the Contact Us page in the bottom left of the home screen.
We hope you have as much fun playing as we did building it.
If you have a measure of correctness and a measure of performance, is there a maximum value of correctness per unit of processing that exists below a full matrix multiply?
Obviously it can be traded off via precision, since that is what floating point is. But is there any method where you can save x% of the computation and have fewer than x% incorrect values in a matrix multiplication?
Gradient descent wouldn't really care about a few (reliably) dud values.
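To give that trade-off a concrete shape, here is a minimal NumPy sketch of randomized approximate matrix multiplication via column/row sampling (in the style of Drineas–Kannan–Mahoney). The function name and parameters are just illustrative, not a reference to any particular library; note that the error shows up as small noise spread over all entries rather than a few isolated "dud" values.

```python
import numpy as np

def sampled_matmul(A, B, keep_fraction=0.5, rng=None):
    """Approximate A @ B by sampling a fraction of the inner dimension.

    A @ B is a sum of k outer products (one per column of A / row of B).
    Keeping a random subset of them, rescaled by the inverse sampling
    probability, gives an unbiased estimate of the full product while
    doing roughly keep_fraction of the multiply-adds.
    """
    rng = np.random.default_rng(rng)
    k = A.shape[1]
    m = max(1, int(keep_fraction * k))
    # Importance sampling proportional to column/row norms.
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(k, size=m, replace=True, p=p)
    scale = 1.0 / (m * p[idx])  # rescale so the estimate is unbiased
    return (A[:, idx] * scale) @ B[idx, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((256, 512))
    B = rng.standard_normal((512, 128))
    exact = A @ B
    approx = sampled_matmul(A, B, keep_fraction=0.5, rng=0)
    rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
    print(f"relative Frobenius error with ~50% of the work: {rel_err:.3f}")
```

The printed relative error gives a feel for how much accuracy you give up for the compute you save; whether that ratio can ever beat the "x% work for x% error" line is exactly the open question above.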