
In his paper "The Argument against Quantum Computers, the Quantum Laws of Nature, and Google’s Supremacy Claims", Gil Kalai argues that quantum advantage will never be reached. For NISQ devices in particular, he argues that for a large variety of noise models, the correlation between the ideal distribution and the noisy one converges to $0$, meaning that the results are effectively unusable.
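As a rough intuition for why such a correlation can vanish (this is my own toy illustration, not a calculation from Kalai's paper): if each gate independently fails with some fixed probability, the fraction of runs that execute with no error at all — roughly the weight of the "ideal" signal in the output distribution — decays exponentially with circuit size.

```python
# Toy model (my own, for illustration): probability that a circuit
# of n_gates gates, each failing independently with probability
# error_rate, runs entirely error-free.

def noiseless_fraction(error_rate: float, n_gates: int) -> float:
    """Probability that none of n_gates gates suffers an error."""
    return (1 - error_rate) ** n_gates

# With a per-gate error rate of 0.5%, the noiseless signal is still
# visible at 100 gates but essentially gone at 10,000 gates.
for n_gates in (100, 1000, 10000):
    print(f"{n_gates} gates: {noiseless_fraction(0.005, n_gates):.2e}")
```

This is why, for a fixed physical error rate, simply making NISQ circuits larger drives the useful signal toward zero, and why the debate centers on whether error correction can beat this decay.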

A common counter-argument is the Threshold Theorem, which states that below an acceptable level of noise, a quantum computer can be error-corrected. Gil Kalai however argues that:

At the center of my analysis is a computational complexity argument stating that $\gamma<\delta$

where

  • $\gamma$ is the rate of noise required for quantum advantage, and
  • $\delta$ is the rate of noise that can realistically be achieved.

Thus, Gil Kalai states that the Threshold Theorem will never be applied in practice, i.e. that the level of noise in NISQ devices will always be higher than the aforementioned threshold.

However, in 2023, the Google Quantum AI team published "Suppressing quantum errors by scaling a surface code logical qubit", where they show, from my understanding, that they managed to perform error correction at threshold, meaning that error-correcting a quantum computer does not add more errors than it corrects.

Is this paper enough to invalidate all of Gil Kalai's arguments? For instance, does the fact that NISQ-generated distributions can be approximated by low-degree polynomials still hold, or is it linked to the previous argument and thus rendered void?

I don't think there has been a follow-up by Gil Kalai on this paper, though I may have missed it.

Tristan Nemoz

3 Answers


Note: views are my own.

I think experiments of this type will refute Gil's arguments, but I would be uncomfortable claiming that yet. I like nice clear don't-even-really-need-statistics answers to questions like this, so personally I'll be waiting for one-in-a-trillion error rates on complex states before saying the physical-noise-is-conspiring position has become indefensible.

Kind of related: I did a twitter poll intended to gauge where people thought quantum computers would break first, if they weren't possible. I don't know what proportion of the answers are from experts vs lay people, what proportion are jokes, etc etc standard disclaimers etc, but the results at least suggest that most people think a storage experiment isn't enough to detect foundational problems.

[Image: Twitter poll results on where respondents expect quantum computers to break first]

Craig Gidney

Noise

"Does Google's error correction paper invalidate Gil Kalai's arguments?"

The only thing that will invalidate Gil Kalai's arguments is an actual experiment that demonstrates quantum advantage. Not an experiment that is described as "one more step towards quantum advantage", nor an experiment that is described as "one more step towards quantum error correction or fault-tolerance".

"Gil Kalai states that the Threshold Theorem will never be applied in practice, that the level of noise in NISQ devices will always be higher that the aforementioned threshold."

The "N" in NISQ is "noise". If a device has so little noise that it's actually lower than some milestone threshold, I would say that we have graduated out of the NISQ era.

Number of qubits

The opening sentence of the Wikipedia article on NISQ defines NISQ computers as ones that are not yet capable of fault-tolerance or quantum advantage:

"The current state of quantum computing [1] is referred to as the noisy intermediate-scale quantum (NISQ) era, [2] characterized by quantum processors containing 50-100 qubits which are not yet advanced enough for fault-tolerance or large enough to achieve quantum supremacy."

It also defines NISQ machines as having up to 100 qubits, which comes from the abstract of Preskill's paper that coined the term NISQ. He never explicitly defined NISQ, so people resorted to defining it based on the few places in the paper where he did mention qubit counts.

Classical supercomputers can currently simulate the full wavefunction of a 54-qubit machine, and in Figure 1 of this 2018 paper you can see that simulations were possible for a 144-qubit version of Google's random quantum circuits, the same type of circuits used for their "supremacy" experiment.

With 100 physical qubits and the type of circuits that Google has, it's unlikely that you can outperform classical supercomputers that were already simulating 144 qubits in the same type of circuit back in 2018. Even if you don't use the clever tensor-network algorithm that the above-linked paper used, and instead use a silly algorithm like a brute-force $\exp(-iHt)$ calculation, classical supercomputers can fully simulate a 54-qubit wavefunction, so your error-correcting code could afford at most $100/54 \approx 1.85$ physical qubits per logical qubit.
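To make the two sides of that comparison concrete, here is a back-of-the-envelope sketch (my own numbers, not from any of the cited papers): the memory needed to store a full $n$-qubit statevector, assuming one complex128 amplitude (16 bytes) per basis state, and the physical-per-logical ratio that 100 physical qubits would leave you.

```python
# Back-of-the-envelope sketch (my own, for illustration):
# a full n-qubit statevector has 2**n complex amplitudes;
# at 16 bytes per complex128 amplitude, memory grows as 2**n * 16.

def statevector_bytes(n_qubits: int) -> int:
    """Memory for a full n-qubit statevector at 16 bytes/amplitude."""
    return (2 ** n_qubits) * 16

for n in (30, 40, 54):
    print(f"{n} qubits: {statevector_bytes(n):.2e} bytes")

# If 100 physical qubits had to encode more than 54 logical qubits
# just to escape brute-force simulation, the code could spend only:
print(f"{100 / 54:.2f} physical qubits per logical qubit")  # 1.85
```

The point of the arithmetic is that no known error-correcting code comes anywhere near 1.85 physical qubits per logical qubit, which is why a ~100-qubit device cannot demonstrate error-corrected quantum advantage.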

Conclusion

It is nice to see that Google was able to claim that a "distance-5" surface code involving 49 qubits "modestly outperforms" a "distance-3" surface code involving 17 qubits, meaning that larger error-correcting codes are indeed performing "modestly" better than smaller ones. However, you are going to need far more than 100 physical qubits to show that a distance-$N$ code can lead to quantum advantage in a real experiment; you would need $N$ to be much larger. That's why they said "modestly outperforms" instead of "outperforms enough for fault-tolerance".
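The 49 and 17 qubit counts above follow from the standard layout of a rotated surface code of distance $d$: $d^2$ data qubits plus $d^2 - 1$ measurement qubits, for $2d^2 - 1$ in total. A minimal sketch of that count (my own helper, not code from the paper):

```python
# Qubit count for a distance-d rotated surface code patch:
# d*d data qubits plus d*d - 1 measurement qubits.

def surface_code_qubits(d: int) -> int:
    """Total qubits in a distance-d rotated surface code patch."""
    return 2 * d * d - 1

print(surface_code_qubits(3))  # 17, the distance-3 patch
print(surface_code_qubits(5))  # 49, the distance-5 patch
```

Since the count grows as $2d^2 - 1$ per logical qubit, pushing the distance high enough to suppress logical errors to useful levels multiplies the qubit requirement far beyond 100.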


It seems to me that once in a while a quantum computer will by chance achieve good error correction, but this will be a random and highly unlikely occurrence. When we consider gravitational waves and similar non-shieldable forms of radiation, quantum error-correction systems will, at first consideration, need their own quantum error-correction systems, which will in turn need their own, and so on. The number of required qubits will be huge, and each level of error-correction apparatus will itself be prone to corruption by noise.

Another consideration is the noisy aspect of the fabric of space-time. Provided the smallest length and time units are the Planck length and the Planck time, the very background of space-time has inherent noise, even in regions of space-time with no classical-scale gravitational waves passing through and no classical-scale curvature.

Even the otherwise monolithic Higgs field is, in theory, noisy at the Planck length and Planck time scales. Macroscopically, the Higgs field is more or less uniform, but at the Planck scales the field vectors occasionally and randomly align in such a way as to produce a vector field within a given portion of this scalar field.

Martin Vesely