
It is my understanding that, given a quantum computer with $n$ qubits and a way to apply $m$ single- and 2-qubit gates, quantum supremacy experiments proceed as follows:

  1. Initialize the $n$ qubits in the all-zeros state $|00\cdots 0\rangle$
  2. Generate a random unitary $U$ consisting of $m$ gates
  3. Apply the circuit $U$ to these qubits, i.e., prepare the state $|\Psi\rangle=U|00\cdots 0\rangle$
  4. Measure $|\Psi\rangle$ to produce an $n$-bit classical string
  5. Compute some property of the sampled strings, such as a cross-entropy, and determine whether quantum supremacy has been achieved by comparing against, say, the uniform distribution.

This can be repeated multiple times.
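
For concreteness, here is a minimal NumPy sketch of steps 1–5 as I understand them, with a toy statevector simulation standing in for the quantum processor (the gate set, depth, and circuit layout are my own illustrative choices, not those of any particular experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5        # number of qubits (small so the statevector fits in memory)
depth = 8    # number of gate layers (illustrative choice)

def random_su2(rng):
    """Haar-random single-qubit unitary via QR of a complex Ginibre matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, gate, qubit, n):
    """Apply a 2x2 gate to the given qubit of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z between qubits q1 and q2."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

# 1. Initialize the all-zeros state |00...0>.
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0

# 2.-3. Generate and apply a random circuit U: layers of random
# single-qubit gates, each followed by CZ gates on alternating pairs.
for layer in range(depth):
    for q in range(n):
        psi = apply_1q(psi, random_su2(rng), q, n)
    for q in range(layer % 2, n - 1, 2):
        psi = apply_cz(psi, q, q + 1, n)

# 4. Measure: draw one n-bit string from |<x|Psi>|^2.
probs = np.abs(psi) ** 2
probs /= probs.sum()          # guard against rounding error
x = rng.choice(2 ** n, p=probs)
print(f"sampled bitstring: {x:0{n}b}")

# 5. One property of the sample: its ideal probability, compared with
# the uniform value 1/2^n (supremacy circuits favour "heavy" strings).
print(f"P(x) = {probs[x]:.3e} vs uniform {1 / 2 ** n:.3e}")
```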

  • Would a claim of quantum supremacy require applying the same random unitary $U$ each time, for each sample? Or is there a different pseudo-random $U$ for each sample?

From what I'm reading, $U$ is broken up into layers of pseudo-random single-qubit gates, each followed by a set of 2-qubit gates. Is either or both of these fixed, or do they change for each sample?


2 Answers


Generally speaking, to prove quantum supremacy, you don't need to sample several times from the same unitary/circuit/output probability distribution. If you extract even a single sample from the output probability distribution of a circuit that you know is extremely hard to simulate classically, then you have already achieved something that you could not do (efficiently) classically.

This is because these are sampling problems. Such problems are not about estimating some property of an output probability distribution, but simply about the sampling itself.

The caveat is that, in practice, observing just one output state from a given circuit wouldn't look like much of an achievement. In other words, one needs to gather enough "circumstantial evidence" to convince most people that the claim is solid and legitimate. This often includes actually retrieving some statistical features of the distribution, which allows one to check that the distribution was indeed the intended one. It is, however, important to realise that the problem is not that of computing such features, but only that of sampling from the underlying probability distribution.

In conclusion, to more directly address some of the points raised: one unitary sampled once is, in principle, enough. But one wants to gather enough evidence to make the claim as solid as possible, and for this it is useful to do things like estimating properties of the experimental output distribution.
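
As a toy illustration of this kind of statistical check (a sketch, with a Haar-random state standing in for the ideal output of a deep random circuit, which is an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
N = 2 ** n

# Stand-in for the ideal output distribution of a deep random circuit:
# a Haar-random state, whose probabilities follow the Porter-Thomas
# (exponential) law Pr(p) = N * exp(-N p) rather than being flat.
z = rng.normal(size=N) + 1j * rng.normal(size=N)
probs = np.abs(z) ** 2
probs /= probs.sum()

# Sample as the experiment would, then inspect the ideal probabilities
# of the observed strings: under Porter-Thomas their mean is ~2/N,
# whereas a uniform (classically trivial) sampler would give ~1/N.
samples = rng.choice(N, size=100_000, p=probs)
print("N * mean P(x) over samples:", N * probs[samples].mean())  # ~2.0
print("N * mean P(x) for uniform :", 1.0)
```

A check like this is evidence that the sampled strings really came from the intended distribution, even though producing the samples, not computing these statistics, is the actual task.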

glS

In the Sycamore paper linked in the comments, in the description of FIG. 4, the authors state:

  • ...For each $n$, each instance is sampled with $N_s$ between 0.5 M and 2.5 M... For $m=20$, obtaining 1M samples on the quantum processor takes 200 seconds, while an equal fidelity classical sampling would take 10,000 years on 1M cores, and verifying the fidelity would take millions of years.

Thus, it is clear that the authors of the Sycamore paper repeatedly sample from the same unitary (the same circuit instance).

Thinking about it now, this makes sense: you would need to sample more than once to be able to accurately estimate the fidelity of your samples.
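
For reference, the fidelity estimator in the Sycamore paper is the linear cross-entropy benchmark, $\mathcal{F}_{\text{XEB}} = 2^n \langle P(x_i) \rangle_i - 1$, averaged over the measured bitstrings $x_i$. Here is a minimal NumPy sketch of why many samples from the same circuit are needed: the estimate only converges to the true fidelity as $N_s$ grows (the depolarizing-style noise model and the Haar-random stand-in distribution are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n):
    """Linear cross-entropy benchmark F_XEB = 2^n * <P(x_i)> - 1."""
    return (2 ** n) * np.mean(ideal_probs[samples]) - 1.0

rng = np.random.default_rng(2)
n = 12
N = 2 ** n

# Stand-in ideal distribution: Porter-Thomas-like probabilities from a
# Haar-random state (an assumption; in the experiment, P(x) comes from
# classically simulating the actual circuit U).
z = rng.normal(size=N) + 1j * rng.normal(size=N)
p = np.abs(z) ** 2
p /= p.sum()

# Toy noise model: with probability F the device returns an ideal sample,
# otherwise a uniformly random string, so E[F_XEB] is approximately F.
F = 0.3
for n_s in (100, 10_000, 1_000_000):
    ideal = rng.choice(N, size=n_s, p=p)
    junk = rng.integers(N, size=n_s)
    samples = np.where(rng.random(n_s) < F, ideal, junk)
    print(f"N_s = {n_s:>9,}: F_XEB = {linear_xeb(p, samples, n):+.3f}"
          f"  (true F = {F})")
```

With only 100 samples the estimate fluctuates wildly around the true fidelity, while at a million samples (the scale of $N_s$ quoted above) it settles down, which is consistent with the paper drawing 0.5 M to 2.5 M samples per circuit instance.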

Mark Spinelli