
The Quantum Algorithm Zoo lists a host of algorithms for which quantum computing offers speedups (exponential, polynomial, etc.). However, those speedups are based on asymptotic computational complexity (Big-O complexity).

For a realistic implementation on a quantum computer (or even a simulator), the algorithm might require additional steps to obtain a satisfactory result, e.g. multiple trials for quantum state tomography, or probabilistic cloning. I am not considering the non-idealities of a physical quantum computer; thus, overheads of physical-to-logical qubit mapping for quantum error correction, overheads of nearest-neighbour mapping, and experimental environmental errors are not considered. I am only considering the effects of projective measurement and no-cloning.

How can I compare a quantum algorithm to its classical counterpart while taking such factors into account? The overheads might be polynomial, linear, or even constant, so asymptotically the quantum algorithm will still outperform the corresponding classical one. But "asymptotically" means for a large enough problem size. I am interested in determining whether that cross-over point (the problem size at which quantum advantage sets in) is realistic for my case.
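To make the cross-over question concrete, here is a minimal back-of-the-envelope sketch of the kind of comparison I have in mind (all constants are illustrative assumptions, not measured values): a classical O(N) search against a quantum O(sqrt(N)) search burdened with a constant-factor overhead from slower operations and repeated runs.

```python
import math

# Toy cost models (all constants below are illustrative assumptions):
# classical exhaustive search ~ c_classical * N oracle calls,
# quantum search ~ repetitions * c_quantum * sqrt(N) oracle calls,
# where 'repetitions' stands for re-running the whole circuit for readout.
c_classical = 1.0      # assumed cost per classical oracle call
c_quantum = 1000.0     # assumed cost per quantum oracle call (slower hardware)
repetitions = 3        # assumed number of full re-runs for a reliable readout

def classical_cost(n):
    return c_classical * n

def quantum_cost(n):
    return repetitions * c_quantum * math.sqrt(n)

# Scan problem sizes to find where the quantum cost, including its
# constant-factor overheads, drops below the classical cost.
n = 1
while quantum_cost(n) >= classical_cost(n):
    n *= 2
print(f"cross-over somewhere below N = {n:,}")
```

With these made-up constants the cross-over sits around N of order ten million, which shows how quickly constant overheads push the break-even point outwards.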

The specific algorithm I am currently working with is an improvement of Grover's search proposed by Biham et al., in which the initial amplitude distribution is not uniform and there are multiple solutions.
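As a toy sanity check (not the analytical treatment of Biham et al.), the following sketch applies the standard Grover iteration to an arbitrary non-uniform initial state with several marked items and tracks the success probability; the search-space size, marked indices, and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                          # search-space size (toy example)
marked = [3, 17, 42]            # multiple solutions (arbitrary choice)

# Arbitrary non-uniform initial amplitude distribution, normalised.
psi0 = rng.random(N) + 0.1
psi0 = psi0 / np.linalg.norm(psi0)

uniform = np.full(N, 1 / np.sqrt(N))   # |s>, used by the standard diffusion step

def grover_step(psi):
    # Oracle: flip the sign of the marked amplitudes.
    psi = psi.copy()
    psi[marked] *= -1
    # Diffusion: inversion about the uniform superposition, 2|s><s| - I.
    return 2 * uniform * (uniform @ psi) - psi

psi = psi0
for k in range(1, 15):
    psi = grover_step(psi)
    p_success = np.sum(np.abs(psi[marked]) ** 2)
    print(f"iteration {k:2d}: P(measure a solution) = {p_success:.3f}")
```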

Aritra

1 Answer


I am only considering the effect of projective measurement and no-cloning.

Consider that a (long) realistic quantum computation would have to involve quantum error correction, which only works if the errors are rather limited: your projective measurement will be rather close (to within 1% or so) to introducing no errors other than quantum projection noise. This is usually irrelevant because a typical algorithm will not require you to do state tomography, but will instead distill a binary result into the qubits that are measured at the end. If it did involve state tomography, the (asymptotic) overhead would enter the big-O notation.
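To illustrate the difference, here is a rough sketch (the probability and precision below are made-up numbers): distilling a binary answer needs only a handful of shots, whereas estimating an amplitude against quantum projection noise, as tomography would require, needs on the order of 1/eps^2 shots for precision eps.

```python
import numpy as np

rng = np.random.default_rng(1)

p = 0.9        # assumed probability of reading the correct binary answer per run
eps = 0.01     # assumed target precision for a tomography-style estimate

# Binary readout: a handful of repeats suffices, since a majority vote over
# k shots fails with probability that shrinks exponentially in k.
shots_binary = 5
votes = rng.random(shots_binary) < p
print("majority-vote answer correct:", votes.sum() > shots_binary / 2)

# Estimating the underlying probability itself suffers quantum projection
# noise: the standard error after n shots is sqrt(p(1-p)/n), so reaching
# precision eps needs roughly p(1-p)/eps**2 shots.
shots_estimate = int(np.ceil(p * (1 - p) / eps**2))
samples = rng.random(shots_estimate) < p
print(f"{shots_estimate} shots give estimate {samples.mean():.4f} (true value {p})")
```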

There is indeed some overhead from the no-cloning theorem: the (quantum part of the) result of, e.g., Shor's algorithm must be read out more than once to form the greatest common divisor. Such an overhead obviously depends on the algorithm; in the case of Shor's algorithm it is small (typically a factor of 2 to 3, depending in detail on whether you are willing to take care of small factors classically, which is simple). A similarly (but not quite as) small overhead occurs if you want to generalize Grover's algorithm to the case where an unknown number of solutions exists.
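As a rough sketch of that last overhead: since no-cloning forbids copying the output state, every extra readout costs a full re-run, and a Boyer-Brassard-Hoyer-Tapp-style exponential search handles an unknown number of solutions while still using only on the order of sqrt(N/M) Grover iterations in total. The search-space size and number of solutions below are arbitrary, and each Grover run is simulated classically via its known success probability sin^2((2j+1)theta).

```python
import math
import random

random.seed(2)

N = 1 << 20            # search-space size (illustrative)
M = 7                  # number of solutions -- unknown to the search loop below
theta = math.asin(math.sqrt(M / N))

def run_grover(j):
    """Simulate one run of j Grover iterations followed by a measurement:
    a solution is measured with probability sin^2((2j+1)*theta)."""
    return random.random() < math.sin((2 * j + 1) * theta) ** 2

# Exponential search for an unknown number of solutions: pick a random
# iteration count below a cap m that grows after every failed run.
m = 1.0
total_iterations = 0   # proxy for the total oracle-call overhead
runs = 0
while True:
    j = random.randrange(int(m))
    runs += 1
    total_iterations += j
    if run_grover(j):
        break
    m = min(m * 6 / 5, math.sqrt(N))

print(f"found a solution after {runs} runs and ~{total_iterations} Grover iterations")
print(f"for comparison, sqrt(N/M) ~ {math.sqrt(N / M):.0f}")
```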