6

It is a known trick in quantum computing to use additional ancilla qubits and uncomputation to construct efficient quantum circuits.
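For concreteness, here is a minimal numpy sketch of the compute-copy-uncompute pattern (the 4-qubit toy example and the `mcx` helper are my own illustration, not part of the question): an ancilla computes $x_1 \wedge x_2$, a CNOT copies the result out, and repeating the Toffoli returns the ancilla to $|0\rangle$.

```python
import numpy as np

def mcx(n, controls, target):
    """2^n x 2^n permutation matrix: X on `target`, controlled on all
    `controls` being |1> (qubit 0 is the most significant bit)."""
    dim = 2 ** n
    U = np.eye(dim)
    for basis in range(dim):
        bits = [(basis >> (n - 1 - q)) & 1 for q in range(n)]
        if all(bits[c] for c in controls) and bits[target] == 0:
            flipped = basis | (1 << (n - 1 - target))
            U[basis, basis] = U[flipped, flipped] = 0.0
            U[basis, flipped] = U[flipped, basis] = 1.0
    return U

n = 4                                          # qubits: x1, x2, ancilla a, output o
toffoli = mcx(n, controls=[0, 1], target=2)    # a ^= x1 AND x2   (compute)
cnot    = mcx(n, controls=[2],    target=3)    # o ^= a           (copy out)
circuit = toffoli @ cnot @ toffoli             # compute, copy, uncompute

for x1 in (0, 1):
    for x2 in (0, 1):
        state = np.zeros(2 ** n)
        state[(x1 << 3) | (x2 << 2)] = 1.0     # prepare |x1 x2 0 0>
        idx = int(np.argmax(circuit @ state))
        a, o = (idx >> 1) & 1, idx & 1
        assert a == 0 and o == (x1 & x2)       # ancilla restored to |0>, AND copied out
```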

I wonder: are there any rigorous results that estimate how large this effect can be?

For example, is it possible to define a unitary on $n$ qubits whose optimal implementation (the one with minimal circuit depth) requires an exponential number of additional ancilla qubits?

By the way, a similar question could be asked purely for classical reversible circuits. I guess this paper https://arxiv.org/abs/1504.05155 could help in solving that case.

Danylo Y

1 Answer

2

CW (community wiki) because I can't make anything rigorous


I can imagine a situation where a unary encoding on $2^n$ qubits is more efficient, in terms of circuit depth, than a binary encoding on only $n$ qubits. I can see a use-case where a large number of ancilla qubits are used to execute a circuit more efficiently in the unary encoding, all of which are then uncomputed back to $|0\rangle$.

For example, conventional classical computer engineering and circuit design make heavy use of decoders/demultiplexers and multiplexers to switch between these two encodings.
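Here is a small classical sketch of that decoder/multiplexer switching (the Python toy below is my own illustration, not from any reference): a decoder expands a binary index into a one-hot word, on which an increment becomes a single carry-free cyclic shift, and a mux-style encoder converts back.

```python
def decode(index: int, n: int) -> list[int]:
    """Decoder: binary index -> one-hot word on 2^n wires, exactly one set."""
    return [1 if i == index else 0 for i in range(2 ** n)]

def encode(one_hot: list[int]) -> int:
    """Encoder (mux select logic): one-hot word -> binary index."""
    return one_hot.index(1)

n = 3
word = decode(5, n)
# On the one-hot word, "increment mod 2^n" is a single cyclic shift,
# with no carry chain -- which is the depth advantage unary buys:
incremented = word[-1:] + word[:-1]
assert encode(incremented) == (5 + 1) % 2 ** n
```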

Something like this happens in the recent decoded quantum interferometry (DQI) algorithm of Jordan et al., with its one-hot or $k$-hot Dicke states. (Both DQI and the Yamakawa-Zhandry algorithm realize some non-unitary process by implementing a unitary process and decoding errors associated with the desired non-unitarity, all while in superposition.)
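For reference, here is what the $k$-hot Dicke state looks like as a state vector, following its standard definition as the uniform superposition over all computational basis states of Hamming weight exactly $k$ (the code is just an illustrative sketch):

```python
import numpy as np

def dicke_state(n: int, k: int) -> np.ndarray:
    """Uniform superposition over all n-bit basis states of Hamming weight k."""
    amps = np.zeros(2 ** n)
    support = [b for b in range(2 ** n) if bin(b).count("1") == k]
    amps[support] = 1 / np.sqrt(len(support))
    return amps

psi = dicke_state(4, 1)    # k = 1 gives the one-hot (W) state on 4 qubits
assert np.isclose(np.linalg.norm(psi), 1.0)
```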

As another example, I can envision temporarily switching from first quantization, which uses $O(d\log_2 n)$ qubits to record which of $n$ modes each of $d$ fermions occupies, to second quantization, which uses $O(n)$ such qubits. The exponentially many extra qubits used in the second-quantized encoding amount to additional ancillae. See, e.g., Fault-Tolerant Quantum Simulations of Chemistry in First Quantization by Su et al.
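A classical sketch of the two encodings (a toy of my own following the standard definitions, not Su et al.'s circuits): first quantization stores, for each of $d$ fermions, which of $n$ modes it occupies, while second quantization stores one occupation bit per mode.

```python
from math import ceil, log2

def first_to_second(occupied_modes: list[int], n: int) -> list[int]:
    """d mode indices (first quantization) -> length-n occupation bitstring."""
    bits = [0] * n
    for m in occupied_modes:
        bits[m] = 1
    return bits

def second_to_first(bits: list[int]) -> list[int]:
    """Occupation bitstring -> sorted list of occupied mode indices."""
    return [m for m, b in enumerate(bits) if b]

n, occupied = 16, [2, 7, 11]           # d = 3 fermions among n = 16 modes
d = len(occupied)
print(d * ceil(log2(n)), "bits (first quantization) vs", n, "bits (second)")
assert second_to_first(first_to_second(occupied, n)) == occupied
```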


On the above note, there probably are rigorous results in graph theory showing that, for $d$-sparse graphs on $n$ vertices, working with the adjacency-matrix representation using $O(n^2)$ bits can be much more efficient in terms of time complexity than working with the adjacency-list representation, which uses only $O(nd\log_2 n)$ bits. For example, I would much rather calculate the lowest eigenvalue or the determinant from the matrix than from the list: I would convert the list to a matrix and then do the necessary matrix operations. The extra bits are certainly helpful ancillae!
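As a quick sketch of that point (the small graph below is made up): convert the sparse adjacency list into a dense matrix, spending the $O(n^2)$ "ancilla" bits, and then run standard dense linear algebra on it.

```python
import numpy as np

adj_list = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}   # a d-sparse graph, n = 4
n = len(adj_list)

# Spend O(n^2) bits to build the dense adjacency matrix.
A = np.zeros((n, n))
for u, nbrs in adj_list.items():
    for v in nbrs:
        A[u, v] = 1.0

lowest = np.linalg.eigvalsh(A).min()   # easy once the matrix exists
det = np.linalg.det(A)
print(f"lowest eigenvalue = {lowest:.4f}, determinant = {det:.4f}")
```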

Mark Spinelli