
From a few VQE tutorials online I see that they normally start with something like:

VQE is a way of getting a good estimate for an upper bound on the ground-state energy of a quantum system's Hamiltonian. The Hamiltonian is known.

Then they proceed to show you a single-qubit Hamiltonian as a 2x2 matrix and go through the VQE algorithm to get the ground state energy (the smaller of the two eigenvalues).

Cool, but playing devil's advocate: I could have just pulled out my high-school maths and found the smaller of the two eigenvalues analytically. And with larger matrices, a classical computer will do the job just fine (which, as far as I know, is not a hard problem).
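To make that concrete, here's the kind of classical calculation I have in mind (a minimal numpy sketch; the 2x2 Hamiltonian is just something I made up):

```python
import numpy as np

# A made-up single-qubit Hamiltonian: any 2x2 Hermitian matrix will do.
H = np.array([[1.0, 0.5 - 0.2j],
              [0.5 + 0.2j, -1.0]])

# No quantum computer needed: exact diagonalization gives the ground state energy.
ground_energy = np.linalg.eigvalsh(H).min()
print(ground_energy)
```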

My question then is: at what point does a Hamiltonian's matrix representation become so large that, even though we know it, we'd go to the trouble of probing the quantum system itself rather than solving for the eigenvalues on a classical computer? And how does that relate to the current and near-future state of classical and quantum computing?

Please feel free to completely sidestep my question and nudge me back on track if I'm off-paradigm or started from the wrong premise.

Alexander Soare

1 Answer


The computational advantage of using quantum computers can be reached if the classical resources (memory, number of operations) required to solve a particular problem grow exponentially in a certain parameter, while the quantum resources (memory, number of operations, number of measurements) grow only polynomially in the same parameter.

Finding the lowest eigenvalue of an arbitrary matrix may very well NOT be a problem that you can solve on a quantum computer with an exponential improvement over classical methods. (Please, someone correct me if I'm wrong.)

However, if you are interested in finding the lowest eigenvalue of a Hamiltonian representing some physical system, there is a good chance that using quantum computers (in general) and VQE (in particular) may give you some advantage.

Consider, as an example, a Hamiltonian describing a system of $n$ particles. The dimension of the Hilbert space in this case grows with the number of particles as $N=O(\operatorname{exp}(n))$. For any reasonable encoding, the number of qubits required to encode the state of such a system on a quantum computer grows as $O(\operatorname{poly}(n))$. If you want to use VQE, the other parameters you are concerned with are:

  • The number of Pauli operators in the expansion of the Hamiltonian — grows as $O(\operatorname{poly}(n))$ for local interactions (see the sketch after this list).
  • The number of gates required to prepare an ansatz state — grows as $O(\operatorname{poly}(n))$ for many known cases (e.g. look up UCCSD for quantum chemistry: one, two).
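To illustrate the first point, here is a minimal sketch (plain numpy, with a made-up 2-qubit Hamiltonian rather than anything physical) of the expansion $H = \sum_P c_P P$ over Pauli strings, with $c_P = \operatorname{Tr}(PH)/2^n$. For a local physical Hamiltonian, almost all of these coefficients vanish, which is why the number of terms stays polynomial in $n$:

```python
import numpy as np
from itertools import product

PAULIS = {
    'I': np.eye(2, dtype=complex),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_decompose(H, n):
    """Expand an n-qubit Hermitian matrix H as sum_P c_P * P over Pauli strings P."""
    coeffs = {}
    for labels in product('IXYZ', repeat=n):
        P = np.array([[1.0 + 0j]])
        for l in labels:
            P = np.kron(P, PAULIS[l])
        c = np.trace(P @ H) / 2**n       # c_P = Tr(P H) / 2^n
        if abs(c) > 1e-12:
            coeffs[''.join(labels)] = c.real  # real for Hermitian H
    return coeffs

# Made-up 2-qubit Hamiltonian (a transverse-field-Ising-like toy example).
X, Z, I2 = PAULIS['X'], PAULIS['Z'], PAULIS['I']
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))
print(pauli_decompose(H, n=2))   # {'IX': 0.5, 'XI': 0.5, 'ZZ': 1.0}
```

Of course, enumerating all $4^n$ Pauli strings like this is itself exponential; for a physical Hamiltonian you would write down the (polynomially many) terms directly from the interactions.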

Therefore, in this case all the quantum resources grow polynomially in $n$:

memory, number of gates, number of measurements $= O(\operatorname{poly}(n))$.
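As a toy illustration of how these pieces fit together, here is a numpy sketch (no hardware involved, and the single-parameter product ansatz below is made up purely for illustration, not UCCSD) showing that the VQE energy estimate is assembled from the Pauli expectation values of the trial state, and that its minimum is an upper bound on the true ground-state energy:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Same toy 2-qubit Hamiltonian as in the decomposition sketch above,
# written as {Pauli string: coefficient}.
terms = {'ZZ': 1.0, 'XI': 0.5, 'IX': 0.5}
matrices = {'ZZ': np.kron(Z, Z), 'XI': np.kron(X, I2), 'IX': np.kron(I2, X)}

def ansatz(theta):
    """Made-up trial state |psi(theta)> = (RY(theta) x RY(theta)) |00>."""
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)
    return np.kron(ry, ry) @ np.array([1, 0, 0, 0], dtype=complex)

def energy(theta):
    # On hardware each <P> would be estimated from repeated measurements;
    # here it is computed exactly for the toy state.
    psi = ansatz(theta)
    return sum(c * np.real(psi.conj() @ matrices[p] @ psi) for p, c in terms.items())

thetas = np.linspace(0, 2 * np.pi, 400)
print(min(energy(t) for t in thetas))   # ~ -1.0, above the exact ground energy of -sqrt(2)
```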

In the classical case, one would generally have to deal with vectors and matrices in $N = O(\operatorname{exp}(n))$ dimensions. Therefore, in this case we clearly have an improvement in memory and, quite likely, in the number of operations (asymptotically) as well.
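To get a feel for the numbers, here is a back-of-the-envelope sketch (assuming the simplest classical representation: a dense state vector of $2^n$ double-precision complex amplitudes):

```python
# Memory needed to store a dense n-qubit state vector classically:
# 2**n amplitudes at 16 bytes each (double-precision complex).
for n in (10, 30, 50):
    gib = 16 * 2**n / 2**30
    print(f"n = {n:2d}: {gib:.3g} GiB")
# n = 10: ~1.5e-05 GiB (16 KiB); n = 30: 16 GiB; n = 50: ~1.7e+07 GiB (16 PiB)
```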

mavzolej