Is there anything that can be said about whether and when we can efficiently prepare the eigenstate $|\lambda_1\rangle$ corresponding to the second-lowest eigenvalue of a given Hamiltonian, or in any other way learn what this energy is?

For example, let $G$ be a sparse, $d$-regular graph having adjacency matrix $A$ with Laplacian $\mathcal L=dI-A$. We can simulate $W=\exp(-i\mathcal L t)$ easily enough.
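As a purely classical illustration of the objects involved (not the quantum simulation itself), here is a small NumPy/SciPy sketch that builds $\mathcal L=dI-A$ for a toy $2$-regular graph (a cycle, my own choice) and forms $W=\exp(-i\mathcal L t)$ by direct matrix exponentiation; on a quantum computer $W$ would instead be implemented with sparse Hamiltonian simulation:

```python
import numpy as np
from scipy.linalg import expm

# Toy example: a cycle on n vertices (so d = 2), used purely as a classical
# sanity check of L and W; on a quantum computer W would be implemented via
# sparse Hamiltonian simulation rather than dense expm.
n, d = 8, 2
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1

L = d * np.eye(n) - A              # Laplacian  L = dI - A
t = 1.0
W = expm(-1j * L * t)              # walk unitary  W = exp(-i L t)

print(np.allclose(W @ W.conj().T, np.eye(n)))   # True: W is unitary
```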

My intuition, which I don't know how to formalize, is that given two vertices $s$ and $t$ that are far from each other, even across a (Cheeger) bottleneck, the spectral decomposition of a superposition such as:

$$\vert\psi\rangle=\frac{1}{\sqrt{2}}(\vert s\rangle-\vert t\rangle)$$

likely has a large component corresponding to this second-lowest eigenstate.
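As a quick classical sanity check of this intuition (not a proof, and on a toy graph of my own choosing rather than a regular one), the following NumPy snippet builds two $5$-cliques joined by a single bridge edge, takes $s$ and $t$ on opposite sides of the bottleneck, and computes the weight of $|\psi\rangle$ on each Laplacian eigenvector:

```python
import numpy as np

# Two 5-cliques joined by a single bridge edge: a strong (Cheeger) bottleneck.
# The graph is not regular, so we use the usual Laplacian L = D - A here.
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1 - np.eye(5)          # clique on vertices 0..4
A[5:, 5:] = 1 - np.eye(5)          # clique on vertices 5..9
A[4, 5] = A[5, 4] = 1              # bridge edge
Lap = np.diag(A.sum(axis=1)) - A

evals, evecs = np.linalg.eigh(Lap)  # eigenvalues in ascending order

s, t = 0, 9                         # vertices on opposite sides of the bottleneck
psi = np.zeros(n)
psi[s], psi[t] = 1 / np.sqrt(2), -1 / np.sqrt(2)

weights = np.abs(evecs.T @ psi) ** 2   # |<lambda_k | psi>|^2 for each k
print("weight on |lambda_1>:", weights[1])
print("full distribution   :", np.round(weights, 3))
```

On this toy example the weight on $|\lambda_0\rangle$ is exactly zero (since $|\psi\rangle$ is orthogonal to the uniform vector), and the weight on $|\lambda_1\rangle$ comes out to a sizeable constant fraction, with the remainder spread over higher eigenvectors because $|s\rangle$ and $|t\rangle$ are localized states; whether that component is large enough in general is exactly the question.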

Example of Approach

Indeed, Guoming Wang's paper on quantumly estimating the effective resistance between nodes reminds us that the resistance metric is based on this Laplacian; a bottleneck between $s$ and $t$ certainly increases the resistance between them.
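For context, the effective resistance can be written in terms of the Laplacian pseudoinverse as $R_{st}=(e_s-e_t)^{\top}\mathcal L^{+}(e_s-e_t)$; here is a minimal classical check of that formula on a path graph, where the answer is just unit resistors in series (a toy example of my own, unrelated to Wang's algorithm itself):

```python
import numpy as np

# Effective resistance from the Laplacian pseudoinverse:
#   R_st = (e_s - e_t)^T  L^+  (e_s - e_t)
# Check on the path graph 0-1-2-3: three unit resistors in series, so R_03 = 3.
A = np.zeros((4, 4))
for v in range(3):
    A[v, v + 1] = A[v + 1, v] = 1
Lap = np.diag(A.sum(axis=1)) - A
Lplus = np.linalg.pinv(Lap)

e = np.eye(4)
x = e[0] - e[3]
print(x @ Lplus @ x)   # -> 3.0
```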

Of course, if $G$ is disconnected, with $s$ on one component and $t$ on another, this may be very hard. But if $G$ is instead promised to be connected, is $|\psi\rangle=\frac{1}{\sqrt 2}\big(|s\rangle-|t\rangle\big)$ a good ansatz, i.e., is $|\lambda_1\rangle$ accessible through eigenvalue sampling of $|\psi\rangle$?

Mark Spinelli

1 Answer


This is the goal of the Variational Quantum Deflation (VQD) algorithm. Let us consider a Hamiltonian $H$ which has an efficient Pauli decomposition. First of all, we can perform VQE on $H$, which allows us to find the lowest eigenvalue $\lambda_0$ and the associated eigenvector $\left|\lambda_0\right\rangle$. Remember that in VQE, our cost function is: $$\langle\psi(\theta)|H|\psi(\theta)\rangle\,.$$

A strategy that seems natural but doesn't work is to say "now we just do the same thing in the subspace orthogonal to $\left|\lambda_0\right\rangle$ and we're done". The reasoning holds mathematically, but we have no way to restrict our parameters $\theta$ to generate only states orthogonal to $\left|\lambda_0\right\rangle$. What we can do, however, is make the optimizer enforce this for us by choosing the following cost function: $$\langle\psi(\theta)|H|\psi(\theta)\rangle+\beta\left|\left\langle\psi(\theta)\middle|\psi\left(\theta_0\right)\right\rangle\right|^2$$ where $\theta_0$ are the parameters that VQE originally found, so that $\left|\psi\left(\theta_0\right)\right\rangle\approx\left|\lambda_0\right\rangle$, and $\beta$ is a well-chosen constant (more on that later). For the argument's sake, let us assume that $\left|\psi\left(\theta_0\right)\right\rangle=\left|\lambda_0\right\rangle$ exactly. Since we know $\theta_0$, we can compute the overlap between $\left|\lambda_0\right\rangle$ and $|\psi(\theta)\rangle$, for instance by means of a SWAP test.
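To make this concrete, here is a minimal, self-contained sketch of that cost function using exact state-vector simulation in NumPy/SciPy; the two-qubit Hamiltonian, the three-parameter RY/CNOT ansatz and the random-restart optimisation are all toy choices of mine, and the overlap with $|\psi(\theta_0)\rangle$ is computed directly rather than with a SWAP test:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2-qubit Hamiltonian with an efficient Pauli decomposition:
#   H = 1.0 * Z(x)Z + 0.5 * X(x)I + 0.5 * I(x)X
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
coeffs = [1.0, 0.5, 0.5]
paulis = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]
H = sum(c * P for c, P in zip(coeffs, paulis))

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def ansatz(theta):
    """|psi(theta)> = (RY(x)RY) . CNOT . (RY(x)I) |00>, enough for any real 2-qubit state."""
    state = np.zeros(4)
    state[0] = 1.0
    state = np.kron(ry(theta[0]), I2) @ state
    state = CNOT @ state
    return np.kron(ry(theta[1]), ry(theta[2])) @ state

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi

# Step 1: plain VQE for the ground state (a few random restarts).
vqe = min((minimize(energy, np.random.uniform(0, 2 * np.pi, 3)) for _ in range(10)),
          key=lambda r: r.fun)
theta0, lam0 = vqe.x, vqe.fun
psi0 = ansatz(theta0)

# Step 2: deflated cost  <H> + beta * |<psi(theta)|psi(theta0)>|^2,
# with the crude but safe choice  beta = 2 * sum_i |c_i|  >=  lambda_max - lambda_0.
beta = 2 * sum(abs(c) for c in coeffs)

def vqd_cost(theta):
    psi = ansatz(theta)
    return psi @ H @ psi + beta * abs(psi @ psi0) ** 2

vqd = min((minimize(vqd_cost, np.random.uniform(0, 2 * np.pi, 3)) for _ in range(10)),
          key=lambda r: r.fun)
lam1 = energy(vqd.x)

print("VQE / VQD estimates:", lam0, lam1)
print("two lowest exact eigenvalues:", np.linalg.eigvalsh(H)[:2])
```

In practice the ansatz, the optimiser and the number of restarts would be problem- and hardware-dependent choices; the point here is only the shape of the deflated cost function.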

Intuitively, the algorithm will try to minimize both the energy, via the first term, and the overlap with $\left|\lambda_0\right\rangle$, via the second term. If $\beta=0$, the algorithm will just return $\theta_0$, and if $\beta=+\infty$, it will return any set of parameters that produces a state orthogonal to $\left|\lambda_0\right\rangle$. In fact, it is not hard to show that this cost function is minimized at $\lambda_1$, by $\left|\lambda_1\right\rangle$, as long as $\beta>\lambda_1-\lambda_0$: first, one easily shows that an optimal $|\psi(\theta)\rangle$ must lie in the linear span of $\left|\lambda_0\right\rangle$ and $\left|\lambda_1\right\rangle$; writing $p=\left|\left\langle\lambda_0\middle|\psi(\theta)\right\rangle\right|^2$ then gives the cost function $$\lambda_1+p\left(\lambda_0-\lambda_1+\beta\right)\,.$$ You want this to be minimized at $p=0$, which means that we must have $\lambda_0-\lambda_1+\beta>0$ (the one-line computation is spelled out below).
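For completeness, here is that computation, assuming $|\psi(\theta)\rangle=\sqrt{p}\,e^{i\alpha}\left|\lambda_0\right\rangle+\sqrt{1-p}\,e^{i\gamma}\left|\lambda_1\right\rangle$ (nothing beyond a restatement of the step above):

```latex
\begin{align*}
\langle\psi(\theta)|H|\psi(\theta)\rangle
  + \beta\,\bigl|\langle\psi(\theta)|\lambda_0\rangle\bigr|^{2}
  &= p\,\lambda_0 + (1-p)\,\lambda_1 + \beta\,p \\
  &= \lambda_1 + p\,\bigl(\lambda_0 - \lambda_1 + \beta\bigr),
\end{align*}
```

which, as a function of $p\in[0,1]$, is minimized at $p=0$ with value $\lambda_1$ precisely when $\lambda_0-\lambda_1+\beta>0$.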

But now, we don't know $\lambda_1$, so we don't know how to choose $\beta$. There are several ways around this: one is to estimate the largest eigenvalue $\lambda_{\text{max}}$ of $H$ via VQE (you just have to flip the sign of the cost function); another is to upper-bound $\lambda_{\text{max}}$ using the Pauli decomposition. In either case, taking $\beta\geqslant\lambda_{\text{max}}-\lambda_0$ works, since $\lambda_{\text{max}}-\lambda_0\geqslant\lambda_1-\lambda_0$.
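To illustrate the second option: since each Pauli string has operator norm $1$, the triangle inequality gives $\lambda_{\text{max}}\leqslant\sum_i|c_i|$ and $\lambda_0\geqslant-\sum_i|c_i|$, so the (loose) choice $\beta=2\sum_i|c_i|$ always satisfies the condition above. A small check on a toy Hamiltonian of my own choosing:

```python
import numpy as np

# H = sum_i c_i P_i, where every Pauli string P_i has operator norm 1, so
#   lambda_max <=  sum_i |c_i|   and   lambda_0 >= -sum_i |c_i|,
# hence beta = 2 * sum_i |c_i|  >=  lambda_max - lambda_0  >=  lambda_1 - lambda_0.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
coeffs = [1.0, 0.5, 0.5]
paulis = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]
H = sum(c * P for c, P in zip(coeffs, paulis))

beta = 2 * sum(abs(c) for c in coeffs)
evals = np.linalg.eigvalsh(H)
print("beta =", beta, ">= lambda_max - lambda_0 =", evals[-1] - evals[0])
```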

Mathematically, we could also just set $\beta=10^{120}$ and call it a day, but a very large $\beta$ is a problem for the optimisation algorithm: the larger $\beta$ is, the more the penalty term dominates the landscape, and the less relative incentive there is to prefer $\left|\lambda_1\right\rangle$ over the other eigenstates associated with larger eigenvalues, which is why choosing $\beta$ wisely is important.

Tristan Nemoz