
In "Quantum simulation of chemistry with sublinear scaling in basis size", Ryan Babbush and coauthors from the Google Quantum team argue, when discussing Quantum Phase Estimation in first quantization, that

> The reason for our greatly increased efficiency is that the complexity scales like the maximum possible energy representable in the basis.

But they do not explain why this is the case, nor point to a reference where it is worked out. Does anyone know why this is true?

Pablo

1 Answer


The answer is in Eqn. 8 of the referenced paper, and its surrounding paragraph:

> The overall complexity [of simulating $e^{-i(A+B)t}$ using the interaction picture formalism and the Linear Combination of Unitaries (LCU) method] depends on the value of $\lambda$, which is the sum of the weights of the unitaries when expressing $B$ as a sum of unitaries. To simulate within error $\epsilon$ the number of segments used is $\mathcal{O}(\lambda t)$, and [a cutoff energy introduces additional poly-logarithmic factors]. The complexity in terms of LCU applications of $B$ and evolutions $e^{-iA\tau}$ is therefore ... $\widetilde{\mathcal{O}}(\lambda t)$.
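
To spell out where $\lambda$ comes from (this is standard LCU bookkeeping rather than anything specific to this paper), one writes $B$ as a weighted sum of unitaries,

$$B = \sum_j \alpha_j U_j, \qquad \alpha_j > 0, \qquad \lambda = \sum_j \alpha_j,$$

and the interaction-picture (truncated Dyson series) algorithm splits the total evolution time $t$ into $\mathcal{O}(\lambda t)$ segments, each of which applies $B$ through this decomposition. Since $\lVert B \rVert \leq \lambda$, the parameter $\lambda$ upper-bounds the largest energy scale of $B$ in the chosen basis, which is the sense in which the complexity "scales like the maximum possible energy representable in the basis".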

After Eqn. 13 they further argue that $\lambda$ in the interaction picture is bounded by $\mathcal{O}(\eta^{5/3} N^{1/3})$, where $\eta$ is the number of electrons and $N$ the number of basis orbitals. Around Eqn. 18 they argue that the effective cost of implementing each step is $\widetilde{\mathcal{O}}(\eta)$. This puts their overall complexity at $\widetilde{\mathcal{O}}(N^{1/3}\eta^{8/3}t)$, as claimed in the abstract.
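
Putting those two bounds together is just a back-of-the-envelope multiplication of the quantities quoted above:

$$\underbrace{\mathcal{O}(\lambda t)}_{\text{segments}} \times \underbrace{\widetilde{\mathcal{O}}(\eta)}_{\text{cost per segment}} = \mathcal{O}\big(\eta^{5/3} N^{1/3} t\big) \times \widetilde{\mathcal{O}}(\eta) = \widetilde{\mathcal{O}}\big(N^{1/3}\eta^{8/3} t\big),$$

which is sublinear in the basis size $N$, hence the title of the paper.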

jecado