
How do I perform Trotterization of a time-dependent operator (such as a Hamiltonian) on a gate-based quantum computer? I've seen examples for time-independent Hamiltonians, but I'd like to know what the theory is for time-dependent ones.

bjail66

1 Answer


There's really no difference. Imagine I'm trying to simulate a Hamiltonian
$$
H(t)=f(t)H_1+g(t)H_2
$$
from time $t=0$ to $T$. I'm going to break this down into $N$ little time steps of size $\delta=T/N$. It's up to you to determine what value of $N$ is large enough to give reasonable simulation accuracy.

At each step $n$ (i.e. $t$ between $\delta(n-1)$ and $\delta n$), you simulate an evolution in which the Hamiltonian is approximated by its constant value at the midpoint of the step, $H(\delta(n-\frac12))$:
$$
e^{-iH(\delta(n-\frac12))\delta}.
$$
This you further approximate by the symmetric (second-order Trotter) splitting
$$
e^{-i H_1 f(\delta(n-\frac12))\delta/2}\,e^{-i H_2 g(\delta(n-\frac12))\delta}\,e^{-i H_1 f(\delta(n-\frac12))\delta/2},
$$
whose per-step error is $O(\delta^3)$. These steps build up into a long gate sequence, just as in the time-independent case, except that the different time steps have different weights on the different terms.
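To make the recipe concrete, here is a minimal numerical sketch of the step sequence above. The single-qubit example with $H_1=X$, $H_2=Z$, $f(t)=\cos t$, $g(t)=\sin t$ is purely an illustrative assumption, not something fixed by the answer; and on actual gate-based hardware each exponential factor would be compiled into rotation gates rather than computed with `expm`.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative choices (assumptions for this sketch, not part of the answer):
X = np.array([[0, 1], [1, 0]], dtype=complex)   # H_1
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # H_2
f = lambda t: np.cos(t)
g = lambda t: np.sin(t)

def trotterized_evolution(H1, H2, f, g, T, N):
    """Second-order Trotter approximation of U(T) for H(t) = f(t) H1 + g(t) H2.

    Each step evaluates f and g at the midpoint t = delta*(n - 1/2),
    as described in the answer.
    """
    delta = T / N
    U = np.eye(H1.shape[0], dtype=complex)
    for n in range(1, N + 1):
        t_mid = delta * (n - 0.5)
        A = expm(-1j * H1 * f(t_mid) * delta / 2)
        B = expm(-1j * H2 * g(t_mid) * delta)
        step = A @ B @ A      # symmetric (Strang) splitting for one time step
        U = step @ U          # later time steps multiply on the left
    return U

U_approx = trotterized_evolution(X, Z, f, g, T=1.0, N=100)
print(np.round(U_approx, 4))
```

Increasing $N$ should make `U_approx` converge toward the exact time-ordered evolution, which is one way to check that your chosen $N$ is large enough.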

DaftWullie