I want to play with the following problem. Given a unitary $U$ with parameters $\theta$, I want to optimize these parameters to make $U$ match some target unitary $V$, i.e. to find $\operatorname{argmin}_{\theta}||U(\theta)-V||$. This is in the spirit of the quantum compiling problem, but here I want to build the parametrized circuit on my own.

Ideally I want to

  1. Build unitary $U(\theta)$ as a parametrized quantum circuit. This means that I do not want to explicitly construct matrix representations of gates etc.
  2. Use some well-developed framework for machine learning to minimize the loss function.

I am most familiar with qiskit and I can do (1) there rather easily using parametrized circuits. However, I do not know if it is possible to convert a parametrized circuit into a matrix form so that it could be fed to automatic differentiation software.

If the problem is not easily solvable in qiskit I am willing to consider any other software stack with the main requirement being the ease of use.

Nikita Nemkov

4 Answers


So this answer depends a bit on the size of the unitary, but if you're building a compiler, chances are it acts on only a few qubits.

A simple approach would be to define a function that evaluates the norm and then use any classical optimization routine to find the parameters. For instance, using Qiskit, you could do

import numpy as np
from scipy.optimize import minimize
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit.quantum_info import Operator

a, b = Parameter('a'), Parameter('b')

# this is a very bad universal 2q gate, but just a toy example
u = QuantumCircuit(2)
u.rx(a, 0)
u.rx(b, 1)

# target circuit
v = QuantumCircuit(2)
v.x(0)
v.x(0)
V = Operator(v).data  # matrix representation of v

def loss(x):
    U = Operator(u.bind_parameters(x)).data
    # here use the norm of your choosing
    # (L2 is generally not so good since it's phase dependent)
    diff = np.linalg.norm(U - V)
    return diff

initial_guess = np.array([1, 1])
result = minimize(loss, initial_guess, method='cobyla')
print(result)
print('Final circuit')
print(u.bind_parameters(result.x).draw())

Which returns

     fun: 9.170313512083841e-05
   maxcv: 0.0
 message: 'Optimization terminated successfully.'
    nfev: 39
  status: 1
 success: True
       x: array([ 6.22450966e-05, -6.73425047e-05])
Final circuit
     ┌─────────────────────────┐
q_0: ┤ Rx(6.22450965882598e-5) ├─
     ├─────────────────────────┴┐
q_1: ┤ Rx(-6.73425047399226e-5) ├
     └──────────────────────────┘
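As the comment in loss notes, the plain L2 norm is sensitive to a global phase. A phase-invariant alternative based on the Hilbert–Schmidt inner product can be dropped into the same minimize loop; this is a NumPy sketch (the function name is my own, not part of any library):

```python
import numpy as np

def phase_invariant_loss(U, V):
    # 1 - |Tr(V^dag U)|^2 / d^2: zero iff U equals V up to a global phase
    d = V.shape[0]
    return 1 - abs(np.trace(V.conj().T @ U)) ** 2 / d ** 2

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
print(phase_invariant_loss(1j * I2, I2))  # 0.0 despite the global phase
print(phase_invariant_loss(X, I2))        # 1.0, maximally far
```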
Cryoris

If you're willing to modify the approach slightly, there are many options available to you. Namely, by changing your optimization problem from $\text{argmin}_\theta ||U(\theta) - V||$ to $$ \text{argmax}_\theta \left|\text{Tr}(V^\dagger U(\theta))\right| $$

then you can implement the "Hadamard test" of Fig. 4 in (Khatri, 2018). This new cost is useful because it can be sampled directly from the output of a quantum computer (your current cost function probably can't be, at least not efficiently). Such a cost function can be optimized using a limited form of automatic differentiation in Tensorflow Quantum or Pennylane (I am not a qiskit user but I assume it is possible with that library as well).

The above libraries work by packaging a circuit that computes the cost function $C(\theta)$ and some subroutine to compute the derivatives $\partial_\theta C$ into a self-contained computational node. Those derivatives might be computed using a parameter shift rule, or finite difference. But the point is that since the node returns both $C$ and $\partial_\theta C$, it can be chained together with other computational nodes to support gradient (back)propagation via the chain rule.

However, it sounds like you might be asking how to perform automatic differentiation on the cost function by converting the circuit itself into a series of computational nodes, each capable of returning a function and its derivative. This kind of automatic differentiation could hypothetically be done in quantum circuit simulators, but it is fundamentally unphysical and any optimization of that kind could never be performed on hardware. The two libraries I mentioned above (again, not sure about qiskit) do not support this kind of task.

forky40

In this context it is more natural to use Pennylane. Here is a basic tutorial with an example of state rotation using unitary gates:

Qubit Rotation

Farhad

Have you seen BQSKit? At a high level it's a portable and powerful compiler, but under the hood it does exactly what you are after. There it is called instantiation: you build a circuit similar to how you would in qiskit, then just call circuit.instantiate(V). You can read more about instantiation in this paper.

edyounis