2

Encoding classical information into a quantum computer is a bottleneck of quantum machine learning. I want to know which state-preparation algorithm currently has the best complexity. The scheme is limited to QRAM-like approaches, and the algorithm should be usable for amplitude encoding.

Saul_better
  • 365
  • 1
  • 8

1 Answer

3

While "best" or "state of the art" is subjective and evolves quickly (especially in quantum machine learning), a recent protocol that addresses (what I think is) the challenge you mention is presented here; I will give a brief summary of it below.

Overview

The key idea is to use controlled rotations to prepare the desired quantum state. The protocol's efficiency is characterized by the number of required gates and the success probability of the state preparation.
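The paper's specific circuit is not reproduced here, but the general idea of preparing amplitudes with a tree of controlled rotations can be sketched with a small NumPy simulation. This is the textbook binary-tree $R_y$-angle construction (function names are my own), not the paper's exact protocol:

```python
import numpy as np

def rotation_angles(c):
    """Binary tree of Ry angles that amplitude-encodes nonnegative data c
    (length 2**n). Each node's angle splits probability mass between its
    two branches."""
    c = np.asarray(c, dtype=float)
    n = int(np.log2(len(c)))
    probs = c**2 / np.sum(c**2)
    angles = []
    for level in range(n):
        block = len(c) >> (level + 1)           # leaves under each branch
        branch = probs.reshape(2 ** (level + 1), block).sum(axis=1)
        theta = []
        for j in range(2 ** level):
            tot = branch[2 * j] + branch[2 * j + 1]
            frac = branch[2 * j] / tot if tot > 0 else 1.0
            theta.append(2 * np.arccos(np.clip(np.sqrt(frac), 0.0, 1.0)))
        angles.append(theta)
    return angles

def build_state(angles, n):
    """Apply the controlled-Ry tree to |0...0> and return the statevector."""
    amps = np.array([1.0])
    for level in range(n):
        new = np.zeros(2 * len(amps))
        for j, a in enumerate(amps):
            th = angles[level][j]
            new[2 * j] = a * np.cos(th / 2)      # branch with control bit 0
            new[2 * j + 1] = a * np.sin(th / 2)  # branch with control bit 1
        amps = new
    return amps

state = build_state(rotation_angles([1, 2, 3, 4]), 2)
print(state)  # proportional to [1, 2, 3, 4]
```

This deterministic construction needs one multi-controlled rotation per tree node, which is what makes naive amplitude encoding expensive; the protocol summarized below trades determinism for a success probability to cut that cost.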

The final state of the register following the protocol is given by: $$ |\Psi\rangle = \frac{1}{\mathcal{N}} \left\{\frac{c_0}{R}|0\rangle + \frac{c_1}{R}|1\rangle + \dots + \frac{c_{2^n-1}}{R}|2^n-1\rangle\right\} +\sum_k O\left(\left(\frac{c_k}{R}\right)^3\right), $$ where $\mathcal{N}$ is a normalization factor, $R$ is a freely set parameter, and $c_k$ is the data to be encoded. This is the desired state to the lowest order.
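To lowest order the encoded amplitude is linear in $c_k$, with a cubic correction controlled by $R$. A minimal numerical sketch, assuming the rotation imprints $\sin(c_k/R)$ (my assumption, chosen because its Taylor series matches the $O((c_k/R)^3)$ correction; the paper's exact rotation function may differ), checks that the deviation from linearity is bounded by $(c_k/R)^3/6$ and shrinks as $R$ grows:

```python
import numpy as np

c = np.array([0.3, 0.7, 1.0, 0.5])  # toy data to encode

for R in (2.0, 4.0, 8.0):
    approx = np.sin(c / R)              # assumed imprinted amplitude
    linear = c / R                      # desired lowest-order amplitude
    err = np.max(np.abs(approx - linear))
    bound = np.max((c / R) ** 3) / 6    # |sin x - x| <= x^3 / 6 for x >= 0
    print(f"R={R}: max error {err:.2e} (bound {bound:.2e})")
```

Larger $R$ suppresses the error but, as discussed next, also reduces the success probability, which is the protocol's central trade-off.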

For dense datasets, where a significant fraction of the data is of the same order as the maximum value, the protocol is highly efficient. The success probability of the protocol is: $$ P_{\text{success}} \propto \rho\epsilon $$ for a maximum acceptable relative error of $\epsilon$ in any individual value of the data, where $\rho$ is the density measure given by $$ \rho = \frac{1}{2^n} \sum_k \left(\frac{c_k}{c_{\text{max}}}\right)^2, $$ i.e. $\rho = 1$ when all $c_k$ are equal and $\rho \ll 1$ for sparse data. With the time needed to implement the steps of the protocol scaling as $\propto \log(n)$ (for $n$-qubit controlled rotations), the time needed to successfully prepare the encoded state scales as: $$ T \propto \frac{\log(n)}{\rho\epsilon}. $$
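The density measure $\rho$ is straightforward to compute, and evaluating it on a dense versus a sparse dataset makes the scaling concrete (the proportionality constant `t0` below is arbitrary, since only scalings are given):

```python
import numpy as np

def density(c):
    """rho = (1/2^n) * sum_k (c_k / c_max)^2, the density measure."""
    c = np.asarray(c, dtype=float)
    return np.mean((c / np.max(np.abs(c))) ** 2)

def expected_time(n, rho, eps, t0=1.0):
    """T proportional to log(n) / (rho * eps); t0 is an arbitrary constant."""
    return t0 * np.log(n) / (rho * eps)

dense = np.ones(16)                      # all amplitudes equal
sparse = np.zeros(16); sparse[0] = 1.0   # a single nonzero entry

print(density(dense))   # 1.0
print(density(sparse))  # 0.0625
print(expected_time(16, density(dense), 0.01))   # dense: fast
print(expected_time(16, density(sparse), 0.01))  # sparse: 16x slower
```

The sparse case pays a factor $1/\rho = 16$ in expected preparation time, illustrating why the protocol targets dense datasets.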

Why does this protocol stand out?

  • This protocol is efficient for dense datasets with a vast number of elements. Traditional approaches struggle with such datasets because they process each of the $2^n$ data elements individually, leading to an exponential increase in required resources with the number of qubits. In contrast, this protocol avoids such exponential resource scaling, making it more efficient for these datasets. Sparse datasets, however, result in a lower success probability; for dense datasets the protocol offers a promising approach.
  • The protocol's implementation requires additional ancilla qubits and quantum gates. However, their number scales linearly with the number of qubits in the processing register, which means it scales logarithmically with the $N$ real numbers being encoded.
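The ancilla count above is easy to make concrete: encoding $N = 2^n$ real numbers needs an $n$-qubit register, and the ancilla overhead is linear in $n$. A rough back-of-the-envelope helper (the linear constant `ancillas_per_qubit` is my placeholder, not the paper's exact count):

```python
import math

def resource_estimate(N, ancillas_per_qubit=1):
    """Register size and ancilla overhead for encoding N = 2**n amplitudes.
    The linear constant is an assumption; see the paper for exact counts."""
    n = math.ceil(math.log2(N))          # qubits in the processing register
    return {"register_qubits": n,
            "ancilla_qubits": ancillas_per_qubit * n}

print(resource_estimate(1024))  # {'register_qubits': 10, 'ancilla_qubits': 10}
```

Doubling the dataset from $N$ to $2N$ adds only one register qubit and a constant number of ancillas, which is the logarithmic scaling the answer refers to.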

This is just a brief overview, and I recommend reading the paper for a more comprehensive understanding. Furthermore, new techniques and protocols are discovered regularly, and what is considered "best" today might be surpassed by newer methodologies in the future.

banercat
  • 865
  • 5
  • 19