The main disadvantage of the HHL algorithm for solving $A|x\rangle = |b\rangle$ is that the exponential speed-up is achieved only when we are interested in the value $\langle x|M|x\rangle$, where $M$ is a matrix. If we want to know the solution $|x\rangle$ itself, we need to do a measurement, which cancels out the speed-up completely.
In the article *Experimental realization of quantum algorithm for solving linear systems of equations*, a method is proposed that encodes the matrix $A$ into a Hamiltonian and uses adiabatic quantum computation to solve the linear system. Unfortunately, the authors do not discuss the complexity of the algorithm in terms of the dimensions of the matrix $A$, only in terms of its condition number.
This article inspired me to translate the problem $A|x\rangle = |b\rangle$ into a binary optimization task:
- each row of $A|x\rangle = |b\rangle$ reads $\sum_{j=1}^{n}a_{ij}x_j = b_i$
- hence the solution of $A|x\rangle = |b\rangle$ satisfies $f(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{n}a_{ij}x_j - b_i\right)^2=0$
- minimizing $f(x)$, we arrive at the solution
- since the variables $x_i \in \mathbb{R}$ (I do not assume a complex matrix $A$ or vector $|b\rangle$), we have to express them in the binary number system, e.g. $x_i = \sum_{k=-m_1}^{m_2}2^k x_{ik}$, where $m_1, m_2 \in \mathbb{N}$ specify the accuracy and $x_{ik}$ is the $k$th bit in the binary representation of the variable $x_i$
In the end, we arrive at the binary optimization task $$ \sum_{i=1}^{n}\left(\sum_{j=1}^{n}a_{ij}\sum_{k=-m_1}^{m_2}2^k x_{jk} - b_i\right)^2 \rightarrow \min $$
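To make the construction concrete, here is a minimal NumPy sketch of how the QUBO matrix for this objective can be assembled (the helper name `build_qubo` and the flattening of the bits $x_{jk}$ into one index are my own choices, and the sketch assumes the sign-free formulation above):

```python
# Minimal sketch: assemble the QUBO matrix Q for the objective
# f(z) = sum_i (sum_j a_ij sum_k 2^k x_{jk} - b_i)^2 = z^T Q z + const,
# where z is the flattened vector of bits x_{jk}.
import numpy as np

def build_qubo(A, b, m1=2, m2=2):
    n = A.shape[0]
    n_bits = m1 + m2 + 1                      # bits per variable
    powers = 2.0 ** np.arange(-m1, m2 + 1)    # weights 2^k, k = -m1..m2
    # C[i, :] holds the coefficient of each bit in row i: bit x_{jk}
    # enters with coefficient a_ij * 2^k.
    C = np.einsum('ij,k->ijk', A, powers).reshape(n, n * n_bits)
    Q = C.T @ C                               # quadratic part of sum_i (C_i z - b_i)^2
    # The linear part -2 b^T C goes on the diagonal, since z_q^2 = z_q for bits;
    # the constant offset sum_i b_i^2 is dropped.
    Q[np.diag_indices_from(Q)] += -2.0 * (b @ C)
    return Q

# Tiny example: [[1, 0], [0, 2]] x = (1, 2), expecting x = (1, 1).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
Q = build_qubo(A, b)
```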
Such a minimization can be done with algorithms like QAOA or VQE, or on a quantum annealing machine. However, in this case we have no proof (only empirical evidence) that such algorithms bring about any advantage over classical computing.
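For instance, the QUBO above can be validated classically; a sketch using D-Wave's `dimod` package (assuming the matrix `Q` from the previous snippet; `ExactSolver` simply enumerates all bit strings, so it only checks the encoding, not any speed-up):

```python
# Sketch: minimize the QUBO from the previous snippet with dimod
# (pip install dimod) and decode the bits back into x.
import dimod
import numpy as np

N = len(Q)
# Upper-triangular QUBO dict matching f(z) = z^T Q z (off-diagonals doubled).
Qdict = {(i, j): (Q[i, j] if i == j else 2.0 * Q[i, j])
         for i in range(N) for j in range(i, N)}
best = dimod.ExactSolver().sample_qubo(Qdict).first.sample
bits = np.array([best[q] for q in range(N)], dtype=float)
x = bits.reshape(2, 5) @ (2.0 ** np.arange(-2, 3))  # 5 bits per variable here
print(x)                                            # expect approximately (1, 1)
```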
However, a QUBO solver based on Grover's algorithm is proposed in *Grover Adaptive Search for Constrained Polynomial Binary Optimization*. Since Grover's algorithm provides a quadratic speed-up, we are now better off than in the case of quantum annealers (or QAOA or VQE).
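As an illustration, a GAS solver along these lines ships in `qiskit-optimization` as `GroverOptimizer`; a hedged sketch on a toy objective (the constructor arguments and primitives have changed between package versions, so treat the API details as an assumption):

```python
# Hedged sketch: Grover Adaptive Search on a tiny QUBO via
# qiskit-optimization's GroverOptimizer; the API shown reflects one
# version of the package and may need adjusting.
from qiskit.primitives import Sampler
from qiskit_optimization import QuadraticProgram
from qiskit_optimization.algorithms import GroverOptimizer

qp = QuadraticProgram()
for q in range(3):                      # keep it tiny: 3 bits only
    qp.binary_var(name=f"z{q}")
# Load the linear/quadratic coefficients of the objective, e.g. from Q above:
qp.minimize(linear={"z0": -1.0, "z2": 2.0},
            quadratic={("z0", "z1"): 2.0, ("z1", "z2"): -2.0})

gas = GroverOptimizer(num_value_qubits=4, num_iterations=8, sampler=Sampler())
result = gas.solve(qp)
print(result.x, result.fval)
```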
Additionally, assuming that we are able to construct a qRAM, we can employ the improved Grover search introduced in *The Effective Solving of the Tasks from NP by a Quantum Computer*, which reaches an exponential speed-up.
In the end, we would be able to solve a linear system with the exponential speed-up offered by HHL while at the same time getting rid of HHL's disadvantages.
Does this idea make sense, or is there some flaw? I would appreciate any comments and ideas.
EDIT: I did some further research on the above. I realized that my first proposal ignored signs, so I added a sign binary variable (qubit) for each variable: the binary representation of each variable in the linear system is now multiplied by the term $(2s_i - 1)$, which is $-1$ for $s_i = 0$ and $1$ for $s_i = 1$. As a result, we are able to solve any linear system over the real numbers. However, introducing new binary variables for the signs complicates the optimization. Because of the squares in the objective function, we get quartic terms like $s_i s_j x_{ik} x_{jl}$. This means that the optimization is no longer a QUBO but a HUBO (higher-order unconstrained binary optimization).
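A small sketch of how these higher-order terms arise, using a plain-Python representation of multilinear polynomials over bits (all helper names are mine; a monomial is just a set of variables because $z^2 = z$):

```python
# Sketch of the sign-extended encoding x_j = (2 s_j - 1) * sum_k 2^k x_{jk}.
# Polynomials are stored as {frozenset_of_bit_names: coefficient}; squaring a
# row residual then produces degree-4 monomials like {s_i, s_j, x_ik, x_jl}.
from collections import defaultdict
from itertools import product

def poly_mul(p, q):
    """Multiply multilinear polynomials over bits (z^2 = z)."""
    out = defaultdict(float)
    for (ma, ca), (mb, cb) in product(p.items(), q.items()):
        out[ma | mb] += ca * cb
    return dict(out)

def row_residual(a_row, b_i, m1=1, m2=1):
    """Polynomial for sum_j a_ij (2 s_j - 1) sum_k 2^k x_{jk} - b_i."""
    p = defaultdict(float)
    p[frozenset()] = -b_i
    for j, a in enumerate(a_row):
        for k in range(-m1, m2 + 1):
            w = a * 2.0 ** k
            p[frozenset({f"s{j}", f"x{j}_{k}"})] += 2.0 * w  # 2 s_j x_{jk}
            p[frozenset({f"x{j}_{k}"})] -= w                 # -x_{jk}
    return dict(p)

# f = sum_i residual_i^2 for a toy 2x2 system; note the degree-4 monomials.
f = defaultdict(float)
for a_row, b_i in [([1.0, 1.0], 1.0), ([1.0, -1.0], 3.0)]:
    r = row_residual(a_row, b_i)
    for mono, coeff in poly_mul(r, r).items():
        f[mono] += coeff
print(max(len(mono) for mono in f))  # 4 -> HUBO, not QUBO
```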
Fortunately, this can be overcome with the method discussed, e.g., in the paper *Towards Prediction of Financial Crashes with a D-Wave Quantum Computer*, which allows converting a many-body interaction Hamiltonian into an Ising one at the cost of ancilla qubits. In the end we are left with a QUBO task, but with more qubits: signs and ancillas.
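For illustration, one standard reduction of this kind is Rosenberg's substitution: a product $xy$ of two bits is replaced by an ancilla $a$, together with the penalty $P(xy - 2xa - 2ya + 3a)$, which is zero iff $a = xy$ and at least $P$ otherwise. A sketch operating on the polynomial representation from the previous snippet (helper names are mine; `dimod.make_quadratic` offers a library alternative):

```python
# Sketch of Rosenberg quadratization: repeatedly replace a pair of bits
# (x, y) inside monomials of degree > 2 by an ancilla a, adding the penalty
# P*(x*y - 2*x*a - 2*y*a + 3*a) that enforces a = x*y at the minimum.
from collections import defaultdict

def quadratize(poly, strength):
    poly = defaultdict(float, poly)
    ancilla_of, n_anc = {}, 0
    while True:
        high = [m for m in poly if len(m) > 2]
        if not high:
            return dict(poly)            # all monomials now have degree <= 2
        x, y = sorted(high[0])[:2]       # pick a pair inside one HUBO monomial
        if (x, y) not in ancilla_of:
            a = f"anc{n_anc}"; n_anc += 1
            ancilla_of[(x, y)] = a
            # penalty terms enforcing a = x*y
            for mono, c in [({x, y}, 1.0), ({x, a}, -2.0), ({y, a}, -2.0), ({a}, 3.0)]:
                poly[frozenset(mono)] += strength * c
        a = ancilla_of[(x, y)]
        # substitute x*y -> a in every monomial of degree > 2 containing both
        for m in [m for m in poly if {x, y} <= m and len(m) > 2]:
            poly[(m - {x, y}) | frozenset({a})] += poly.pop(m)

# Usage: quad = quadratize(f, strength=10.0) with f from the HUBO sketch;
# the result is a QUBO again, at the cost of the ancilla bits anc0, anc1, ...
```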