3

Combinatorial optimization is often mentioned as a potential application of quantum computers. One of the main paradigms here is to reduce a combinatorial optimization problem to an Ising problem, which is mathematically equivalent to QUBO (quadratic unconstrained binary optimization). A QUBO can then be solved by a variety of methods, such as quantum annealing or variational algorithms.

There are well-known obstacles to building and scaling quantum computers. But here I want to discuss a different potential problem with this approach. Assume you have a quantum computer that is large and reliable enough to find approximate solutions to QUBO problems. Will that actually be useful for real-world problems?

While this is technically not a quantum computing question, I believe this is the right place to ask. For one, the entire business of reducing different combinatorial problems to QUBO seems to be motivated by quantum computing alone (and perhaps classical analog Ising solvers). This is my first red flag, and it raises the question:

  1. Is it ever a good idea to reformulate your combinatorial optimization problem as a QUBO? (Except when you absolutely have to, because you work in QC?)

I have not found any references where people working with standard digital computers would, for some reason, put their logistics or graph problem into QUBO form. From my own limited experience, the way QUBO deals with constraints is very inefficient.

For instance, assume you are solving some logistics problem and need to parameterize a path between $n$ locations. One standard way to do this is to introduce boolean variables $x_{t,i}$, with $x_{t,i}=1$ meaning that you visit location number $i$ at time $t$. Multiple visits to the same location, or being in different places at the same time, are not allowed. This can be enforced by the constraints $\sum_i x_{t,i}=\sum_{t} x_{t,i}=1$. There is a standard way to incorporate these constraints into the QUBO as penalty terms (a sketch is shown below), but there is a problem.
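For concreteness, here is a minimal sketch of that penalty construction in Python/NumPy. The distance matrix `dist` and the penalty weight `penalty` are hypothetical placeholders; choosing the penalty weight relative to the travel costs is itself a nontrivial part of the construction.

```python
import numpy as np

def tsp_qubo(dist, penalty=10.0):
    """Build a QUBO matrix for a path over n locations.

    Variables x[t, i] = 1 iff location i is visited at time t, flattened to a
    single index q = t * n + i.  The one-hot constraints sum_i x[t, i] = 1 and
    sum_t x[t, i] = 1 are added as quadratic penalties (sum - 1)^2, each
    weighted by `penalty` (constant offsets are dropped).
    """
    n = len(dist)
    Q = np.zeros((n * n, n * n))
    idx = lambda t, i: t * n + i

    # Objective: travel cost between consecutive time steps.
    for t in range(n - 1):
        for i in range(n):
            for j in range(n):
                Q[idx(t, i), idx(t + 1, j)] += dist[i][j]

    # Penalty: each time step hosts exactly one location.
    for t in range(n):
        for i in range(n):
            Q[idx(t, i), idx(t, i)] -= penalty          # linear part of (sum - 1)^2, using x^2 = x
            for j in range(i + 1, n):
                Q[idx(t, i), idx(t, j)] += 2 * penalty  # pairwise part

    # Penalty: each location is visited exactly once.
    for i in range(n):
        for t in range(n):
            Q[idx(t, i), idx(t, i)] -= penalty
            for s in range(t + 1, n):
                Q[idx(t, i), idx(s, i)] += 2 * penalty

    return Q  # energy of a bitstring x is x^T Q x (up to a dropped constant)
```

Note that this already uses $n^2$ binary variables for an $n$-location path.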

The number of valid solutions is $n!\sim 2^{n\log n}$, while the number of possible QUBO configurations is $2^{n^2}$. So the fraction of feasible configurations in the QUBO formulation is exponentially small (already for $n=10$ it is roughly $10!/2^{100}\approx 3\times 10^{-24}$). I believe that a typical QUBO landscape is trapping, and that most local minima do not correspond to feasible solutions. Is there any evidence that this is not a dealbreaker?

Almost all benchmarks I have seen were for unconstrained problems, like MaxCut. There, any solution is a feasible solution, and techniques like quantum annealing make sense to me. However, what about generic real-world problems with constraints?

  2. Are there any studies discussing the difficulty of finding feasible solutions for QUBO reformulations of large-scale constrained problems?

If QUBO reformulation is indeed a poor choice for most constrained problems, perhaps there are real-world problems which are naturally unconstrained? Here my question is mostly about commercial applications. Yes, QUBO can model spin glasses, but that is physics. Yes, MaxCut and other graph problems are theoretically useful, but in practice these graph problems can be solved very efficiently anyway.

  3. Are there industrially relevant problems that are QUBO-native and that classical solvers struggle with?

Thanks!

Nikita Nemkov

3 Answers

4

There are several important classes of problems that are well-suited to QUBO, such as quadratic assignment problems, Max-Cut, and graph coloring, to name a few. For these types of problems, it is reasonable to expect that certain quantum approaches might offer an advantage over classical methods.

However, the largest and most widely studied class of problems, both in industry and in the mathematical theory of optimization, is that of linear problems with linear constraints. These problems have been extensively researched, and a rich and elegant theory underpins them. For instance, the Simplex algorithm, regarded as one of the most important theoretical and practical algorithms of the twentieth century, is designed for linear problems with linear constraints. This algorithm, along with its subsequent commercial refinements, is exceptionally hard to surpass in general.

Given the existence of Simplex-like algorithms, it is difficult to justify the use of heuristics, especially those that require conversion to QUBO. The only situation in which optimization specialists might opt for a heuristic is when the problem is extremely large and a feasible solution is needed quickly. And even then, this would be a custom-made heuristic that exploits the structure of the problem.

In my opinion, converting linear problems with constraints into QUBOs is a thing of the past and is mostly popular among physicists who are more focused on the quantum aspects than on the mathematical theory of optimization or practical considerations.

  1. The answer to your first question is no, unless the problem is naturally a QUBO. A rigorous explanation of why this approach is problematic can be found in Lagrangian Duality in Quantum Optimization: Overcoming QUBO Limitations for Constrained Problems. If quantum computing is necessary, it is better to use Lagrangian duality theory (formulated by Von Neumann and Dantzig), which is simpler and more efficient; a toy sketch of the relaxation idea is given after this list.

  2. I’m not aware of any studies discussing the difficulties of solving problems after converting them to QUBO. Such a study likely wouldn’t make much sense in the optimization community: there, QUBOs are not commonly used, and no one would convert a problem into QUBO unless it is naturally suited to that formalism. A partial answer to your question can also be found in the above-mentioned paper.
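To illustrate what constraint handling via Lagrangian duality looks like, here is the textbook relaxation/subgradient idea on hypothetical toy data (this is not the specific method of the paper cited above): instead of adding a fixed quadratic penalty, the constraint is priced into the objective by a multiplier that is updated iteratively.

```python
import itertools
import numpy as np

# Toy problem (hypothetical data): minimize c @ x over binary x subject to a @ x == b.
# The constraint is relaxed into the objective with a multiplier lam (Lagrangian
# relaxation) instead of being turned into a quadratic QUBO penalty.
c = np.array([3.0, -1.0, 2.0, -2.0])
a = np.array([1.0, 1.0, 1.0, 1.0])
b = 1.0                                   # "pick exactly one item"

lam, step = 0.0, 0.5
for _ in range(50):
    # Inner step: minimize the Lagrangian over binary x. Brute force on this toy size;
    # in practice this subproblem is what a classical or quantum solver would handle.
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=len(c)):
        x = np.array(bits)
        val = c @ x + lam * (a @ x - b)
        if val < best_val:
            best_x, best_val = x, val
    # Outer step: subgradient ascent on the dual variable.
    lam += step * (a @ best_x - b)

print("x =", best_x, "constraint residual =", a @ best_x - b, "lambda =", lam)
```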

To broaden the scope of your question, it’s worth noting that quantum optimization doesn’t have to involve QUBO, Hamiltonians, or QAOA-like methods. In the distant future, we might adopt approaches similar to Oracle-based methods, such as running Simplex-like algorithms in superposition or implementing objective functions with constraints directly using quantum arithmetic circuits.

MonteNero
2

I think this is a general problem we're struggling to deal with. We take a problem that exists as a MILP (like yours), which has a lot of rich structure that can be exploited by classical solvers with some success (nothing great, since the problem is NP-complete).

We then throw away all of that structure to turn the problem into a QUBO, in the hope that the potential speedup from a quantum computer outweighs the speedups we gained from exploiting the structure. Until we get quantum computers big enough to let us fully understand how these speedups scale, we simply don't know.
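As a point of contrast, here is a minimal sketch (hypothetical cost matrix, assuming SciPy >= 1.9 for `scipy.optimize.milp`) of keeping the one-hot constraints from the question explicit as linear equalities for a MILP solver, rather than folding them into penalties. For readability it only uses an assignment-style linear cost; a full routing problem would need additional sequencing constraints.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# The one-hot constraints sum_i x[t, i] = 1 and sum_t x[t, i] = 1 stated as
# explicit linear equalities, with variable x[t, i] living at flat index t*n + i.
n = 5
rng = np.random.default_rng(1)
cost = rng.uniform(1, 10, size=(n, n))     # hypothetical assignment costs
c = cost.ravel()

A = np.zeros((2 * n, n * n))
for t in range(n):
    A[t, t * n:(t + 1) * n] = 1            # row t: sum_i x[t, i] = 1
for i in range(n):
    A[n + i, i::n] = 1                     # row n+i: sum_t x[t, i] = 1

res = milp(c=c,
           constraints=LinearConstraint(A, lb=1, ub=1),
           integrality=np.ones(n * n),
           bounds=Bounds(0, 1))
print(res.x.reshape(n, n).round())         # a feasible 0/1 assignment, guaranteed by the solver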

Coincidentally, some quantum algorithms have been designed to address your problem of constraints being ignored and the search space blowing up. I've got a slightly different one going on arXiv next week; I'll link to it here once it's up, and it deals with the MILP you mentioned.

In theory you can convert your QUBO instance into a Max-Cut instance, and then work out how close to the ground state you have to be to get a feasible solution. My guess is that you're right, and you will need to be within $\frac{1}{\mathrm{poly}(n)}$, or worse $\frac{1}{\exp(n)}$, of the ground state of the Max-Cut instance, which is probably not attainable for the larger problems.
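For reference, the first step of that conversion, from a QUBO matrix to Ising couplings and fields via $x_i = (1 - z_i)/2$, is completely mechanical (a minimal sketch below; getting a pure Max-Cut instance additionally requires one auxiliary spin to absorb the local fields).

```python
import numpy as np

def qubo_to_ising(Q):
    """Map x^T Q x with x in {0,1}^n to
    E(z) = sum_{i<j} J[i,j] z_i z_j + sum_i h[i] z_i + offset,  z in {-1,+1}^n,
    via the substitution x_i = (1 - z_i) / 2."""
    Q = np.asarray(Q, dtype=float)
    S = (Q + Q.T) / 2                       # x^T Q x only depends on the symmetric part
    J = np.triu(S, k=1) / 2                 # couplings on pairs i < j
    h = -S.sum(axis=1) / 2                  # local fields
    offset = np.trace(S) / 2 + np.triu(S, k=1).sum() / 2
    return J, h, offset
```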

Ethan Davies
-2

The QUBO form is quadratic programming with binary variables. That is nothing new in the field of optimization, and the form is genuinely needed for some optimization problems.

In general, it is often good when different problem formulations are possible. It encourages you not to think of an optimization problem as an easy path to a solution using the same algorithms every time, but as a problem that may admit many different fruitful approaches to solving it effectively.

An example where quadratic programming with binary variables is the best formulation of an optimization problem: portfolio optimization.
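To make that concrete, here is a minimal sketch of a binary mean-variance portfolio as a QUBO, on hypothetical data (expected returns `mu`, covariance `Sigma`, risk aversion `gamma`). Note that adding a budget constraint such as "pick exactly $k$ assets" would reintroduce the penalty-term issue raised in the question.

```python
import numpy as np

# Toy binary mean-variance portfolio: choose assets x_i in {0, 1} to trade off
# expected return against risk. Objective: gamma * x^T Sigma x - mu^T x, already a QUBO.
rng = np.random.default_rng(0)
n = 8
mu = rng.uniform(0.01, 0.10, size=n)        # hypothetical expected returns
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n                         # a valid (positive semidefinite) covariance
gamma = 0.5

Q = gamma * Sigma - np.diag(mu)             # x^T Q x = gamma x^T Sigma x - mu^T x (since x^2 = x)

# Brute-force check on this toy size (2^8 candidates); real instances need a proper solver.
best_x, best_val = None, np.inf
for b in range(2**n):
    x = np.array([(b >> i) & 1 for i in range(n)])
    val = x @ Q @ x
    if val < best_val:
        best_x, best_val = x, val
print("selected assets:", np.nonzero(best_x)[0])
```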

Matti Sarjala