In empirical algorithmics, researchers aim to understand the performance of algorithms by analyzing empirical data; this is common in machine learning and optimization. We would like to learn something about the relative performance of quantum algorithms and their classical counterparts based on preliminary data from quantum computer emulators. My concern is that we might see encouraging empirical data showing quantum computers scaling better on simulators of up to about 35 qubits, only for that advantage to disappear once more data becomes available. What are the best practices for analyzing the relative performance of classical and quantum algorithms in a robust way that gives insight into scaling? Or is this simply not yet possible?
1 Answer
The pitfalls of benchmarking heuristics apply to quantum algorithms too.
Perhaps the best practice for comparing quantum and classical heuristics without access to quantum hardware is to run them on simulators that model real hardware as faithfully as possible. After seeing that your algorithm does well on a simulation of 35 noiseless qubits, the next step is to check whether it still does well on a simulation of 35 noisy qubits; a sketch of such a comparison appears below. Noisy simulations of QAOA have shown that noise significantly decreases the effectiveness of the algorithm.
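Here is a minimal sketch of that noiseless-vs-noisy comparison, assuming qiskit and qiskit-aer are installed. The two-qubit circuit and the depolarizing error rates are placeholders standing in for your algorithm's circuit and a device-calibrated noise model.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error

# Toy circuit standing in for your algorithm's circuit.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# Simple noise model: depolarizing error on 1- and 2-qubit gates.
# The error rates here are illustrative, not calibrated to any device.
noise = NoiseModel()
noise.add_all_qubit_quantum_error(depolarizing_error(0.001, 1), ["h"])
noise.add_all_qubit_quantum_error(depolarizing_error(0.01, 2), ["cx"])

# Run the same circuit on a noiseless and a noisy simulator.
for backend in (AerSimulator(), AerSimulator(noise_model=noise)):
    result = backend.run(transpile(qc, backend), shots=4096).result()
    print(result.get_counts())
```

Comparing the two output distributions at fixed size is the cheapest way to see how much of the noiseless advantage survives realistic error rates before scaling the study up.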
Your concern that the algorithm might look like it is scaling well on small problems, only for roadblocks to appear at larger sizes, is valid. There is no guarantee that this won't happen; it comes with the territory. One thing you can do is check whether your small-instance data can even distinguish competing scaling hypotheses, as in the sketch below.
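One way to quantify how little small-instance data constrains asymptotics is to fit competing scaling models to the same runtimes and compare the fits. The runtime numbers below are purely hypothetical, and the model forms (polynomial versus exponential) are illustrative assumptions, not a prescribed methodology.

```python
import numpy as np

# Hypothetical runtimes (seconds) at increasing qubit counts n;
# these numbers are made up purely for illustration.
n = np.array([10, 14, 18, 22, 26, 30, 34], dtype=float)
t = np.array([0.02, 0.09, 0.35, 1.4, 5.8, 23.0, 95.0])

log_t = np.log(t)

# Model 1: polynomial scaling, t = a * n**b, so log t is linear in log n.
b, log_a = np.polyfit(np.log(n), log_t, 1)
poly_pred = log_a + b * np.log(n)

# Model 2: exponential scaling, t = a * 2**(c*n), so log t is linear in n.
c_ln2, log_a2 = np.polyfit(n, log_t, 1)
exp_pred = log_a2 + c_ln2 * n

# Sum of squared residuals in log-space for each model.
print("polynomial fit residual:", np.sum((log_t - poly_pred) ** 2))
print("exponential fit residual:", np.sum((log_t - exp_pred) ** 2))
```

If both fits have comparably small residuals, the data simply cannot tell you which scaling regime you are in, which is exactly the failure mode you describe. Collecting more problem sizes, reporting confidence intervals on the fitted exponents, and checking whether the preferred model changes when you drop the largest points are all cheap robustness checks.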
Victory Omole