
IBM has recently announced its 127-qubit Eagle processor. Other approaches, such as Rydberg atom arrays, now reach 256 qubits, as for example in QuEra's QPU.

While these are without a doubt outstanding technical achievements, I am wondering what their intended use case is. The reason I'm asking is that, as far as I know, a rough estimate of the number of qubits that can be entangled without error correction is about $1/\sqrt{\epsilon_{2Q}}$, where $\epsilon_{2Q}$ is the 2-qubit gate error. There are also results, such as this one, showing that even circuits of width 16 are extremely limited in depth (another relevant metric here is IBM's record of $\log_2(QV)=7$ as of today).
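To make that estimate concrete, here is a back-of-the-envelope calculation (the error rates below are purely illustrative values, not figures quoted for any specific device):

```python
import math

# Rough estimate: the number of qubits that can be entangled without
# error correction is roughly 1 / sqrt(eps_2Q), where eps_2Q is the
# 2-qubit gate error. The rates below are illustrative, not device data.
for eps_2q in [1e-2, 5e-3, 1e-3]:
    n_qubits = 1 / math.sqrt(eps_2q)
    print(f"eps_2Q = {eps_2q:.0e}  ->  ~{n_qubits:.0f} entangled qubits")
```

Even at a 2-qubit error of $10^{-3}$ this gives only a few tens of qubits, far below 127.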

Given this, what kinds of circuits/applications can be executed with today's error rates that would require the full >100-qubit processor?

Lior

2 Answers


I think the main reason is to tackle the technical difficulties connected with building a device with a huge number of qubits. Having hundreds of qubits brings about issues with interconnection, wiring to microwave lines for control and measurement, etc. If you look at the decoherence times and quantum volume of ibm_washington, there is little improvement even in comparison with 5-qubit processors. To sum up, there is currently no real-world use case; at this stage we need to build prototypes to solve current technical issues and perhaps to discover new, previously unanticipated ones.
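If you want to check this yourself, the calibration data can be pulled from the provider. Below is a minimal sketch using the Qiskit IBMQ provider of that era; it assumes a saved IBM Quantum account with access to the ibm_washington backend:

```python
from qiskit import IBMQ

# Assumes an account was saved beforehand with IBMQ.save_account(...)
# and that it has access to the ibm_washington backend.
provider = IBMQ.load_account()
backend = provider.get_backend("ibm_washington")

props = backend.properties()
config = backend.configuration()

# Reported quantum volume of the device (None if not published).
print("Quantum volume:", getattr(config, "quantum_volume", None))

# Median T1/T2 across all qubits, converted to microseconds.
n = config.n_qubits
t1s = sorted(props.t1(q) * 1e6 for q in range(n))
t2s = sorted(props.t2(q) * 1e6 for q in range(n))
print(f"Median T1: {t1s[n // 2]:.0f} us, median T2: {t2s[n // 2]:.0f} us")
```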

Martin Vesely

This paper may be of interest with regard to this question; we were able to execute QAOA circuits using all 127 qubits of ibm_washington (and the computation was not complete noise, thanks to the short-depth circuit construction). The error rates are quite high, so a practical use case is not there yet, but it is possible to actually use all of the qubits on the chip.
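For a rough flavour of what such a short-depth construction looks like, here is a minimal sketch (not the exact construction from the paper; the path graph, qubit count, and angles are illustrative assumptions):

```python
from qiskit import QuantumCircuit

n = 127                 # number of qubits (illustrative)
gamma, beta = 0.4, 0.3  # illustrative QAOA angles, not optimized values

# A path graph matching linear nearest-neighbour connectivity keeps the
# cost layer shallow: each qubit interacts with at most two neighbours.
edges = [(i, i + 1) for i in range(n - 1)]

qc = QuantumCircuit(n)
for q in range(n):
    qc.h(q)             # uniform superposition over all bitstrings

# p = 1 QAOA for MaxCut: cost layer of RZZ rotations on the graph edges...
for i, j in edges:
    qc.rzz(2 * gamma, i, j)

# ...followed by the mixer layer of single-qubit RX rotations.
for q in range(n):
    qc.rx(2 * beta, q)

qc.measure_all()
print("circuit depth:", qc.depth(), "qubits:", qc.num_qubits)
```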

Elijah Pelofske