17

At the beginning of quantum computation, David Deutsch made a strong claim that the Many Worlds interpretation of quantum theory was at the foundation of what he was able to do. There was a lot of interest in his claim, and Many Worlds had something of a resurgence at the time. Amongst other strong claims, I'm particularly aware that Chris Fuchs has pressed a Bayesian perspective that is rather different from the Many Worlds interpretation.

My perception, however, is that quantum computation/information/communication is now much less in thrall to any particular interpretation, unless we say there is such a thing as the minimalist interpretation. Most quantum computation work seems to be reported in journals in as close to the unadorned Fock space formalism of quantum optics as possible (often cut down to a small, finite number of harmonic oscillators), in one of the many formalisms and ansätze of statistical or condensed matter Physics, or, of course, in terms of qubits. Nonetheless, quantum information is not called quantum particle information (it's not part of particle physics), and the Bell and other inequalities that form the basis of the general rejection of classical particle physics are central instead of being as peripheral as they often seem in particle physics. So it seems to me that there is at least something of a change of perception, of zeitgeist, if you will.

Quantum computation was for a long time a subject of keen interest at the academic quantum foundations conferences that I went to back in the day. Is that interest still there? As Peter Shor points out in a comment, it is, but is any of that interest felt in the wider Physics community, which has relatively little practical interest in quantum foundations? [Peter Shor Answered that too (in a way that agrees with my prejudices, so it must be right) while I was writing the EDIT below, which tries to address one way in which I think the zeitgeist is being affected.]

EDIT: I suppose a significant number of Physicists will occasionally read PRL articles on quantum information that are highlighted by the editors, and it would be hard to avoid all Scientific American, New Scientist, and Nature articles on the subject. Someone considering grant sources would presumably at least briefly consider whether their work could be applied to QI (I'm bored already with the comp../inf../comm..) and occasionally talk to the QI group at their Institution (almost everybody has one of those by now, right?). Insofar as they do, it seems inevitable that some QI viewpoints must be increasingly out there. Bell's theorem is surely more a topic of informed discussion than it was 20 years ago. To make the question more specific, and to include my bias in a more explicitly leading way: do people agree with the implication of my second paragraph that “particles” are less in the air than they were in the heyday of Particle Physics? [If you not unreasonably think that's too specific, feel free to Answer the original and more general Question of the title. The more specific Question still admits at least evidence of the sort given by Peter Shor, and shouldn't be thought of or addressed as only a Yes/No question.]

Hat Tip to Roy Simpson for this Question (his is the credit, mine is the blame).

Peter Morgan

3 Answers

7

Since I am mentioned in the question, I shall make a few points about the history of Quantum Computation as I understand it.

Originally Deutsch proposed quantum computation as a "thought experiment" which would prove the Many Worlds Interpretation, or at least provide a theoretical basis for constructing experiments. Several claims were made at the launch of his quantum computation approach, one of which related to the efficiency of the quantum algorithms that could be constructed versus the efficiency of the corresponding Turing algorithms. This mismatch in efficiency was presented as an argument that the quantum case demonstrates some form of parallelism.
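
To make that efficiency claim concrete, here is a minimal sketch (my own illustration, not part of Deutsch's original argument) of the single-qubit Deutsch algorithm simulated with NumPy: it decides whether a one-bit function f is constant or balanced with a single application of the oracle, whereas a classical algorithm needs two evaluations of f. The helper names (`oracle`, `deutsch`) are mine.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def oracle(f):
    """Build the two-qubit unitary U_f |x>|y> = |x>|y XOR f(x)>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    """Decide 'constant' vs 'balanced' with one call to the oracle."""
    state = np.kron([1, 0], [0, 1])        # prepare |0>|1>
    state = np.kron(H, H) @ state          # Hadamard both qubits
    state = oracle(f) @ state              # single oracle application
    state = np.kron(H, np.eye(2)) @ state  # Hadamard the first qubit
    p0 = state[0] ** 2 + state[1] ** 2     # probability the first qubit reads 0
    return "constant" if p0 > 0.5 else "balanced"

# Classically two evaluations of f are needed; here each case uses one.
for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    print(deutsch(f))  # constant, constant, balanced, balanced
```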

What has happened in the years since is that the field of Quantum Complexity Theory has grown, and the known quantum algorithms have been shown to belong to a class called BQP, which contains the serial class P, is believed (though not proven) to be strictly larger, and is not known to contain the classically hard classes such as NP. This leads to some puzzlement as to what quantum algorithms actually prove about physical reality. Had a more dramatic result been proven (which theoretically could still happen), such as BQP = NP, then physicists might have taken note and started to wonder why this was true.
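
For reference (this is my summary of standard complexity-theory results, not a claim made above), the containments that are actually known can be written as

$$\mathrm{P} \subseteq \mathrm{BPP} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE},$$

with none of the inclusions proven to be strict, and with the relationship between BQP and NP open in both directions.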

In the meantime, interest in the Many Worlds Interpretation has continued outside the Quantum Computation framework, arising instead from the Quantum Gravity perspective; and I suppose that few Quantum Computation engineers see their work as relating to cosmological issues, although Deutsch likely remains the exception with his Multiverse ideas.

One aspect of the Deutsch model which seemed incongruous to me was the restriction to a finite number of qubits in the "Universal Quantum Computer" model. [Part of the reason for this being that one could then directly match the Turing Machine model, which is explicitly finitely specified.] From a practical perspective having a finite number of qubits might make sense (then again perhaps not...), but as a thought experiment one should perhaps extend the basic notion of quantum computation to a (countably) infinite set of states. I am not sure whether this idea has been followed up, but it seems more foundational than the engineering challenges currently considered in papers, and might lead to more useful discussions of QM Interpretations from the Quantum Computation perspective.

Roy Simpson
2

The overlap between "interpretations of quantum mechanics" and "quantum computing" is still imprinted onto the descriptions of various research groups and grant proposals, but as far as I can tell, pretty much everyone realizes that there is no actual overlap in the research.

It's been clear from the beginning which interpretations are viable and which are not: a few basic experiments (and their theoretical descriptions) are enough to test an interpretation. An interpretation must be "de facto equivalent" to, e.g., the Copenhagen interpretation or any other equivalent interpretation; otherwise it's wrong. And everyone realizes that the selection of the "personally preferred interpretation" from the equivalence class is a matter of personal taste, and physicists usually don't talk about it.

So the quantum computation business has become a specialized and advanced "engineering field" which stands on totally firm foundations and doesn't need to ask any questions about them. On the other hand, much of the continuing research into the "foundations of quantum mechanics" has become a philosophical field whose purpose is to unlearn things that have already been learned, or to solve already-solved questions again and again. The two disciplines have totally diverged in practice.

Luboš Motl
-4

Quantum Computation is testing the validity of QM, which I guess is itself an interpretation to those people who think that QM is written in stone at the base of the universe. So it is testing interpretation, and thus may change it.

Real physical attempts at Quantum Computation lean heavily on QM working perfectly, so they are, in effect, experiments on the validity of QM, even if the lab doing the experiment does not think so.

I was not going to include this, but since Luboš does above, here it goes: Quantum Computation, in my opinion, shows just how silly QM is. QM predicts that you can build a computer of essentially zero energy cost, yet so capable that it will be able to calculate anything. When someone tells you about something that sounds too good to be true, do you just go ahead and buy?

Tom Andersen