Kerr media, i.e. media displaying the optical Kerr effect, are used in some optical quantum computers. Nielsen and Chuang's Quantum Computation and Quantum Information (pp. 289-290) says:

Nonlinear optics provides one final useful component for this exercise: a material whose index of refraction $n$ is proportional to the total intensity $I$ of light going through it:

$$n(I) = n + n_2 I$$

This is known as the optical Kerr effect [...]
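
To get a feel for how the intensity enters, here is a rough back-of-the-envelope sketch of the nonlinear phase $\varphi = \frac{2\pi}{\lambda}\, n_2 I L$ a beam accumulates; the $n_2$, wavelength, length, and intensities below are illustrative assumptions (roughly fused-silica-like), not values from the book:

```python
import math

# Rough estimate of the nonlinear phase picked up in a Kerr medium:
# phi = (2*pi / lam) * n2 * I * L.  All numbers are illustrative assumptions.
lam = 800e-9   # wavelength [m]
n2 = 2.7e-20   # nonlinear index n2 [m^2/W], ballpark value for fused silica
L = 1e-2       # propagation length [m]

for I in (1e12, 1e16):                     # intensities [W/m^2]
    delta_n = n2 * I                       # intensity-dependent index change
    phi = 2 * math.pi / lam * delta_n * L  # accumulated nonlinear phase [rad]
    print(f"I = {I:.0e} W/m^2: delta_n = {delta_n:.1e}, phi = {phi:.2e} rad")
```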

The Wikipedia article on the Kerr effect mentions three (!) different Kerr effects: the magneto-optic, the electro-optic, and the plain optical Kerr effect. Since the phrasing and the equation match, I assume the book means the optical Kerr effect (also known as the AC Kerr effect).

The major difference between the optical/AC Kerr effect and the electro-optic/DC Kerr effect is that, for the DC version to work, one must apply an external electric field to the medium, whereas for the AC version the light passing through produces the effect by itself. So far, so good.

Now, the problem in my understanding arises when considering the term "intensity" ($I$) and what it means. When talking about optical quantum computers, we're talking about single photons, so the only way to vary a photon's energy is to vary its wavelength (and even then the difference in energy is very small).
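
(For reference, a photon's energy is $$E = \hbar\omega = \frac{hc}{\lambda},$$ so across the visible range, $\lambda \approx 400$ to $700\ \mathrm{nm}$, it only varies between roughly $1.8$ and $3.1\ \mathrm{eV}$.)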

Some papers I've read talk about using light of higher intensity to cause a larger Kerr effect... but here there is only a single photon. How can you vary the intensity in a way large enough to really change the effect's significance?

auden

1 Answer


As far as the intensity of a single photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density $$ u=\frac{\hbar\omega}{V} $$ is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) divided by something called the mode volume $V$. The mode volume is a crucial quantity, but it changes from situation to situation: it is essentially the volume occupied by the mode in question. For a cavity mode, a good approximation is the cavity length times the focal spot's cross-sectional area; for a fiber it's the volume of the fiber, and so on. It will rarely be less than a cubic millimeter.

If you put those naive numbers in, you will get intensities that are some $22$ orders of magnitude weaker than the atomic unit of intensity, and that gives a good feeling for just how much work there is to do. A naive experiment where you just expect one photon to influence another single photon through a good old-fashioned Kerr-effect nonlinearity is simply not feasible.
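
As a sanity check on those numbers, here is a minimal sketch of that estimate, assuming (as above) a photon of a couple of eV and a $1\ \mathrm{mm}^3$ mode volume:

```python
import math

# Single-photon intensity I = u*c with u = hbar*omega / V, using the "naive
# numbers" from the text: a ~2 eV photon in a 1 mm^3 mode volume (assumptions).
hbar_omega = 2 * 1.602e-19   # photon energy: ~2 eV in joules
V = 1e-9                     # mode volume: 1 mm^3 in m^3
c = 2.998e8                  # speed of light [m/s]

I = (hbar_omega / V) * c     # intensity [W/m^2]

I_atomic = 3.51e20           # atomic unit of intensity: ~3.51e16 W/cm^2 in W/m^2

print(f"single-photon intensity ~ {I:.1e} W/m^2")    # ~1e-1 W/m^2
print(f"atomic intensity unit   ~ {I_atomic:.1e} W/m^2")
print(f"ratio ~ 10^{math.log10(I_atomic / I):.0f}")  # ~10^22
```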

That's not to say that the goal is unreachable, and indeed there are many approaches that provide credible roadmaps to nonlinear photon-photon interactions (and interesting partial results in that direction), but they do involve nontrivial work on the interaction medium. Rococo's link on Rydberg-atom media is a good example, atoms near nano-photonic structures are a promising avenue (example), and so on, but none of the proposals is (to my knowledge) scalable enough that you can shoot for an interaction-based photonic quantum computer. Simply put, photons just don't interact with each other strongly enough to make two-photon gates with them.


And, as you pointed out in the comments, this leaves a gap in terms of how existing photonic quantum computers operate if you can't use any two-photon entangling gates. This is a nontrivial question, and the answer is that to do quantum computation you need entanglement between the constituent qubits, but you don't actually need to create it during 'runtime'.

Instead, the relevant model is called measurement-based quantum computing, and the idea is essentially that you start off with a highly entangled multi-qubit state as a resource, and then you perform a bunch of single-qubit gates and measurements (along with feed-forward, so that e.g. each measurement outcome can influence which gates and measurements are implemented on the next qubit down the line) without any further entangling operations.
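
To make that concrete, here is a minimal numerical sketch (a generic pure-state simulation, not tied to any photonic platform; the input state $\psi$ and angle $\theta$ are arbitrary assumptions) of a single measurement-based step: one measurement on a two-qubit cluster state, plus the outcome-dependent Pauli correction, deterministically implements $H\, R_z(\theta)$ on the input qubit with no entangling gate at runtime:

```python
import numpy as np

rng = np.random.default_rng(0)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
def Rz(t):
    return np.diag([1.0, np.exp(1j * t)])

psi = np.array([0.6, 0.8j])              # arbitrary (normalized) input qubit
theta = 0.7                              # angle the measurement "implements"

# Resource step: entangle the input with a fresh |+> qubit via a CZ gate.
plus = np.array([1, 1]) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1])
state = CZ @ np.kron(psi, plus)

# Measure qubit 1 in the rotated basis {(|0> +- e^{-i*theta}|1>)/sqrt(2)}.
basis = [np.array([1,  np.exp(-1j * theta)]) / np.sqrt(2),
         np.array([1, -np.exp(-1j * theta)]) / np.sqrt(2)]
amps = [b.conj() @ state.reshape(2, 2) for b in basis]  # unnormalized qubit-2 states
p0 = np.linalg.norm(amps[0]) ** 2
m = 0 if rng.random() < p0 else 1        # random measurement outcome

out = amps[m] / np.linalg.norm(amps[m])  # post-measurement state of qubit 2

# Feed-forward: the outcome m decides the Pauli correction on the next qubit.
out = np.linalg.matrix_power(X, m) @ out

# Check: up to a global phase, we implemented H @ Rz(theta) on psi.
target = H @ Rz(theta) @ psi
print(f"m = {m}, |<target|out>| = {abs(np.vdot(target, out)):.6f}")  # -> 1.000000
```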

This shifts the burden to the creation of the entangled resource state, but that can be done at the start, usually through spontaneous parametric down-conversion plus a bunch of Bell-state measurements, so that's where the nonlinearity ends up: the success probability is still low, but you can keep trying until you have a suitable state, and then you get on with the calculation.
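
The "keep trying" part is cheap to model: if each attempt at generating a piece of the resource state succeeds with probability $p$ (the value below is an arbitrary assumption), the number of attempts needed is geometrically distributed with mean $1/p$, and this cost is paid offline, before the computation runs:

```python
import numpy as np

# Toy model of repeat-until-success resource-state generation: attempts until
# the first success are geometric with mean 1/p.  p is an assumed value.
rng = np.random.default_rng(1)
p = 0.01                                   # assumed per-attempt success probability
attempts = rng.geometric(p, size=100_000)  # attempts needed in each of 1e5 trials
print(f"mean attempts ~ {attempts.mean():.0f} (expected 1/p = {1 / p:.0f})")
```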

For further details on this scheme, I'll refer you to this PhD thesis.

Emilio Pisanty