
I try to read up on diffraction limits and Gaussian beams, but the result is always that the beam is characterized by an angle, the edge of a cross-section of a cone. That says nothing about the physical limits on how I choose that angle. It just describes (once far away from the narrowest point) the shape of a cone: the width is proportional to the distance from the narrowest spot.

So what’s to prevent you from describing a laser that's arbitrarily tight, at some arbitrary distance, and learning that the required angle is very close to 0?

What, in principle, prevents this from happening?

In Robert L. Forward’s “hard science fiction” novel, he based the lightsail description on actual research but did not go into detail in the novel. As a plot point, the senders had to enlarge a focusing device (probably a zone plate) to send the beam used for braking, so the larger sending aperture was necessary for a longer distance. They didn’t finish in time due to funding problems, but saved the mission by doubling the light frequency instead. So that sounds like a diffraction effect.

I suppose the relationship of what is possible simply scales with wavelength, and once you divide that out there is some relationship between the possible size of the emitter, the size of the target, and the separation between them? Why does making the emitter larger allow the target to be smaller?

To use some concrete numbers, suppose the target is a lightsail 1 light year away and 1 Mm in diameter. The wavelength in Forward’s story was green light, and if a higher frequency allows better focus, then the best beam would use the highest frequency that doesn’t start causing problems by breaking bonds in the atoms reflecting it, so just past visible, where UV begins. What size emitter (final focusing device) would be needed?

Qmechanic
JDługosz

2 Answers


So what’s to prevent you from describing a laser that's arbitrarily tight, at some arbitrary distance, and learning that the required angle is very close to 0?

The half angle of divergence is given by

$$\theta = \frac{\lambda}{\pi w_0}$$

where $w_0$ is the beam radius at its narrowest point (the waist, or focal point), and $\lambda$ is the optical wavelength.

Typically with a laser the waist is at the output aperture of the laser cavity, and the beam diverges from there. If you built your laser with a converging output, you'd push the waist out along the z direction (the direction of propagation), but you'd also reduce the waist radius, so ultimately increase the divergence angle.
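As a quick numerical sketch of the formula above (the 633 nm HeNe wavelength and 0.5 mm waist are illustrative values, not from the question):

```python
import math

def divergence_half_angle(wavelength_m, waist_radius_m):
    """Diffraction-limited half-angle of a Gaussian beam, in radians."""
    return wavelength_m / (math.pi * waist_radius_m)

# A HeNe laser at 633 nm with a 0.5 mm waist radius at the output aperture:
theta = divergence_half_angle(633e-9, 0.5e-3)
print(f"{theta:.2e} rad")  # about 4e-4 rad, i.e. ~0.4 mrad
```

Note how the angle scales inversely with the waist: halving the waist radius doubles the divergence.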

So you can't choose to produce an arbitrarily small divergence angle unless you're prepared to build a laser with an arbitrarily large output aperture.

To use some concrete numbers, suppose the target is a lightsail 1 light year away, and it is 1 Mm in diameter.

1 light year is about $10^{16}$ meters. So you need a divergence angle on the order of $10^6 / 10^{16}$, or $10^{-10}$ radians. You need a beam waist of

$$ w_0 > \frac{\lambda}{\pi \theta} $$

If your wavelength is 500 nm, this means a waist radius of at least about 1600 m. In practice I expect there would be "unique engineering challenges" in designing optics close enough to ideal to achieve this kind of divergence. I've never heard of beam divergence being measured in units smaller than milliradians, but I don't know what's been achieved in hero experiments.
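The arithmetic above can be checked with a short script (a sketch; the rounded $10^{16}$ m and order-of-magnitude values follow the answer's estimates):

```python
import math

wavelength = 500e-9      # green light, m
distance = 1e16          # roughly 1 light year, m
sail_diameter = 1e6      # 1 Mm, m

theta = sail_diameter / distance        # required divergence, ~1e-10 rad
w0 = wavelength / (math.pi * theta)     # required waist radius, m
print(f"theta = {theta:.0e} rad, w0 = {w0:.0f} m")  # w0 ~ 1600 m
```

Halving the wavelength (doubling the frequency, as in Forward's story) halves the required waist for the same target and distance.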

Will Vousden
The Photon

As with all other fields in physics, much of what one observes in optics is governed by scales. These scales are typically: the wavelength $\lambda$, the size of the beam (smallest beam radius $w_0$ or aperture radius $R$) and the propagation distance $z$ (or focal length $f$).

For simplicity let's assume that we have a Gaussian beam (even if we don't, we can still use these relations, with the understanding that things can then only be worse). If I produce a laser beam with a flat wave front (no curvature) with a given size and given wavelength, then that beam will inevitably start to diverge with a beam divergence angle $\theta$ given by $$ \theta = \frac{\lambda}{\pi w_0} . $$

Now say we want to try to counteract the divergence by adding a converging phase to the beam. It would be like sending the beam through a lens first. For comparison, we keep the size of the beam with the converging phase the same as before. Sure enough, the beam will first converge to a focal point, but after that it will start to diverge again. This time the divergence angle is larger, because the minimum size of the beam, obtained at the focus, is now smaller, and the beam divergence angle is based on this smallest size.

What is the size of the focal spot? A lens actually implements a Fourier transform. So the relationship between the beam size before the lens and the size of the focus (beam radius at focus: $w_f$) is governed by this Fourier relationship between Gaussian functions: $$ \tag{1} w_f = \frac{\lambda f}{\pi w_0} $$ Often the beam size is given by the aperture size of the system (lens). In such a case one can replace $w_0$ by the radius of the aperture $R$. Then we see that for a given focal length and wavelength the size of the focal spot becomes smaller as the aperture becomes larger. (That's why they needed a large aperture in the story.)
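The Fourier relationship in (1) can be illustrated numerically (the focal length and aperture values here are arbitrary examples):

```python
import math

def focal_spot_radius(wavelength_m, focal_length_m, input_radius_m):
    """Focused Gaussian spot radius, w_f = lambda * f / (pi * w_0)."""
    return wavelength_m * focal_length_m / (math.pi * input_radius_m)

# Doubling the aperture halves the focal spot; halving the wavelength
# (doubling the frequency) does the same -- the trade made in the story.
w_small_aperture = focal_spot_radius(500e-9, 1.0, 0.01)
w_large_aperture = focal_spot_radius(500e-9, 1.0, 0.02)
assert math.isclose(w_large_aperture, w_small_aperture / 2)
```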

What if we try to produce the focused spot at an astronomical distance away from us? It turns out there is a maximum distance to which we can "throw" the beam waist (focus). This distance is given by the Rayleigh range of the output beam: $$ z_{\rm Rayleigh} = \frac{\pi w_0^2}{\lambda} . $$ So we see that, in order to increase this distance, we would also have to increase the size of the beam. So in the end one cannot win.
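As a numerical sketch of this limit (assuming the question's roughly $10^{16}$ m at 500 nm; this is an illustration, not a design):

```python
import math

def rayleigh_range(wavelength_m, waist_radius_m):
    """Maximum distance a waist can be 'thrown': z_R = pi * w0^2 / lambda."""
    return math.pi * waist_radius_m**2 / wavelength_m

# Output beam radius needed to place a focus ~1e16 m away at 500 nm,
# inverting z_R = pi * w0^2 / lambda for w0:
needed_radius = math.sqrt(1e16 * 500e-9 / math.pi)
print(f"{needed_radius:.0f} m")  # roughly 4e4 m, i.e. tens of kilometres
```

Note the square-root scaling: pushing the focus ten times farther requires an output beam only about three times larger, but at interstellar distances the aperture is still enormous.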

flippiefanus