168

The video “How Far Can Legolas See?” by MinutePhysics recently went viral. The video states that although Legolas would in principle be able to count $105$ horsemen $24\text{ km}$ away, he shouldn't have been able to tell that their leader was very tall.


I understand that the main goal of MinutePhysics is mostly educational, and for that reason it assumes a simplified model for seeing. But if we consider a more detailed model of vision, it appears to me that even with human-size eyeballs and pupils$^\dagger$, one might actually be able (in principle) to distinguish smaller angles than the well-known angular resolution: $$\theta \approx 1.22 \frac \lambda D$$

So here's my question—using the facts that:

  • Elves have two eyes (which might be useful as in e.g. the Very Large Array).
  • Eyes can dynamically move and change the size of their pupils.

And assuming that:

  • Legolas could do intensive image processing.
  • The density of photoreceptor cells in Legolas's retina is not a limiting factor here.
  • Elves are pretty much limited to visible light just as humans are.
  • They had the cleanest air possible on Earth on that day.

How well could Legolas see those horsemen?


$^\dagger$ I'm not sure if this is an accurate description of elves in Tolkien's fantasy.

Ali

9 Answers

123

Fun question!

As you pointed out,

$$\theta \approx 1.22\frac{\lambda}{D}$$

For a human-like eye, which has a maximum pupil diameter of about $9\ \mathrm{mm}$ and choosing the shortest wavelength in the visible spectrum of about $390\ \mathrm{nm}$, the angular resolution works out to about $5.3\times10^{-5}$ (radians, of course). At a distance of $24\ \mathrm{km}$, this corresponds to a linear resolution ($\theta d$, where $d$ is the distance) of about $1.2\ \mathrm m$. So counting mounted riders seems plausible since they are probably separated by one to a few times this resolution. Comparing their heights which are on the order of the resolution would be more difficult, but might still be possible with dithering. Does Legolas perhaps wiggle his head around a lot while he's counting? Dithering only helps when the image sampling (in this case, by elven photoreceptors) is worse than the resolution of the optics. Human eyes apparently have an equivalent pixel spacing of something like a few tenths of an arcminute, while the diffraction-limited resolution is about a tenth of an arcminute, so dithering or some other technique would be necessary to take full advantage of the optics.
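
As a quick sanity check on these numbers, here's a minimal back-of-the-envelope script (a sketch; the wavelength and pupil diameter are just the values assumed above):

```python
wavelength = 390e-9  # m, short-wavelength end of the visible spectrum
pupil = 9e-3         # m, assumed maximum pupil diameter
distance = 24e3      # m, distance to the riders

theta = 1.22 * wavelength / pupil  # Rayleigh criterion, radians
print(theta)             # ~5.3e-5 rad
print(theta * distance)  # ~1.27 m, the "about 1.2 m" linear resolution
```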

An interferometer has an angular resolution equal to a telescope with a diameter equal to the separation between the two most widely separated detectors. Legolas has two detectors (eyeballs) separated by about 10 times the diameter of his pupils, $75\ \mathrm{mm}$ or so at most. This would give him a linear resolution of about $15\ \mathrm{cm}$ at a distance of $24\ \mathrm{km}$, probably sufficient to compare the heights of mounted riders.
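
Treating the pair of eyes as a two-element interferometer, the same one-liner with the baseline in place of the pupil diameter gives (again a sketch, using the $75\ \mathrm{mm}$ separation assumed above):

```python
wavelength = 390e-9  # m
baseline = 75e-3     # m, assumed separation of Legolas' pupils
distance = 24e3      # m

theta = 1.22 * wavelength / baseline  # resolution with the baseline as the aperture
print(theta * distance)  # ~0.15 m at 24 km
```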

However, interferometry is a bit more complicated than that. With only two detectors and a single fixed separation, only features with angular separations equal to the resolution are resolved, and direction is important as well. If Legolas' eyes are oriented horizontally, he won't be able to resolve structure in the vertical direction using interferometric techniques. So he'd at the very least need to tilt his head sideways, and probably also jiggle it around a lot (including some rotation), again to get decent sampling of different baseline orientations. Still, it seems like with a sufficiently sophisticated processor (elf brain?) he could achieve the reported observation.

Luboš Motl points out some other possible difficulties with interferometry in his answer, primarily that the combination of a polychromatic source and a detector spacing many times larger than the observed wavelength leads to no correlation in the phase of the light entering the two detectors. While true, Legolas may be able to get around this if his eyes (specifically the photoreceptors) are sufficiently sophisticated so as to act as a simultaneous high-resolution imaging spectrometer or integral field spectrograph and interferometer. This way he could pick out signals of a given wavelength and use them in his interferometric processing.

A couple of the other answers and comments mention the potential difficulty drawing a sight line to a point $24\ \mathrm{km}$ away due to the curvature of the Earth. As has been pointed out, Legolas just needs to have an advantage in elevation of about $45\ \mathrm m$ (the radial distance from a circle $6400\ \mathrm{km}$ in radius to a tangent $24\ \mathrm{km}$ along the circumference; Middle-Earth is apparently about Earth-sized, or may be Earth in the past, though I can't really nail this down with a canonical source after a quick search). He doesn't need to be on a mountaintop or anything, so it seems reasonable to just assume that the geography allows a line of sight.
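
That elevation is just the sagitta of the $24\ \mathrm{km}$ chord; a quick check (assuming the Earth-like $6400\ \mathrm{km}$ radius used above):

```python
import math

R = 6.4e6  # m, assumed Earth-like radius of Middle-Earth
d = 24e3   # m, distance along the surface

h_exact = R * (1.0 / math.cos(d / R) - 1.0)  # radial gap between circle and tangent
h_approx = d**2 / (2 * R)                    # small-angle (sagitta) approximation
print(h_exact, h_approx)  # both ~45 m
```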

Finally, a bit about "clean air". In astronomy (if you haven't guessed my field yet, now you know), we refer to distortions caused by the atmosphere as "seeing". Seeing is often measured in arcseconds ($3600'' = 60' = 1^\circ$), referring to the limit imposed on angular resolution by atmospheric distortions. The best seeing, achieved from mountaintops in perfect conditions, is about $1''$, or in radians $4.8\times10^{-6}$. This is about the same angular resolution as Legolas' amazing interferometric eyes. I'm not sure what seeing would be like horizontally across a distance of $24\ \mathrm{km}$. On the one hand there is a lot more air than when looking up vertically; the atmosphere is thicker than $24\ \mathrm{km}$, but its density drops rapidly with altitude. On the other hand, the relatively uniform density and temperature at fixed altitude would cause less variation in refractive index than in the vertical direction, which might improve seeing. If I had to guess, I'd say that for very still air at a uniform temperature he might get seeing as good as $1''$, but under more realistic conditions with the Sun shining, mirage-like effects probably take over, limiting the resolution that Legolas can achieve.

Kyle Oman
23

Let's first substitute the numbers to see what diameter of the pupil the simple formula requires: $$ \theta = 1.22 \frac{0.4\,\mu{\rm m}}{D} = \frac{2\,{\rm m}}{24\,{\rm km}} $$ I've substituted the minimal (violet) wavelength because that color gives the best resolution, i.e. the smallest $\theta$. The height of the knights is two meters. Unless I made a mistake, the diameter $D$ is required to be 0.58 centimeters. That's completely sensible because the maximally opened human pupil is 4–9 millimeters in diameter.
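
For the record, here is that substitution as a small script (a sketch using the numbers above):

```python
wavelength = 0.4e-6  # m, violet light
height = 2.0         # m, height of a knight
distance = 24e3      # m

theta = height / distance      # angle that must be resolved, ~8.3e-5 rad
D = 1.22 * wavelength / theta  # required pupil diameter
print(D)  # ~5.9e-3 m, i.e. the ~0.58 cm quoted above
```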

Just like the video says, the diffraction formula therefore marginally allows one to observe not only the presence of the knights – to count them – but also, marginally, their first "internal detailed" properties, perhaps that the pants are darker than the shirt. However, to see whether the leader is 160 cm or 180 cm tall is clearly impossible because it would require a resolution better by another order of magnitude. Just like the video says, it isn't possible with visible light and human eyes. One would need either a ten times larger eye and pupil, or ultraviolet light with a ten times higher frequency.

Making the pupils narrower doesn't help, because the resolution allowed by the diffraction formula would get worse. The significantly blurrier images are of no help as additions to the sharpest image. We know that in the real world of humans, too: if someone's vision is much sharper than someone else's, the second person is pretty much useless in refining the information about some hard-to-see objects.

The atmospheric effects are likely to worsen the resolution relative to the simple expectation above. Even if we have the cleanest air – it's not just about the clean air; we need uniform air with a constant temperature, and so on, and it is never so uniform and static – it still distorts the propagation of light and implies some additional deterioration. All these considerations are of course completely academic for me, who could reasonably ponder whether I see people sharply enough from 24 meters to count them. ;-)

Even if the atmosphere worsens the resolution by a factor of 5 or so, the knights may still show up as minimal "blurry dots" on the retina, and as long as the distance between knights is greater than the (worsened) resolution, say 10 meters, one will be able to count them.

In general, the photoreceptor cells are indeed dense enough so that they don't really worsen the estimated resolution. They're dense enough so that the eye fully exploits the limits imposed by the diffraction formula, I think. Evolution has probably worked up to the limit because it's not so hard for Nature to make the retinas dense and Nature would be wasting an opportunity not to give the mammals the sharpest vision they can get.

Concerning tricks to improve the resolution or to circumvent the diffraction limit, there are almost none. Long-term observation doesn't help unless one could measure the location of the dots with a precision better than the spacing of the photoreceptor cells. Mammals' organs just can't be that static. Image processing using many unavoidably blurry images at fluctuating locations just cannot produce a sharp image.

The trick from the Very Large Array doesn't work, either. That's because the Very Large Array only helps for radio (i.e. long) waves, so that the individual elements in the array measure the phase of the wave and the information about the relative phase is used to sharpen the information about the source. The phase of visible light – unless it's coming from lasers, and even in that case it is questionable – is completely uncorrelated between the two eyes because the light is not monochromatic and the distance between the two eyes is vastly greater than the average wavelength. So the two eyes only have the virtue of doubling the overall intensity, and of giving us 3D stereo vision. The latter is clearly irrelevant at a distance of 24 kilometers, too. The angles at which the two eyes must point to see an object 24 km away differ measurably from parallel; but once the muscles adapt to these slightly non-parallel angles, what the two eyes see from 24 km away is indistinguishable.
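
This phase-decorrelation argument can be made quantitative via the coherence length of broadband light, $l_c \sim \lambda^2/\Delta\lambda$. A rough sketch with assumed visible-band numbers:

```python
lam = 550e-9     # m, central wavelength of visible light
dlam = 300e-9    # m, assumed bandwidth of the full visible band (~400-700 nm)
baseline = 6e-2  # m, rough separation of the two eyes

l_coh = lam**2 / dlam    # coherence length of broadband light
print(l_coh)             # ~1e-6 m
print(baseline / l_coh)  # ~6e4: the baseline exceeds the coherence length hugely
```

So the path lengths to the two eyes would have to match to better than about a micron for any phase correlation to survive, which is exactly the point being made.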

Luboš Motl
12

Take the following idealized situation:

  • the person of interest is standing perfectly still, and is of a fixed homogeneous color
  • the background (grass) is of a fixed homogeneous color (significantly different from the person)
  • Legolas knows the proportions of people, and the colors of the person of interest and the background
  • Legolas knows the PSF of his optical system (including his photoreceptors)
  • Legolas knows the exact position and orientation of his eyes
  • assume that there is essentially zero noise in his photoreceptors, and that he has access to the output of each one

From this, Legolas can calculate the exact response across his retina for any position and (angular) size of the person of interest, including any diffraction effects. He can then compare this exact template to the actual sensor data and pick the one that best matches -- note that this includes matching the manner in which the response rolls off and/or any diffraction fringes around the border of the imaged person (I'm assuming that the sensor cells in his eyes over-sample the PSF of the optical parts of his eyes).

(To make it even more simple: it's pretty obvious that given the PSF, and a black rectangle on a white background, we can compute the exact response of the optical system -- I'm just saying that Legolas can do the same for his eyes and any hypothetical size/color of a person.)

The main limitations on this are:

  1. how many different template hypotheses he considers,
  2. Any noise or turbulence that distorts his eyes' response away from the calculable ideal response (noise can be alleviated by integration time),
  3. His ability to control the position and orientation of his eyes, i.e. $2\ \mathrm m$ at $24\ \mathrm{km}$ is only $\approx 8\times10^{-5}$ radians -- this maps to $\approx 0.8\ \mu\mathrm m$ displacements in the position of a spot on the outside of his eyes (assuming a $1\ \mathrm{cm}$ eyeball radius).

Essentially, I'm sketching out a Bayesian type of super-resolution technique as alluded to on the Super-resolution Wikipedia page.
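
Here's a toy 1-D numerical sketch of this idea (all parameters are invented for illustration): render a bar of unknown angular height, blur it with a known PSF that is much wider than the height differences of interest, and recover the height by least-squares comparison against a bank of precomputed templates.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 2001)  # angular coordinate, arbitrary units (0.01 spacing)

def psf(u, sigma=1.0):
    """Gaussian PSF, much wider than the height differences we want to measure."""
    return np.exp(-u**2 / (2 * sigma**2))

def template(height):
    """Exact blurred image of a 'rider' of the given angular height."""
    bar = ((x > 0) & (x < height)).astype(float)
    img = np.convolve(bar, psf(x), mode="same")
    return img / img.max()

data = template(1.80) + rng.normal(0, 0.001, x.size)  # low-noise observation

heights = np.arange(1.50, 2.11, 0.01)  # bank of hypothesized heights
errors = [np.sum((data - template(h))**2) for h in heights]
print(f"best-fit height: {heights[np.argmin(errors)]:.2f}")  # recovers 1.80
```

Even though the PSF width (1.0) dwarfs the 0.01 steps between hypotheses, the low-noise fit picks out the correct template, which is the essence of the argument.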

To avoid the problems of mixing the person with his mount, let's assume that Legolas observed the people when they were dismounted, taking a break maybe. He could tell that the leader is tall just by comparing the relative sizes of different people (assuming that they were milling around at separations much greater than his eyes' resolution).

The actual scene in the book has him discerning all this while the riders were mounted and moving -- at this stage I just have to say "It's a book", but the idea that the diffraction limit is irrelevant when you know a lot about your optical system and what you are looking at is worth noting.

Aside: human rod cells are $O(3{-}5\ \mu\mathrm m)$ -- this will impose a low-pass filter on top of any diffraction effects from the pupil.

A Toy Model Illustration of Similar Problem

Let $B(x; x_0, dx) = 1$ for $x_0 < x < x_0+dx$ and zero otherwise. Convolve $B(x; x_0, dx_1)$ and $B(x; x_0, dx_2)$, with $dx_2>dx_1$, with some known PSF whose width is much less than either $dx_1$ or $dx_2$ but wide compared to $dx_2-dx_1$, to produce $I_1(y), I_2(y)$. (In my conception of this model, this is the response of a single retina cell as a function of the angular position of the eye, $y$.) That is, take two images of different-sized blocks, and align the images so that the left edges of the two blocks are at the same place. If you then ask where the right edges of the images cross a selected threshold value, i.e. $I_1(y_1)=I_2(y_2)=T$, you'll find that $y_2-y_1=dx_2-dx_1$, independent of the width of the PSF (given that it is much narrower than either block). A reason why you often want sharp edges is that when noise is present, the values of $y_1, y_2$ will vary by an amount that is inversely proportional to the slope of the image; but in the absence of noise, the theoretical ability to measure size differences is independent of the optical resolution.

Note: in comparing this toy model to the Legolas problem, the valid objection can be raised that the PSF is not much smaller than the imaged heights of the people. But it does serve to illustrate the general point.
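
A numerical sketch of the toy model (made-up numbers): two blocks whose widths differ by much less than the PSF width, blurred with the same PSF, with the right edges located by a threshold crossing.

```python
import numpy as np

x = np.arange(0.0, 200.0, 0.05)

def blurred_block(width, sigma=5.0):
    """Unit block on [50, 50 + width], convolved with a Gaussian PSF of width sigma."""
    block = ((x >= 50) & (x <= 50 + width)).astype(float)
    psf = np.exp(-(x - 100.0)**2 / (2 * sigma**2))
    img = np.convolve(block, psf, mode="same")
    return img / img.max()

def right_edge(img, T=0.5):
    """Rightmost position at which the image still exceeds the threshold T."""
    return x[np.nonzero(img >= T)[0][-1]]

I1 = blurred_block(40.0)  # dx1
I2 = blurred_block(41.5)  # dx2; dx2 - dx1 = 1.5, much less than sigma

print(right_edge(I2) - right_edge(I1))  # ~1.5, independent of the PSF width
```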

Dave
8

One thing that you failed to take into account: the curvature of the planet (Middle Earth is similar in size and curvature to Earth). At 6 feet tall, you can only see about 3 miles to the ocean horizon. To see something at ground level 24 km away, you would need to be about 45 m above the objects being viewed. So unless Legolas was atop a very (very) tall hill or mountain, he would not have been able to see 24 km in the first place due to the curvature of the planet.

Jim
5

Deconvolution can work, but it only works well in the case of point sources, as e.g. pointed out here. The principle is simple: the blurring due to the finite aperture is a known mathematical mapping that maps a hypothetical infinite-resolution image to one with finite resolution. Given the blurred image, you can then attempt to invert this mapping. The blurred image of a point source that would have affected only one pixel if the image were completely unblurred is called the point spread function. The mapping to the blurred image is completely defined by the point spread function. There are various algorithms that are able to deblur an image to some approximation, e.g. Richardson–Lucy deconvolution or the Wiener filter.

In practice you can't deconvolve an image perfectly, because this involves dividing the Fourier transform of the blurred image by the Fourier transform of the point spread function, and the latter will tend to zero at large wavenumbers. This means that you will end up amplifying the noise at high wavenumbers, and it is precisely at the high wavenumbers that the small-scale details are present. So, the resolution you can obtain will ultimately be limited by the noise.
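
Here is a minimal 1-D sketch of the Wiener approach (the PSF and noise level are made up): the regularization constant $K$ is what stops the division by a transfer function tending to zero at large wavenumbers from amplifying the noise without bound.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
x = np.arange(n)

scene = np.zeros(n)                # sharp scene: two nearby point sources
scene[200], scene[210] = 1.0, 0.7

psf = np.exp(-(x - n // 2)**2 / (2 * 5.0**2))  # Gaussian PSF, sigma = 5 samples
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))          # transfer function (peak moved to index 0)

observed = np.real(np.fft.ifft(np.fft.fft(scene) * H)) + rng.normal(0, 1e-4, n)

K = 1e-5  # Wiener regularization: larger K = less noise amplification, less detail
restored = np.real(np.fft.ifft(np.fft.fft(observed) * np.conj(H) / (np.abs(H)**2 + K)))

print(observed[195:216].round(3))  # blurred: the two sources merge into one lump
print(restored[195:216].round(3))  # restored: two distinct peaks near 200 and 210
```

With $K = 0$ this is naive inverse filtering and the output is swamped by amplified high-wavenumber noise; the noise level sets the usable $K$, and hence the recoverable resolution.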

Count Iblis
5

Legolas probably only needs one eye if he has enough time and can make sufficiently accurate spectral measurements.

First, note that Legolas was watching on a sunny day; we'll assume that, between incident intensity and albedo, the objects were reflecting on the order of $100\ \mathrm{W}/\mathrm{m}^2$ of light, which is about $10^{22}$ photons per second. At 24 kilometers, that's down to about $10^8$ photons per $\mathrm{cm}^2$ per second.

We're not sure how big Legolas' eyes are, as the books don't say, but we can assume they're not freakishly huge, so they're on the order of 1 cm in diameter, which gives him about $6 \cdot 10^{-5}$ radians angular resolution, or roughly $1.5\ \mathrm{m}$ at that distance. As already described, this ought to be adequate to count the number of riders.

Now there are two factors which are hugely important. First, the riders are moving. Thus, by looking at temporal correlations in spectra, Legolas can in principle deduce which spectra belong to the riders as distinct from the background. We can also assume that he's familiar with the spectra of various common objects (leather, hair of various colors, and so on). He can thus build a sub-resolution mixture model where he hypothesizes $n$ objects of distinct spectra and tries to find the size/luminance of each. This is probably the trickiest part, as the spectra of many items tend to be rather broad, giving substantial overlap. Let's suppose that the object he's looking for has only a 10% difference in spectral profile from the others (in aggregate). Then with a one-second integration time he'd have photon shot noise on the order of $10^4$ photons but a signal of about $A\cdot10^7$ photons, where $A$ is the fractional luminance of the target object within the diffraction-limited field of view.

Since super-resolution microscopy can localize items with a precision roughly proportional to the SNR (simplest example: if a source is all in one pixel, all in another, or a fraction in between, you basically just have to compare the intensity in those two pixels), this means that Legolas could potentially locate a bright object to within on the order of $1.5\ \mathrm{mm}$. If he uses the gleam off a helmet and a stirrup, for instance, he could measure height adequately well and pick out details like "yellow is their hair".
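
Chaining the answer's numbers together (every value here is an order-of-magnitude assumption from the paragraphs above):

```python
import math

photons_in_fov = 1e7  # photons/s landing in one diffraction-limited field of view
integration = 1.0     # s, assumed integration time
A = 1.0               # fractional luminance of the target within that field

signal = A * photons_in_fov * integration
noise = math.sqrt(photons_in_fov * integration)  # photon shot noise, ~3e3
snr = signal / noise

resolution = 1.5         # m, diffraction-limited linear resolution at 24 km
print(resolution / snr)  # ~0.5 mm for A ~ 1, i.e. the millimetre ballpark above
```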

Rex Kerr
4

In the spirit of your question, having two eyes and assuming you can use them as an array (which requires measuring the phase of the light, something eyes don't do) allows you to use the distance between them for $D$ in the resolution equation. I don't know the spacing of an elf's eyes, so I'll use $6\ \mathrm{cm}$ for convenience. With violet light of $\lambda = 430\ \mathrm{nm}$, we get $\theta \approx 1.22\frac {430\cdot 10^{-9}}{0.06}=8.7\cdot 10^{-6}$. At a distance of $24\ \mathrm{km}$, this gives a resolution of $21\ \mathrm{cm}$. You can probably distinguish horsemen, but height estimation is very hard.

The other issue is the curvature of the earth. If the earth's radius is $6400\ \mathrm{km}$, you can draw a right triangle with legs $24$ and $6400$ and discover that the hypotenuse is $6400.045$, so he only needs to be on a $45\ \mathrm{m}$ high hill. Ground haze will be a problem.

1

Here is another possibility which hasn't been mentioned yet. If an object A can be completely hidden behind another object of similar shape B, then B must be larger than A. Conversely, if A passes behind B and remains partly visible the whole time, this is evidence that A is larger than B (or that A is not passing directly behind B; let's ignore that possibility for now).

In Legolas' situation, if the leader has some distinguishable feature (shiny helmet, differently coloured jacket) and Legolas can see some of this colour while the leader passes behind others in his group, then I would conclude that the leader is taller. The resolution is not important in this case. Legolas can tell which object is in front because the number of leader-coloured photons will be reduced, as for a planet passing in front of a distant star.

craq
0

There is also a geometric limitation on seeing that far. I have Q&A'ed it on mathematics.SE. If standing on level ground, Legolas would have been able to see only about 4.8 km due to the planet's curvature (assuming that Middle Earth is on a planet resembling ours). To see that far, he would have had to climb a hill or tree about 50 m high.

M.Herzkamp