16

All the explanations on the internet about pinhole cameras show two rays from outside passing through a small hole and producing an inverted image inside. Why doesn't it work with a big hole, or basically with no hole at all? Why don't we see images of the outside world through a door or window? The rays still pass through, and even better, they don't have to invert.

With big holes, it's as if the light just illuminates the space with no colors (or just white?), but with small holes, why are there colors from the outside?

arjunsiva
  • 271

6 Answers

34

The pinhole projects a distinct image of itself onto the screen for each point on the subject. The image of the subject is composed of all of the overlapping images of the pinhole that are projected from the different points on the subject.

Diagram showing rays from a single point, through a pinhole

If the pinhole is circular, then each of those overlapping images is a solid disk of light. Photographers call those overlapping disks circles of confusion.* The "confusion" is the photographer's way of saying that the overlapping disks defocus the image.

It should be obvious that by making the pinhole larger, you increase the size of the circles—you add "confusion," and the image of the subject will be less sharp.
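
To put rough numbers on this, here is a hedged sketch of the similar-triangles geometry (not part of the original answer; the function name and distances are made up for illustration). A point at distance s_o in front of a pinhole of diameter d casts a disk of diameter d · (s_o + s_i) / s_o onto a screen a distance s_i behind the hole, so the blur disk grows in direct proportion to the hole size:

# A minimal sketch, assuming a point source a distance s_object in front of the
# hole and a screen a distance s_screen behind it (all lengths in metres).
def blur_disk_diameter(d_pinhole, s_object, s_screen):
    """Diameter of the disk of light (circle of confusion) cast by one object point."""
    return d_pinhole * (s_object + s_screen) / s_object

# Hypothetical example: subject 2 m away, screen 10 cm behind the hole.
for d in (0.5e-3, 2e-3, 10e-3):  # pinhole diameters: 0.5 mm, 2 mm, 10 mm
    blur = blur_disk_diameter(d, s_object=2.0, s_screen=0.10)
    print(f"pinhole {d * 1e3:4.1f} mm -> blur disk {blur * 1e3:5.2f} mm")

For a subject a couple of metres away the disk is essentially the size of the hole itself, which is why a sub-millimetre pinhole gives a usable image while a window-sized "hole" smears every point across the whole wall.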


* When a lens is used, and when the subject is in sharp focus, and when there are bright points of light far enough in the background to be substantially out of focus, then photographers call the bright circles, "bokeh," and they are considered to be a desirable, artistic effect.

Solomon Slow
  • 17,057
32

With big holes it still works; the image quality just degrades quickly. Under some simplifying assumptions (ray optics and incoherent light, if you are interested), the image formed by a pinhole camera can be calculated as a convolution of the object with the pinhole. This might sound abstract, but it just means that the image is a smeared-out version of the object, and the amount of smearing is determined by the size of the aperture. That's what the image below shows: the 'object' is a bunch of point sources (small LEDs) and the letter T in the center. In the third row you can see that the shape of the aperture can be recognized in the image, because of how convolutions work.

Image showing what a convolution looks like for 3 different apertures

Here is the code if you want to play around:

# %%
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal

# %%
n = 100
n_LEDs = 8
x = np.arange(-n // 2, n // 2)
y = np.arange(-n // 2, n // 2)

X, Y = np.meshgrid(x, y)

# Start with the letter T
obj = ((np.abs(X) < 2) & (np.abs(Y) < 5)) + ((np.abs(Y + 5) < 2) & (np.abs(X) < 5))
obj = obj.astype(int)

# Add some LEDs (point sources)
for l in range(n_LEDs):
    i, j = np.random.randint(int(n * 0.1), int(n * 0.9), size=2)
    obj[i, j] = 3

# Define the point spread functions (which happen to be the same as the apertures
# for incoherent light in the ray optics regime)
PSF_smallpinhole = (X == 0) & (Y == 0)
PSF_largepinhole = X**2 + Y**2 <= 5**2
PSF_triangle = (
    ((np.cos(np.pi / 6) * X + np.sin(np.pi / 6) * Y) >= -5)
    & ((-np.cos(np.pi / 6) * X + np.sin(np.pi / 6) * Y) >= -5)
    & (-Y >= -5)
)
PSF_list = [PSF_smallpinhole, PSF_largepinhole, PSF_triangle]

fig, axes = plt.subplots(ncols=3, nrows=3, figsize=(6, 8), dpi=300, layout="constrained")

for PSF, axrow in zip(PSF_list, axes):
    # Create the image by convolving object and PSF
    image = scipy.signal.convolve(PSF, obj, mode="same")

    axrow[0].imshow(obj)
    axrow[0].set_title("Object (LEDs)")
    axrow[0].axis("off")

    axrow[1].imshow(PSF, cmap="gray")
    axrow[1].set_title("Aperture (pinhole)")
    axrow[1].axis("off")

    axrow[2].imshow(image)
    axrow[2].set_title("Image")
    axrow[2].axis("off")

plt.show()

6

The problem with those diagrams is that they only show the rays which "succeed".

However, light rays don't "know" to pass through the hole. For most objects (excluding things like mirrors and lenses which have their own complications), the light rays spread out in all directions.

No Hole

What's creating the image is not the pinhole itself, but rather the barrier the pinhole is cut into, which blocks all the light rays that aren't part of the image.

With a pinhole

Note that all of the light rays in the second picture are also present in the first, but in the first you can't make out a clear separation of the red, green and blue objects: the overlapping light rays are "smeared out" across the screen into a uniform white image.

If you have a larger hole, you still block some of the light rays, but you allow more of them to pass:

Big Hole

You still get some separation -- the light from the red object doesn't overlap with the blue one -- but each point on the objects being imaged gets "smeared out" over a larger section of the screen. The larger the hole, the larger the area things get spread across, and the fuzzier the image becomes. With a large enough hole, the light spreads across most of the screen and the image is an indistinguishable blob of white.
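
If you want to see this "blocking" picture numerically, here is a rough 1D ray-sampling sketch (my own illustration, not part of the answer; the positions, slit widths and ray counts are arbitrary). Three source points emit rays in random directions, a wall at x = 0 with a slit of width `hole` blocks everything that misses the opening, and we record where the surviving rays land on a screen behind the wall:

import numpy as np

rng = np.random.default_rng(0)
object_x, screen_x = -1.0, 0.5                              # object plane and screen positions (metres)
object_points = {"red": 0.3, "green": 0.0, "blue": -0.3}    # heights of the three sources

def screen_hits(source_y, hole, n_rays=200_000):
    """Heights where rays from one source land, keeping only rays that clear the slit."""
    angles = rng.uniform(-np.pi / 3, np.pi / 3, n_rays)      # rays emitted in many directions
    slope = np.tan(angles)
    y_at_wall = source_y + slope * (0.0 - object_x)          # height on reaching the wall
    passes = np.abs(y_at_wall) < hole / 2                    # blocked unless inside the slit
    return source_y + slope[passes] * (screen_x - object_x)  # height where they hit the screen

for hole in (0.01, 0.20):                                    # a 1 cm slit vs a 20 cm slit
    print(f"slit width {hole * 100:.0f} cm:")
    for colour, y0 in object_points.items():
        hits = screen_hits(y0, hole)
        print(f"  {colour:5s} lands between {hits.min():+.3f} and {hits.max():+.3f} m")

With the narrow slit each colour lands in its own tight (and inverted) band; with the wide slit the bands spread out and start to overlap, which is exactly the loss of separation described above.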

R.M.
  • 403
3

A pinhole camera works on the concept of geometric projection, i.e. it lets essentially a single ray from each point of the scene through, so each point maps to a single sharp point in the image (strictly speaking a ray has no thickness, but you can think of the hole as being extremely small). Think of sunlight coming through a small hole in a curtain: you see a sharp image, but when that hole becomes big, the image becomes fuzzy. The same thing happens here.

3

One intuition: if a pinhole camera works with a small hole, we can think of a large hole as a big collection of small holes. Granted, these small holes don't have any physical boundary around them, but most explanations of a pinhole camera focus more on the light going through than on the light being stopped, so in fact the same principle applies: all the light going through any one of these imaginary small holes will form a nice pinhole projection image.

Except, because we have an enormous number of these small holes right next to each other, all of their projected images overlap! So instead of having crisp images, the edges get blurry. Once it's blurry enough, the projections stop looking like distinct parts of an image and just look like uniform light.

In between, with a hole that's small but not small enough, you can see fun little aperture patterns like the ones in @AccidentalTaylorExpansion's answer.
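
To make the "collection of small holes" picture concrete, here is a small sketch (my own, with made-up array names, assuming the sharp single-pinhole image is already given as a 2D array): the large-hole image is just the sharp image shifted once for every small-hole position inside the aperture and summed, which is the same convolution that @AccidentalTaylorExpansion's answer computes, written out as a sum of shifts.

import numpy as np

def big_hole_image(sharp_image, aperture_mask):
    """Sum shifted copies of the sharp pinhole image, one copy per open aperture pixel."""
    blurred = np.zeros(sharp_image.shape, dtype=float)
    cy, cx = aperture_mask.shape[0] // 2, aperture_mask.shape[1] // 2
    for r, c in zip(*np.nonzero(aperture_mask)):
        blurred += np.roll(sharp_image, shift=(r - cy, c - cx), axis=(0, 1))
    return blurred

# Tiny example: a single bright point seen through a 3x3 square "hole"
sharp = np.zeros((7, 7)); sharp[3, 3] = 1.0
aperture = np.ones((3, 3), dtype=bool)
print(big_hole_image(sharp, aperture))   # the point is smeared into a 3x3 patch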

Sam Jaques
  • 1,397
2

Most diagrams of optics only show the light that helps make a picture. In the diagram, you see a ray of light bouncing off a person's head, through the pinhole, and onto the detector, and maybe another ray of light coming from the person's feet. But you shouldn't forget that actually there is always light going all over the place. There are rays of light bouncing off your head in all directions, and same for your feet and every other part of you. If you just take out a piece of film or a CCD, rays of light from your head will hit every part of the film or CCD, and same for your feet.

What the pinhole does is throw away all the rays except for the ones that help you make an image. With the pinhole, the only rays from the top of your head that make it to the detector are the ones that pass through the pinhole and hit a spot near the bottom. And the only rays from your feet that hit the detector are the ones that pass through the pinhole and hit the detector higher up. If you make the hole bigger, more rays from your head make it through and hit a wider patch of the detector, and the image gets fuzzier.
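
As a hedged back-of-the-envelope version of this (not from the answer; the distances and hole sizes are made-up examples): the two rays from an object point that graze the edges of the hole bound the patch of detector that point can illuminate, so you can compute how the patch widens as the hole grows.

# A sketch of where rays from one object point can land on the detector, assuming
# the point sits at height y0, a distance s_o in front of the hole, with the
# detector s_i behind it (all example numbers, in metres).
def detector_patch(y0, d_hole, s_o, s_i):
    """Interval of detector heights reachable from one object point."""
    centre = -y0 * s_i / s_o                       # the inverted image point
    half_width = (d_hole / 2) * (s_o + s_i) / s_o  # set by the rays grazing the hole's edges
    return centre - half_width, centre + half_width

s_o, s_i = 2.0, 0.1                                # person 2 m away, detector 10 cm behind
for d in (1e-3, 50e-3):                            # a 1 mm pinhole vs a 5 cm hole
    head = detector_patch(+0.9, d, s_o, s_i)       # top of the head, ~0.9 m above the axis
    feet = detector_patch(-0.9, d, s_o, s_i)
    print(f"hole {d * 1e3:4.0f} mm: head lights up {head[0] * 100:+.1f}..{head[1] * 100:+.1f} cm, "
          f"feet light up {feet[0] * 100:+.1f}..{feet[1] * 100:+.1f} cm")

With the 1 mm hole the head and feet each light up a roughly millimetre-wide strip (low and high respectively, i.e. inverted); with the 5 cm hole each strip is about 5 cm wide, so the patches from neighbouring points blur into one another.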

Mark Foskey
  • 3,965