18

Suppose I have a source object that is not time varying, to be concrete let's say it's a galaxy. Is there anything additional that can be learned or done with multiple short exposure images of exactly the same field as compared to a single long exposure, given that the total integration time is identical? I'm thinking of things along the lines of noise suppression, background removal, image processing magic...

So far the only thing I can think of is that a long exposure could saturate the detector (I'm thinking CCD here). Short exposures could avoid this, allowing for accurate photometry across the entire image. I've tagged this [astronomy] since that's the area of application I'm most familiar with, but perspectives from other fields are welcome.

Kyle Oman
  • 18,883

6 Answers

23

Stacking is something that is done all the time in infrared astronomy. CCD technology doesn't work for wavelengths in the range of roughly 2 to 10 microns and beyond, so infrared astronomers use detector arrays like Teledyne's HAWAII series instead. Typically, though somewhat less so as time goes on, infrared arrays suffer from defects like stuck pixels, cross-talk noise, etc.

To work around this, users of such arrays take an image, move the telescope by a fraction of the field of view, take another image, and repeat as necessary. This allows them to reject bad pixels and smooth out pattern noise. The other advantages of doing this are that it increases the dynamic range of the stacked image, as you asked about, and allows for the removal of transient signals like asteroids, satellites, and cosmic-ray hits. The LSST will use image pairs to aid in cosmic-ray rejection.
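
As a toy illustration of the combining step, here is a minimal numpy sketch (the function name, the integer-pixel dither offsets, and the precomputed bad-pixel mask are all my own illustrative assumptions; real pipelines resample to sub-pixel accuracy and handle frame edges properly):

```python
import numpy as np

def combine_dithered(frames, offsets, bad_pixel_mask):
    """Shift each dithered frame back by its (dy, dx) offset, drop pixels
    flagged in the bad-pixel mask, and average whatever valid samples land
    on each sky position. Integer-pixel offsets only; np.roll wraps at the
    edges, which a real pipeline would handle with padding instead."""
    total = np.zeros(frames[0].shape, dtype=float)
    count = np.zeros(frames[0].shape, dtype=float)
    good = ~bad_pixel_mask
    for frame, (dy, dx) in zip(frames, offsets):
        total += np.roll(np.where(good, frame, 0.0), (-dy, -dx), axis=(0, 1))
        count += np.roll(good.astype(float), (-dy, -dx), axis=(0, 1))
    return total / np.maximum(count, 1)  # avoid dividing by zero where no data
```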

The downside of this approach is that every readout of the detector adds noise to the signal (known as "read noise"). Because of this, in the read-noise-limited regime your sensitivity grows like the square root of the number of images (roughly, the square root of time) instead of linearly with time. So if you want high dynamic range you're better off not just stacking, but also varying your exposure time: short image(s) for the bright parts of the field and long images for the faint parts, with the rejection of saturated pixels done in software. If you're taking this sort of approach with a CCD, you'll want to rotate the field between the long exposures, because when CCDs saturate they tend to bleed charge along a row.
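
To put numbers on that: a point source delivering $S$ electrons in total, over a sky background of $B$ electrons, read out $N$ times with read noise $\sigma_r$ per read, has variance $S + B + N\sigma_r^2$. A minimal sketch (all rates and the read-noise figure are illustrative, not from any particular instrument):

```python
import numpy as np

def snr(source_rate, sky_rate, read_noise, t_total, n_exposures):
    """Point-source S/N for n_exposures summing to t_total seconds:
    Poisson variance from source + sky, plus read noise added once per readout."""
    signal = source_rate * t_total  # electrons collected from the source
    variance = signal + sky_rate * t_total + n_exposures * read_noise**2
    return signal / np.sqrt(variance)

# One long exposure vs. 10 or 100 stacked shorts with the same total time:
for n in (1, 10, 100):
    print(n, snr(source_rate=0.5, sky_rate=1.0, read_noise=10.0,
                 t_total=3600.0, n_exposures=n))
```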

I don't know how feasible this is in a non-professional setting, but an approach some cameras use is something called a "drift scan" (for example, the SDSS imaging detector). CCDs read out the charge from the pixels by shifting the whole image across the pixel array and reading out when the charge reaches an edge of the chip. If you shift the charge across the chip at the same rate the image of the sky is moving across it, you can continuously scan a strip of the sky.
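
As a rough sketch of the required clocking rate (the function is my own illustration; SDSS's camera is often quoted at about 0.396 arcsec/pixel, but treat that figure as an assumption):

```python
import math

def drift_rows_per_second(plate_scale_arcsec_per_px, dec_deg):
    """Rows per second the charge must be clocked so that it tracks the sky:
    the sky drifts at the sidereal rate, about 15.04 arcsec/s at the
    celestial equator, scaled by cos(declination)."""
    sidereal = 15.04107 * math.cos(math.radians(dec_deg))  # arcsec/s
    return sidereal / plate_scale_arcsec_per_px

# e.g. a 0.396 arcsec/pixel camera scanning on the celestial equator:
print(drift_rows_per_second(0.396, 0.0))  # ~38 rows/s
```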

Sean E. Lake
  • 22,927
16

The voice of bitter experience here, to tell you about a problem that a properly working observatory shouldn't have to worry about, but that I did back when I was working on a "serious" astronomy project.


Stacking medium-length images provides some protection against faulty tracking. In the event of a tracking failure during a single long exposure there is little you can do to recover, but a few tracking failures during a run of five to twenty medium-length frames will still leave you with more than half the data (and a headache insofar as you have to combine sets of frames with different relative pointings, but it can be done).


My experience came in the early 1990s, when putting a cooled CCD behind a 14" scope, sticking the whole thing on a mountain without human oversight, and letting schoolkids submit observing requests over the internet (by FTP, because this was before the web) was a really cool new idea. But for cost and alignment reasons the project was using software tracking, written in Visual Basic. We could get 30-second runs all the time, five-minute runs with some regularity, and essentially never more than twenty minutes without a tracking fault.

But by stacking ten or so three-minute runs we were able to image down to the nineteenth magnitude, even with the box on top of the physics building for testing. And we actually tracked a MACHO micro-lensing event light curve and matched the big boys, which was my first "real" science experience.

15

If your exposures are short enough (a fraction of a second), you can even combat turbulence in the atmosphere. The trick is to take very many short exposures, then pick the ones in which a (bright) point source is sharpest and stack only those.

The technique is called Lucky Imaging and can deliver images as sharp as the Hubble Space Telescope's from ground-based instruments.
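
A caricature of the selection step, assuming a stack of already-registered frames each containing one bright reference star (the peak-pixel sharpness proxy and the 10% cut are my own illustrative choices; real pipelines also re-centre each frame before adding):

```python
import numpy as np

def lucky_stack(frames, keep_fraction=0.1):
    """Rank frames by their peak pixel value (a crude Strehl proxy: sharper
    seeing concentrates more of the star's light into its brightest pixel),
    keep the best fraction, and average them."""
    frames = np.asarray(frames, dtype=float)
    peaks = frames.reshape(len(frames), -1).max(axis=1)
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = np.argsort(peaks)[-n_keep:]
    return frames[best].mean(axis=0)
```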

As an aside, your question could be turned around: what should my criterion be for when not to stack images? The advantages of stacking, in terms of bad-pixel rejection, cosmic-ray removal and dynamic range, are so great. For optical CCD images, the break-even point is normally when the readout noise becomes a negligible contributor to the signal-to-noise ratio of whatever you are trying to measure. Another consideration can be how long it takes to read out the CCD, which results in "dead time".
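
That break-even criterion is easy to turn into a rule of thumb for the minimum sub-exposure length (the factor of 10 in variance is an arbitrary illustrative threshold):

```python
def min_subexposure(sky_rate, read_noise, factor=10.0):
    """Shortest sub-exposure (seconds) for which accumulated sky noise
    dominates read noise by `factor` in variance terms:
    sky_rate * t >= factor * read_noise**2."""
    return factor * read_noise**2 / sky_rate

# e.g. 10 e- read noise against a 2 e-/pixel/s sky background:
print(min_subexposure(sky_rate=2.0, read_noise=10.0))  # 500 s
```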

Lucky Imaging relies on special electron-multiplying CCDs that can be read out very rapidly with modest readout noise, at the expense of a dispersion in the gain (number of output electrons per input photon). Most other astronomical CCDs minimise readout noise at the expense of readout times of tens of seconds, but are highly linear.

ProfRob
  • 141,325
10

I haven't done this for astronomy, but have used an astronomy CCD down a microscope for electroluminescence and have also used cooled imaging CCDs for spectroscopy. Although I have often set the experiment up to measure potential time-variation, there are quite a lot of advantages even for an unchanging source.

  • Dynamic range: If your CCD has 12 bits, you can represent $2^{12}$ different values. That's quite a lot, but if you are interested in points that are $2^{10}\approx 1000$ times brighter than other points of interest, the dimmer ones will have only about 4 counts when the brighter ones saturate. By adding frames in software you have essentially unlimited dynamic range. This is often necessary in spectroscopy, where pixels are summed on-chip to reduce readout noise, making saturation very likely.
  • Charge bleeding: a very full pixel can leak charge into neighbouring pixels. It doesn't happen fast unless it's saturated but it still happens.
  • Cosmic rays and other random events: They're often called cosmic rays, but some also come from nuclear decay. These are bright spots or tracks on single images. Multiple images allow you to deal with these outliers (there are a few techniques, but for a simple example you could exclude the bottom n% and top n% of values when averaging/summing, on a per-pixel basis; see the sketch after this list).
  • Experimental issues: you can re-register images to deal with your instrument not being as stable as you'd like (if you have reference features), or you can discard images if a stray light source affected a subset of frames (someone turned the room lights on, an aeroplane flew past your telescope, etc.). This is also relevant if your sample died or your telescope tracked until there was an obstruction or light source in the field of view.
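
For the outlier-rejection idea mentioned in the cosmic-ray bullet, a per-pixel trimmed mean is only a few lines of numpy (a minimal sketch; frames are assumed already registered, and the 10% trim is an arbitrary illustrative default):

```python
import numpy as np

def trimmed_stack(frames, trim=0.1):
    """Per-pixel trimmed mean of a stack of registered frames: sort each
    pixel's values across the frames and discard the lowest and highest
    `trim` fraction before averaging, which rejects cosmic rays and other
    single-frame outliers."""
    stack = np.sort(np.asarray(frames, dtype=float), axis=0)
    cut = min(int(trim * len(stack)), (len(stack) - 1) // 2)  # keep >= 1 frame
    return stack[cut:len(stack) - cut].mean(axis=0)
```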
Chris H
  • 804
7

I don't know about astronomy, but one reason it can be useful in normal photography is to combat camera shake by auto-aligning the images before blending them. This can be useful if you want to take a really long exposure but there isn't a stable place to set up a tripod. The image alignment can be done in Photoshop or using a utility called align_image_stack. With patience it should even be possible to take hand-held long exposure shots this way, though I admit I haven't tried that.
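
For rough integer-pixel alignment, phase correlation is a compact pure-numpy option (a minimal sketch of the general idea, not what Photoshop or align_image_stack actually do internally; subpixel work is better left to something like scikit-image's phase_cross_correlation):

```python
import numpy as np

def align_to(reference, image):
    """Align `image` to `reference` by phase correlation: the peak of the
    inverse-FFT'd, normalised cross-power spectrum gives the shift between
    the two frames, which np.roll then undoes (integer pixels only)."""
    cross = np.fft.fft2(reference) * np.conj(np.fft.fft2(image))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return np.roll(image, (dy, dx), axis=(0, 1))
```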

N. Virgo
  • 35,274
0

A significant increase in dynamic range can be achieved by taking exposures with different exposure settings. Stacking also averages down the effects of wind, vibration, periodic tracking errors, and so on, and it can be combined with all the other techniques mentioned here. The same trick is used in spectrum analyzers under the name of "video averaging", and it can also be used in radio astronomy. I wonder whether it could have been adapted to address reciprocity failure in film?
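
A minimal sketch of the merge (the 60,000 e- full well and the 90% saturation cutoff are illustrative assumptions): convert each exposure to a count rate, mask pixels near saturation, and average the surviving rates per pixel.

```python
import numpy as np

def hdr_combine(frames, exp_times, full_well=60000.0):
    """Merge exposures of different lengths: divide each frame by its
    exposure time to get a rate, ignore pixels near saturation, and
    average the valid rates per pixel."""
    frames = np.asarray(frames, dtype=float)
    rates = frames / np.asarray(exp_times, dtype=float)[:, None, None]
    valid = frames < 0.9 * full_well  # mask nearly-saturated pixels
    return (rates * valid).sum(axis=0) / np.maximum(valid.sum(axis=0), 1)
```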

barry
  • 312