Error Range: How Many Image Frames Should I Stack?

American Association of Variable Star Observers (AAVSO)
Fri, 06/14/2013 - 23:04

Hello! I've started to get the mechanics of taking images down consistently. For example, for my time series of the Cepheid ASAS182612, the error range seems to be 0.01 to 0.05 mag with single 60 second images.

Another example: being dimmer, T Pyx has a larger error range, about 0.1 to 0.25 mag, with one data point with an error of about 0.5 mag. These are typically single 60 second images, or stacks of two, through my 8 in LX200 classic with an ST-402ME and BVIc filters.

What type of error range should I strive for with my data? With this information, I can try to estimate the number of images to stack. As I plan my imaging runs, is there a way to estimate the number of images I should obtain, or to estimate the correlation I might expect between the number of images stacked and the error range for my system? I realize that this would depend on the magnitude of the variable being imaged, and that there would be a diminishing benefit in the narrowing of the uncertainty range as more images are stacked. Thank you and best regards.

 

Mike

American Association of Variable Star Observers (AAVSO)
Error Range and Stacking Images

Mike, 

That is one of the $64,000 questions in doing photometry, and the answer is that it depends on what you are doing. If you are making exoplanet light curves, then you are looking for UNCERTAINTY of around 0.002 mag or less in most cases. I emphasize the term uncertainty because, if you are trying to do very accurate absolute photometry, then you have to be concerned about systematic errors in addition to uncertainty (random error). (1)

If you are participating in campaigns such as support of the Hubble observations of cataclysmic variables, 0.1 magnitude uncertainty - about the same as one expects for visual observations - should be OK. Arne has frequently stated that the goal for determining magnitudes of comp stars is error (which includes systematic errors) of less than 0.02. It seems realistic to try for uncertainty of that order of magnitude.

There are a couple of things to consider besides your image depth. If you are doing multi-filter photometry you want the effective time of observation through the various filters to be the same. To accomplish that you take them in pairs - IRVBBVRI is the usual order, I assume because image depth in B is usually the least. That means you are combining data from multiples of two images. 
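Just to illustrate the pairing idea, here is a tiny Python sketch. It assumes, for simplicity, an equal cadence per frame (which of course isn't exactly true when B exposures run longer), and the cadence value is made up. It shows that a palindromic order like IRVBBVRI gives every filter the same mean time of observation.

# Minimal sketch: with a palindromic filter order and an (assumed) equal
# per-frame cadence, the mean observation time is the same in every filter.
order = list("IRVBBVRI")
cadence = 75.0  # seconds per frame including overhead (made-up number)
times = {}
for i, f in enumerate(order):
    times.setdefault(f, []).append(i * cadence)
for f, t in sorted(times.items()):
    print(f, sum(t) / len(t))  # every filter prints the same mean time, 262.5 s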

You write about stacking images. I assume you mean actually aligning and combining the images. If you do that, use the processing option that only moves whole pixels and doesn't re-allocate values among pixels when aligning. Otherwise, alignment can introduce error in the photometry. Unless the images are so faint that your software can't recognize star centroids, averaging photometric measurements and stacking accomplish the same noise reduction. It all depends on whether you prefer working with images or spreadsheets.
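If you want to convince yourself of that equivalence, here is a minimal Python sketch with made-up numbers (the star flux, per-frame noise, frame count, and trial count are all assumptions). It compares the scatter from averaging per-frame magnitudes against the scatter from measuring one stacked frame; both come out close to the single-frame scatter divided by the square root of the number of frames.

# Sketch: averaging per-frame magnitudes vs. measuring one stacked frame.
import numpy as np

rng = np.random.default_rng(1)
n_frames = 4          # frames per stack (assumption)
star_e = 20_000.0     # star electrons per frame (assumption)
noise_e = 300.0       # effective per-frame noise in electrons (assumption)
n_trials = 5_000

avg_mags, stack_mags = [], []
for _ in range(n_trials):
    fluxes = star_e + rng.normal(0.0, noise_e, n_frames)   # measured flux per frame
    avg_mags.append(np.mean(-2.5 * np.log10(fluxes)))      # (a) average the magnitudes
    stack_mags.append(-2.5 * np.log10(fluxes.mean()))      # (b) measure the stack once

print("single-frame scatter  :", 2.5 / np.log(10) * noise_e / star_e)
print("averaged measurements :", np.std(avg_mags))
print("stacked frames        :", np.std(stack_mags))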

Calibration frames remove unwanted signal and fixed-pattern noise, but they also add random noise to your images. The standard deviation of the dark current (the dark current noise) in any pixel is SQRT(D), where D is the total dark current count from the images you average.

You have SNR = N*/SQRT(N* + npix*(1 + npix/nb)*(Ns + Nd + Nr^2)), where

N* = total number of star photons in your measurement aperture
npix = total number of pixels in your measurement aperture
nb = total number of pixels in your sky background annulus
Ns = average sky background electrons per pixel
Nd = average dark current electrons per pixel
Nr = read noise electrons per pixel 
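As a rough sanity check, here is a small Python sketch of that equation, using the same aperture and dark current numbers as the example that follows; the star counts, sky level, annulus size, and read noise plugged in are purely illustrative assumptions, not values for your camera.

# Sketch of the CCD signal-to-noise equation written above.
from math import sqrt

def ccd_snr(n_star, npix, nb, n_sky, n_dark, n_read):
    # SNR = N* / SQRT(N* + npix*(1 + npix/nb)*(Ns + Nd + Nr^2))
    noise_sq = n_star + npix * (1.0 + npix / nb) * (n_sky + n_dark + n_read ** 2)
    return n_star / sqrt(noise_sq)

# Illustrative numbers: 25-pixel aperture, big background annulus,
# 2700e of dark current (15e/s for 180 s), assumed star, sky, and read noise.
snr = ccd_snr(n_star=50_000, npix=25, nb=500, n_sky=1_000, n_dark=2_700, n_read=10)
print(f"SNR ~ {snr:.0f}, i.e. roughly {1.0857 / snr:.3f} mag uncertainty")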

Let's say the measurement aperture has a radius of about 2.8 pixels, which gives npix = 25, and that the number of pixels in the background annulus is more than 10x this amount, so that we can ignore npix/nb. Also, let's assume average dark current is 15e per second per pixel and exposure times are

180 sec for data images and data darks
10 sec for flats and flat darks
0.1 sec for bias frames

Dark currents per frame are then
2700e average for data images and data darks
150e for flats and flat darks
1.5e for bias frames. 

and each dark current count is the square of its uncertainty (noise), since it arises from Poisson statistics.

If you only had 1 frame of each, then the total dark frame noise contribution squared under the square root sign would be the sum of all of the dark currents, 25*(2700+2700+150+150+1.5). The dark current contribution from the calibration frames is larger than from the data image itself. If you average 10 of each calibration frame, then this dark current sum inside the square root calculation becomes 25*(2700+270+15+15+0.15)e. Now the dark current is increased by only about 11%, which results in about a 5% increase (after taking the square root) in the dark current noise. If you increase the number of calibration frames to 20, then the net contribution of calibration frame noise is less than 3% of the data frame's.

Reductions from median combining are not quite as good, but they are close, and by the time you get to 20 images they are essentially indistinguishable - except that with median combining you don't have to worry about weeding out dark frames with inconveniently placed random defects like cosmic ray hits.
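Here is a short Python sketch that just reproduces the arithmetic above (25-pixel aperture; 2700e, 150e, and 1.5e of dark current per data dark, flat/flat dark, and bias frame respectively), so you can watch the extra noise shrink as more calibration frames are averaged.

# Dark-current noise added by calibration frames, relative to the data image alone.
from math import sqrt

NPIX = 25                                  # pixels in the measurement aperture
DATA_DARK = 2700.0                         # e dark current per pixel in a 180 s data image
CAL_DARKS = [2700.0, 150.0, 150.0, 1.5]    # data dark, flat, flat dark, bias (per frame)

def aperture_dark_variance(n_cal):
    # data image plus calibration frames averaged n_cal at a time
    return NPIX * (DATA_DARK + sum(d / n_cal for d in CAL_DARKS))

baseline = NPIX * DATA_DARK                # variance from the data image by itself
for n in (1, 10, 20):
    ratio = aperture_dark_variance(n) / baseline
    print(f"{n:2d} calibration frames: variance +{ratio - 1:6.1%}, noise +{sqrt(ratio) - 1:5.1%}")

With 10 of each it prints roughly +11% in variance and +5% in noise, and with 20 of each a bit under +3% in noise, matching the numbers above.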

You could theoretically just take single dark frames 10 or 20 times as long to accomplish the same dark current noise reduction in your calibration frames. However, practicalities get in the way. Cosmic ray hits are the most obvious argument against a small number of long darks. Another is that some pixels with high dark current may saturate and therefore won't scale properly to the shorter time. It is common, however, to use scaled data darks to calibrate flats rather than taking separate flat darks.
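The scaling itself is nothing more than: subtract the bias, scale the thermal signal by the ratio of exposure times, and add the bias back. A rough Python sketch follows; the array size, bias level, and dark rate are made up, and it assumes the thermal signal really is linear with exposure time (which, as noted, breaks down for saturated hot pixels).

# Sketch: scale a long master dark to a shorter (flat) exposure time.
import numpy as np

def scale_dark(master_dark, master_bias, t_dark, t_target):
    thermal = master_dark - master_bias            # thermal signal only
    return master_bias + thermal * (t_target / t_dark)

rng = np.random.default_rng(0)
bias = np.full((512, 512), 1000.0)                               # assumed bias level
dark_180 = bias + rng.poisson(2700.0, (512, 512)).astype(float)  # 180 s master dark
flat_dark_10 = scale_dark(dark_180, bias, t_dark=180.0, t_target=10.0)
print(flat_dark_10.mean())   # ~1150: bias (1000) plus ~150e of scaled thermal signal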

Of course, the other really important calibration variable is flat frames. Changes in attitude between data frames and flat frames kill you, and that can happen simply by flipping the meridian on an equatorial mount. A non-flat light source for flat frames easily happens with dome flats, and it can happen with a light box that isn't squarely placed on the end of the OTA. A change in focus between data images and flat frames is another killer. I have had a particular problem with that one when making exoplanet transit light curves with an aluminum tube telescope. You don't want a break in the transit light curve while you re-focus, but the run may be 6 hours long with a 15° C temperature variation.

If you are doing high-accuracy absolute photometry, then you have to pay attention to systematic errors as well. They may be within your equipment, such as variations in focus across the FOV, and, of course, transformations. You have to determine your zero point error, and if you are trying for high accuracy you should at least determine the effect of 2nd order extinction to see if it is significant. Often it isn't, unless you are doing all-sky photometry or your comps and target have very different colors. Zero point error is important for absolute photometry; I find it is frequently significantly larger than the intrinsic uncertainty in the differential photometric measurements of the target star.
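As a quick way to judge whether 2nd order extinction matters for a particular target/comp pair, here is a tiny Python sketch. The coefficient is only an assumption of the order often quoted for the B band, and the airmass and color differences are arbitrary; the point is just that the correction scales with the color difference between target and comp.

# Size of the 2nd order extinction term: k'' * X * (color difference).
K2_B = -0.03   # assumed 2nd order extinction coefficient, of the order often quoted for B
X = 1.5        # airmass (arbitrary)
for dcolor in (0.1, 0.5, 1.5):   # (B-V)_target minus (B-V)_comp
    print(f"color difference {dcolor}: correction ~ {abs(K2_B * X * dcolor):.4f} mag")

For a 0.1 mag color difference that is under 0.005 mag, but for very different colors it can climb past 0.05 mag, which is why it usually only matters in the situations mentioned above.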

(1) The term absolute photometry in this context means determining the apparent magnitude of the target (m), as opposed to simply a differential magnitude (delta m) or relative changes in magnitude. It is the same as the WebObs term "standard magnitude." It does not mean determining the Absolute Magnitude (M). The confusion in terminology concerning photometric measurements drives me nuts.

Brad Walter, WBY