Hello! When I image bright objects, I try to obtain enough images so that the total exposure time is 10 seconds and the images are within my camera's linear range.
At times, a good exposure time is a bit awkward. For example, my X HER images in V filter currently need somewhere between 1 and 2 seconds. At 2 seconds, a number of images are saturated because of scintillation (so estimating the total number of images needed to reach 10 seconds can be difficult), while 1-second exposures can lead to images with high errors since the comps may be underexposed. Additionally, PinPoint may have trouble plate solving the field in VPHOT.
So, I'll often use both integration times in a run and discard the 2 second images that are saturated. Up to now, I've only been using the 1-second or the 2-second sequence (when I have enough raw images) rather than stacking all images.
Is there any problem with the photometry if I were to stack both the 1-second and 2-second images?
I would think the exposure in the FITS header of the final image might be a bit funky, but I'm not sure if that matters for the photometry. Best regards.
Mike
Good question, Mike. I have no idea and would also be interested in the answer.
Ed
Mike,
"Is there any problem with photometry when if I were to stack both the 1-second and 2-second images?"
Have you actually tried doing this? I would expect photometry software would not permit it. For this to work, the exposure time shown in the header would have to be adjusted to compensate for the different exposures in the subframes. I'm not aware of any stacking software that does this. Perhaps some astrophotography software can do it, but I wouldn't trust that for photometry.
To calculate the flux (see page 47 of the Photometry Guide), the photometry software divides the averaged or median counts in the measuring aperture of the stacked image by the exposure time shown in the header (which comes from the camera software). If the header says 1 second, but some of the sub-exposures were actually 2 seconds, the results would be fubar.
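To make that concrete, here's a rough sketch in Python with made-up numbers (the counts and exposure times are hypothetical, not taken from any particular package) showing how a header exposure that doesn't match the actual subs skews the counts-per-second figure:

```python
# Hypothetical numbers only: how a wrong header exposure skews the per-second flux.
counts_in_aperture = 50000.0   # summed ADU in the measuring aperture of the stack
header_exptime = 1.0           # seconds, as written into the stacked header
true_mean_exptime = 1.5        # seconds, if half the subs were really 2 s

flux_reported = counts_in_aperture / header_exptime      # what the software computes
flux_actual = counts_in_aperture / true_mean_exptime     # what it should have used

print(flux_reported, flux_actual)   # 50000.0 vs ~33333.3: reported flux is 50% high
```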
With a CCD, why not try 1.5 second exposures? If you're using a CMOS camera you could just stack more and more 1 or 1.5 second exposures until you get the SNR you want in the comps.
Phil
hmmm...but the photometry software most people here use generally does not try to compute absolute flux in proper units of energy per unit area per unit time; it just computes *relative* flux for the target and comparison stars via differential photometry, and for that alone the exact exposure time is not important for the resulting magnitude value.
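To illustrate with made-up numbers (this is just the standard differential formula, nothing software-specific), the exposure time divides out of the count ratio:

```python
import math

# Made-up counts; the point is that the exposure time divides out of the ratio.
exptime = 1.0              # whatever the header claims, applied to both stars alike
counts_target = 80000.0    # ADU in the target aperture
counts_comp = 50000.0      # ADU in the comparison-star aperture
mag_comp = 9.50            # catalog magnitude of the comp

delta_mag = -2.5 * math.log10((counts_target / exptime) / (counts_comp / exptime))
mag_target = mag_comp + delta_mag   # exptime cancels; only the count ratio matters
print(round(mag_target, 3))         # about 8.99 for these numbers
```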
I think there are complications, though:
First, what exactly do we mean by "stacking"? If we are *summing* (calibrated) images, then summing 10 x 1 s + 5 x 2 s should in principle (except for random noise effects) give the same ADU count as any other *sum* of images totaling 20 s. And ideally your stacking software will sum the individual exposure times, use a data type that avoids overflow, and keep the gain metadata from the original exposures, so that the metadata are all correct and fine ... if the stacking software works this way.
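A rough sketch of sum-stacking done this way (the file names and keyword handling are placeholders, using astropy as an example, not any particular stacking tool):

```python
import numpy as np
from astropy.io import fits

# File names and keyword handling here are placeholders, not any specific tool.
files = ["calib_1s_01.fits", "calib_1s_02.fits", "calib_2s_01.fits"]

stack = None
total_exptime = 0.0
for name in files:
    with fits.open(name) as hdul:
        data = hdul[0].data.astype(np.float64)   # float accumulator avoids overflow
        total_exptime += float(hdul[0].header["EXPTIME"])
        stack = data if stack is None else stack + data

out = fits.PrimaryHDU(stack)
out.header["EXPTIME"] = total_exptime            # 1 + 1 + 2 = 4 s for these subs
out.writeto("sum_stack.fits", overwrite=True)
```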
The other, and perhaps more commonly used, method is average stacking. Now if you *average* images of different exposure times, and the software does this naively by simply adding all the images together and dividing the pixel values by the number of frames, you make a 'mistake' by giving unequal weight to photons from the different frames. Still, this does not per se skew the magnitude value unless the star changed significantly during the measurement, because it affects target and comparison stars alike.
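Again with made-up numbers, here is the difference between the two recipes for a constant source:

```python
import numpy as np

# A constant source delivering 1000 ADU/s (made up), two 1 s subs and one 2 s sub.
rate = 1000.0
exptimes = np.array([1.0, 1.0, 2.0])
counts = rate * exptimes                 # [1000, 1000, 2000] ADU per sub

sum_stack = counts.sum()                 # 4000 ADU over 4 s -> 1000 ADU/s, photon-fair
naive_avg = counts.mean()                # 1333.3 ADU = rate * mean(exptimes)

# The naive average only gives the right ADU/s if the header EXPTIME happens to be
# the *mean* sub-exposure (4/3 s here); but whatever the skew, it hits target and
# comparison stars by the same factor, so the differential magnitude survives.
print(sum_stack / exptimes.sum(), naive_avg / exptimes.mean())   # both 1000.0
```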
But the metadata in the stacked image might still be important in other ways than computing the magnitude value itself. First, there is the timestamp. That may not be significant if we are talking about a few seconds of exposure for (say) a long period variable, but some applications do need high-precision timing, and if we are talking about combining images with different exposure times over longer timespans, this gets interesting. So will the computed mid-point of the stacked image, given its metadata, match the actual mid-time of the observation derived from all the single images? You'll need to check, with or without different exposure times.
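Here's a toy calculation (with invented start times) of how far the naive midpoint can drift from the exposure-weighted one:

```python
# Invented start times (seconds from an arbitrary zero) and exposures per sub.
subs = [(0.0, 1.0), (10.0, 1.0), (20.0, 2.0), (30.0, 2.0)]

mid_times = [start + exp / 2.0 for start, exp in subs]
weights = [exp for _, exp in subs]
weighted_mid = sum(w * t for w, t in zip(weights, mid_times)) / sum(weights)

naive_mid = (subs[0][0] + subs[-1][0] + subs[-1][1]) / 2.0   # halfway, first start to last end
print(weighted_mid, naive_mid)   # about 19.2 vs 16.0 for these made-up times
```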
The other thing is that photometry software might try to estimate the signal-to-noise ratio for the stars in your image, and it might try to do so taking every noise source into account. For example, your photometry software might ask you to specify the dark current (which scales with exposure time) and read-out noise, and it might try to infer the gain from the metadata or from your input, so that it can compute the shot noise as well and combine all the noise terms in units of electrons. But are all the metadata fields for the stack computed in a consistent way so that the SNR calculation makes sense? This gets slightly non-trivial for average stacking. Until you've checked this, I would not trust the SNR estimates from your photometry software on a stacked image.
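For reference, here is a sketch of the textbook CCD noise model (all values invented) that such software typically implements; every input has to describe the stacked file consistently for the result to mean anything:

```python
import math

# All values invented; the inputs must consistently describe the stacked file.
gain = 1.5                # e-/ADU
star_adu = 30000.0        # background-subtracted ADU in the aperture (summed stack)
sky_adu_pix_sub = 50.0    # sky ADU per pixel per sub-exposure
dark_e_pix_s = 0.1        # dark current, e-/pixel/s
read_noise_e = 8.0        # e- RMS per read
exptime_sub = 2.0         # seconds per sub-exposure
npix = 80                 # pixels inside the measuring aperture
nsubs = 10                # sub-exposures that went into the stack

signal_e = star_adu * gain
var_e = (signal_e                                        # shot noise on the star
         + npix * nsubs * (sky_adu_pix_sub * gain        # sky shot noise
                           + dark_e_pix_s * exptime_sub  # dark shot noise
                           + read_noise_e ** 2))         # read noise, once per sub
print("SNR ~", round(signal_e / math.sqrt(var_e), 1))
```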
Sorry for the rant but this is complicated stuff when you go to the details.
CS
HB
Experiment:
Last night I took a series of six images of the same field. The first three images had 15-second exposures; the last three had 30-second exposures. I stacked all six using VPhot. VPhot did not object.
The stacked image has a timestamp (in the available images list and in the header) halfway between the first and last image times. This, of course, is not quite accurate since the first three images took 45 s in total and the last three 90 s. As Bikeman pointed out, this would not be an issue for slowly varying stars.
The exposure time in the stacked image header and the available images list is shown as 30s.
I agree that for differential photometry stacking images with different exposures can work, but I don't see any advantages in doing this.
Phil
Hello! Thank you for your note and trial.
The only time I consider doing this is when I'm imaging an LPV that is very bright, typically in I filter. I want to get a total of 10 seconds of integration time in order to minimize the effects of scintillation.
Recently, I have been trying to image X HER. I've been able to image in I filter down to 0.2 seconds, stacking 50 images or so.
My integration time in V is a bit unwieldy - typically between 2 and 3 seconds at present. At times, seeing is excellent and focus is spot on, so 3-second integrations leave a number of the images saturated. I've taken both 2-second and 3-second images during the run to try to estimate which integration time to use. So I'll have a number of 2-second images that are not saturated, but the error goes up and the final stack may have plate-solving problems since the other stars in the field may be too dim, especially if focus is off a bit. I may also have several 3-second integrations, but not enough to reach the 10-second mark. As a result, I've been wondering whether I can stack the 2-second and 3-second images together to improve the quality of the final image for photometry and allow it to be plate solved. Otherwise, I may not have any usable image from that run.
I've had this problem with I filtered images as well. However, at 0.2 seconds, plate solving becomes difficult since the other stars in the field are very faint.
As X HER dims, this won't be an issue since I'll be able to use a single integration time in both V and I filters. Best regards.
Mike
You could try defocussing, or using a mask over the front of the telescope with a central circular hole.
Not sure if images from either of these would defeat plate solving.
Roy
True. But I run my set up unattended, so I have to adjust image integration time rather than aperture in these situations. Best regards.
Mike
Yes, that would constrain your options.
Roy