I'm working my way through how transforms work, and I've been trying to calculate the transform coefficients using the Excel spreadsheet I downloaded from the site. I'm using a DSLR, by the way.
I took about 100 images of M67 over the space of about an hour. M67 is not ideal because of the field of view, but I'm not too worried about that at the moment because I'm just trying to understand how it all works. My idea was to calibrate and stack all the images in order to improve the SNR. However, the spreadsheet allows for a series of images and then takes the average of the magnitudes for use in the calculations. My idea instead was that there would be one magnitude, coming from a single stacked image.
I can now see that I could have stacked the images in sets of 10 and got 10 magnitudes for the series. Would that be better than just stacking all 100 images for the purpose of calculating the transform coefficients?
Since I'm only looking at fixed stars it didn't seem to matter that the images were taken over a period of time.
Also someone told me that it doesn't make sense to take averages of magnitudes since they are logarithmic. Is that right?
Thanks
Steve
Ok, why not start with the fundamental question first:
> Also someone told me that it doesn't make sense to take averages of magnitudes since they are logarithmic. Is that right?
There is a lot of truth to that statement, but one also has to consider how relevant it is in real-life situations (spoiler: in almost all cases it isn't).
It is true: photometric magnitude values can be thought of as a logarithmic measure of the photon count per unit time per unit aperture area. So averaging the magnitude values will give you a different result from computing the magnitude after averaging the photon counts (the latter is what we do when we "stack" or co-add images first). Which one is "more correct"? I guess most people would agree that for a photometric measurement with an ideal sensor (no sensor noise, infinitely fast shutter), it should make no difference whether you take a single (say) 3-minute exposure, or three 1-minute exposures one after another and combine the three measurement results, because you would have literally seen the same photons. In that sense, the correct way to get a magnitude that returns the same result in both scenarios (one measurement vs. combining the three individual measurements) is to average the photon counts, not to average the magnitudes. That is what happens when you stack your images, so stacking is "more correct".
But does it matter at all? It turns out not to matter for any practical purpose if the magnitude values we average differ only by small amounts, as you would expect when the differences are on the order of our measurement errors, i.e. in the range of (say) up to 0.1 but most of the time closer to 0.01.
Here is a little example calculation with somewhat realistic number ranges to demonstrate the difference:
Let's say we have pixel values that vary as you would expect for photon counts:
count =
19737 19884 20047 19974 20066 20043 20236 20183 19931 19782
We can convert these to magnitudes, with an arbitrary zeropoint Z for this example, like this:
m = -2.5 log10(count) + Z
so for Z = 20, we get:
m =
9.26179715 9.25374061 9.24487652 9.24883739 9.24384798
9.24509318 9.23468832 9.23753570 9.25117728 9.25932451
Let's take the average (arithmetic mean) of our magnitudes; this is
mean(m) = 9.24809186412488
(with a standard deviation of ca 0.01, not uncommon for measurement uncertainties in amateur CCD photometry)
Ok, let's do it the other way round and take the mean of the counts first:
mean(count) = 19988.3
and now convert that to a magnitude:
-2.5 log10(mean(count)) + 20 = 9.24806035237535
So yes, the values are different, but the difference in this toy example is just 0.0000315... and since you would round reported magnitude values to 2 or 3 decimal places anyway, it makes no difference whatsoever to the reported value in this case.
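(If you want to reproduce this toy example, here is a minimal Python/numpy sketch; the counts and the zeropoint are exactly the ones used above:)

import numpy as np

# photon counts and zeropoint from the example above
count = np.array([19737, 19884, 20047, 19974, 20066,
                  20043, 20236, 20183, 19931, 19782])
Z = 20.0

m = -2.5 * np.log10(count) + Z                 # magnitude from each image
print(np.mean(m))                              # average of magnitudes: ~9.2480919
print(np.std(m, ddof=1))                       # scatter of the magnitudes: ~0.01
print(-2.5 * np.log10(np.mean(count)) + Z)     # "stacked" value: ~9.2480604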
So we average magnitudes etc. all the time, knowing that this is, strictly speaking, not the 100% correct way to do things, but also knowing that in the end it doesn't matter (provided we are averaging values that are not vastly different).
The beauty of breaking up your observation into several shorter observations is of course that it gives you an idea of your statistical measurement error. Averaging the magnitudes of the individual measurements (and taking the standard deviation to get an idea of said error) is then just a shortcut, compared to converting back to physical units proportional to photon count per unit time per unit area, averaging those, and converting the result back to a magnitude.
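A minimal sketch of that shortcut in Python/numpy (the sub-stack magnitudes here are made up for illustration):

import numpy as np

# hypothetical: the same star measured on 10 sub-stacks of 10 images each
m_sub = np.array([9.252, 9.248, 9.251, 9.244, 9.247,
                  9.250, 9.246, 9.253, 9.245, 9.249])

best = np.mean(m_sub)                  # the magnitude you would report
scatter = np.std(m_sub, ddof=1)        # scatter of one sub-stack measurement
err = scatter / np.sqrt(len(m_sub))    # standard error of the reported mean
print(best, scatter, err)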
Hope this rant helps a bit to qualify the "it's wrong to average magnitudes" statement.
HB
P.S.:
Bonus: an example more representative of the uncertainties in visual observations:
r = 210 204 198 212 231 221 213 189 186 199
mean(r) = 206.3, so:
-2.5 log10(mean(r)) + 20 = 14.2137519300671
m = -2.5 log10(r) + 20
m =
14.1944517631652 14.2259245814353 14.2583370243462 14.1841603476781 14.0909700502696
14.1390193157872 14.1790509914032 14.3088454895669 14.3262176394552 14.2528673089757
mean(m) = 14.2159844512083
(standard deviation close to 0.1).
Again, no difference that is significant compared to the precision with which you would report the magnitude.
Thanks Bikeman, excellent answer.
There was a reason I decided to stack all 100 images, which is to do with the SNR. Suppose I have 64 images. I can measure the magnitude of a particular star on each image and then average the magnitudes. Alternatively, I can stack all 64 images and measure the magnitude of the same star. I believe that the two methods are equivalent; I've done this and I get the same magnitude. However, there is one thing I'm not sure about. The guidelines say that the SNR should be greater than 100. It could be that in each individual image the SNR of that star is less than 100. Let's say it's 50, so that's too small. If I stack the 64 images, the SNR of the same star should be sqrt(64) = 8 times greater, i.e. 400, which is above 100, so that should be OK. In those circumstances, are the two methods still equivalent?
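Here is a quick Python simulation sketch of that sqrt(N) argument (the flux value is made up, and it assumes pure Poisson photon noise with no read noise):

import numpy as np

rng = np.random.default_rng(1)
flux = 2500.0   # hypothetical counts per image; Poisson noise => SNR = sqrt(2500) = 50
N = 64

single = rng.poisson(flux, size=100_000)                    # many single-image measurements
stacked = rng.poisson(flux, size=(100_000, N)).sum(axis=1)  # many 64-image stacks

print(single.mean() / single.std())    # ~50
print(stacked.mean() / stacked.std())  # ~400 = 50 * sqrt(64)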
Steve
I would think that in almost all cases where a certain "minimum SNR" is required this is intended to be a proxy for the uncertainty of the *submitted* magnitude measurements, and not as a limit for individual images going into *one* measurement.
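For reference: to first order, the magnitude error relates to SNR as sigma_m = (2.5/ln 10)/SNR, i.e. roughly 1.0857/SNR, so SNR 100 corresponds to about 0.01 mag. A minimal Python sketch, using the SNR numbers from your example:

import numpy as np

def mag_err(snr):
    # first-order error propagation of m = -2.5*log10(flux)
    return 2.5 / np.log(10) / snr

print(mag_err(50))    # ~0.022 mag: a single image at SNR 50
print(mag_err(400))   # ~0.003 mag: the 64-image stack
print(mag_err(100))   # ~0.011 mag: the SNR 100 guideline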
CS
HB
For transforms it may not matter whether you stack or average. For variable star measurements I prefer to average, because a standard deviation can then be calculated.
My personal bias is to do this, rather than quote SNR.
Roy