Eliminating Comparison Stars

Fri, 08/19/2022 - 14:52

I have a question about the validity of reducing the reported error of your target star by removing outliers in the Photometry Report.

In the VPhot Users Guide it says:

Another useful way to check comp stars is available in the photometry tool in VPHOT.

To the left of the image, click on “View Photometry Report”. At the bottom of the resulting page you will see a table of comp stars. Those highlighted in dark red are giving results that disagree more significantly from the average and may be considered less accurate. This does not mean the photometry is bad, they may just be too faint to use for the exposure time in your image, or other factors may come in to play. Deselect these stars in the table and click on “refresh” above and see what your results look like. Chances are the error reported in the target star result will be improved substantially. You can continue this process until it becomes a point of diminishing returns (your reported error will never be better than the SNR error reported in the Target Star Estimates report), or use the first result to guide your list of acceptable comparison stars.

To me this sounds like cherry picking your data.  I can take a set of 14 comp stars that initially gives me an error of 0.057 and, by eliminating all but one of them, get the error down to 0.003.  I'm curious how valid this is.  Should you always eliminate all the comp stars except the one that gives you the lowest error rate?

The same issue comes up when transforming your data using the Two Color Transform Tool.  There you are also given an estimate of which comp stars are contributing to the overall error rate.  Do you just throw out comp stars to get this down as low as possible?

Thanks for your thoughts on this.

Bill

No!

Bill:

You asked, "Should you always eliminate all the comp stars except the one that gives you the lowest error rate?" The quick answer to this is ABSOLUTELY NOT! Chasing a smaller uncertainty is cherry picking your data and may not yield an uncertainty that is truly representative of the uncertainty of your measurement.

Even though the VPhot Guide states "You can continue this process until it becomes a point of diminishing returns", the intent is to indicate that some of the estimated target magnitudes from the individual ensemble comps 'may' be outliers (dark red highlight) due to some image distortion or catalog error. "Diminishing returns" is meant to lead to the removal of comps that yield very divergent magnitudes, not comps that yield small (not zero) changes. Unfortunately, this is ambiguous and probably should be clarified. ;-(

The real problem with using one comp versus an ensemble is that the error reported by most photometry software for a one-comp analysis is based only on 1/SNR rather than on the standard deviation calculated from multiple estimated comp magnitudes. The error based only on 1/SNR is rarely representative of the uncertainty associated with the typical 'repeatability' of your equipment and procedure. The single-image error reported from one comp is therefore typically (always?) smaller than the reported error from an ensemble.

However, to generate a more representative measure of your uncertainty, it is better to collect multiple images (perhaps 3), analyze them separately, and calculate the mean and standard deviation of all images. This reported standard deviation is much more representative of your system/methodology precision (random error). In fact, try this with both one comp and an ensemble. You will find that this random error for either one comp or an ensemble is much more consistent. Try it and report here?
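
If it helps, here is a minimal Python sketch of that experiment; the magnitudes and SNR below are made up purely for illustration:

import math
import statistics

# Target magnitudes from three separate images, each reduced the same way.
mags = [11.283, 11.271, 11.278]          # illustrative values

mean_mag = statistics.mean(mags)
std_mag = statistics.stdev(mags)         # repeatability (random error)

# The error a single-comp analysis typically reports is driven by SNR:
# roughly 2.5 / ln(10) / SNR (about 1.0857 / SNR), photon statistics only.
snr = 150.0                              # illustrative SNR
snr_err = 2.5 / math.log(10) / snr

print(f"mean = {mean_mag:.3f}  std dev = {std_mag:.3f}  1/SNR error = {snr_err:.3f}")

The 1/SNR figure says nothing about how well the measurement repeats from image to image; the standard deviation does.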

Another unfortunate issue associated with an ensemble error is that some comp sequences may use reported comp magnitudes gathered from different catalogs. In such cases, any systematic error (bias) associated with these different catalogs is included within the calculated precision (standard deviation). It artificially increases the calculated uncertainty of the target since inaccuracy is mixed with imprecision. Even then, see what your experiment from the previous paragraph indicates. Stick with one catalog for your comps (e.g., APASS).

Your reported random error is meant to tell the user/researcher how uncertain your data is, not how small you can force your random error to be by comparing apples and oranges (1/SNR and standard deviation).

Let's look for any disagreement / rebuttal?!  ;-)

Ken

Disagreement/rebuttal

Ken,

Thanks for the reply.  I waited a couple of days to see if anyone wanted to disagree or rebut what you said.  I guess not.

I agree that the manual should be clarified about the removal of outlier comp stars.  Sounds like only the most egregious ones should be taken out.

You mentioned sticking with a single catalog (e.g. APASS) for the comp stars.  How do you check this?  I've just been choosing "Load AAVSO Comp Stars" from the VPhot menu and don't see how you can tell what catalog they are from.

You mentioned analyzing several images of the field and averaging the results for a better magnitude estimate.  Is that fundamentally different than stacking several images and analyzing the stacked result?

Thanks again for your thoughts.

Bill

Disagreement/rebuttal

"Sounds like only the most egregious ones should be taken out."

Strongly agree.

".... sticking with a single catalog (e.g. APASS) for the comp stars."

Disagree. 

I don't see using stars from various sources as a problem.  Ensembles work best with lots of comps (once you have removed the worst offenders).  Also, using only one source means that any systematic error inherent in that source is reinforced rather than averaged out.

I think using comps from several reliable sources would improve the accuracy of the results. 

"I've just been choosing "Load AAVSO Comp Stars" from the VHPOT menu and don't see how you can tell what catalog they are from." 

From the Catalogs menu click Show AAVSO Sequence.  There is a numerical superscript associated with each magnitude in the list.  Scroll down to the bottom of the list to see the sources.  There are lots of reliable sources in that list.  (Unfortunately, there is no official list with information on the sources). 

Phil

Reliable Comp Stars

Phil,

Seems like AAVSO shouldn't have comp stars in the sequence from catalogs they don't feel are trustworthy.  I guess I can see it if there just aren't enough better stars.

Bill

What Catalog?

Bill et al:

The sequence team makes every effort to use only trustworthy catalogs. As you can imagine, new catalogs have been developed over time. If some older sequences were generated from older catalogs, and the target sequences have not been updated, some comp mags may be less reliable. Over time, this situation is corrected/improved!

In other cases, targets may vary over a very large magnitude range. A second catalog may be needed to extend the sequence over a wider mag range. Even if each catalog is internally consistent, the two may exhibit different biases across the full range.

We may also be noting relatively small systematic differences between catalogs, but when observers try to push their reported precision to millimag levels, using one catalog helps do that BUT it may (or may not) improve accuracy. When rapid variability (temporal change) is being measured with a fast-cadence time series and frequency analysis is undertaken, precision is more important than accuracy! Using comps from one catalog helps provide smaller random errors because no systematic error between catalogs impacts the measured standard deviations. See the table in my post below.

Ken 

What Catalog?

Ken,

You make several good points here.      

In this discussion my emphasis has been on accuracy.  With ensembles I think using many comps from a variety of sources works best (after removal of just the worst outliers).  This results in relatively large VPhot-generated uncertainty estimates, but (I think) more accurate measurements.

In observing projects such as you describe, where precision is more important, I agree using a single source is better.  My only quibble would be, for those projects why use ensembles at all?  Would not a single comp work even better?

From your previous post:  "...when observers try to push their reported precision to millimag levels, using one catalog helps do that BUT it may (or may not) improve accuracy."

I think some people believe that extensive culling of comps from ensembles in order to produce very small uncertainty estimates ("cherry picking" to use Bill's description) improves the accuracy of their measurements.  This misunderstanding can be seen in light curves when scatter in the data points is compared with unrealistic uncertainty estimates.

As you mentioned in a previous post, a revision of the VPhot manual's instructions for culling ensembles might help.

Phil

Precision not Accuracy

Phil et al:

You may be correct about accuracy, but I was only trying to address precision. Below, I have created an example of the impact of different catalog magnitudes on the precision (std dev) and accuracy (agreement with the known/true mag). The table shows what happens when comp magnitudes from different catalogs (with different systematic errors, i.e., biases) are used:

Target magnitude calculated from a comp with 3 different catalog magnitudes:

Catalog A has a positive bias of 0.030
Catalog B has a positive bias of 0.020
Catalog C has a negative bias of 0.020
All catalogs have identical precision (std dev 0.005)

             w/Cat A Mag   w/Cat B Mag   w/Cat C Mag   Known/True Mag
Measure 1    11.285        11.275        11.235        11.250
Measure 2    11.280        11.270        11.230        11.250
Measure 3    11.275        11.265        11.225        11.250

Mean         11.280        11.270        11.230        11.250
Std Dev       0.005         0.005         0.005         0.000

Cats A+B combined:  Mean 11.275   Std Dev 0.007
Cats A+C combined:  Mean 11.255   Std Dev 0.028
Cats B+C combined:  Mean 11.250   Std Dev 0.022
All cats combined:  Mean 11.260   Std Dev 0.023
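
For anyone who wants to experiment with other bias values, here is a short Python sketch that reproduces the table above (same made-up numbers):

import statistics

# Reproduce the table above: three measurements of one target, reduced
# with comp magnitudes drawn from three catalogs with different biases.
cat_a = [11.285, 11.280, 11.275]   # catalog A comp (+0.030 bias)
cat_b = [11.275, 11.270, 11.265]   # catalog B comp (+0.020 bias)
cat_c = [11.235, 11.230, 11.225]   # catalog C comp (-0.020 bias)

pools = [("A", cat_a), ("B", cat_b), ("C", cat_c),
         ("A+B", cat_a + cat_b), ("A+C", cat_a + cat_c),
         ("B+C", cat_b + cat_c), ("All", cat_a + cat_b + cat_c)]

for name, mags in pools:
    # Pooling catalogs mixes their biases into the scatter, so the std
    # dev inflates even though each catalog alone is quite precise.
    print(f"Cat {name:>3}: mean = {statistics.mean(mags):.3f}, "
          f"std dev = {statistics.stdev(mags):.3f}")

Note that B+C happens to land on the true mean (the biases cancel), yet its standard deviation is the most inflated of the pairs: mixing catalogs trades precision for (possible) accuracy.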

 

Ken


Ken and Phil,

I want to thank you for adding to this discussion.  Your comments have been very useful.

(I realize the questions below are drifting a bit off topic from my original comp star question, but it seems like an appropriate follow-up.)

I just finished submitting my 6 observations from last night (images in V & B).  Throwing out only the worst comp stars, my errors ranged from 0.02 to 0.06.  These seem higher than most other submissions I've looked at, and I wonder what I can do to improve things in the future.  Or maybe these are just fine and most others are "cherry picking" to get the 0.01-and-below errors.

Here is what I do.

  • I'm using a smallish 100mm refractor and a nice CMOS camera (ASI2600MM Pro).
  • Filter wheel with Baader B & V filters
  • Right now I'm imaging LPVs (long-period variables), mainly because I'm learning and it is easy to compare my results with those of others.
  • Generally take 15-30 sec subs and stack at least 10 of them in each band.
  • Stack the images in ASTAP and then submit to VPhot for analysis.
  • My final submissions are transformed with coefficients derived from two standard fields (averaged).

Any thoughts on how to decrease my errors, or should I just accept what I'm getting?

Keep the faith!

Bill:

1. I wish all our observers would put as much thought, effort and care into their analyses!

2. I think most reported precisions less than 0.01 mag are not representative of the true random error observed from repeated analysis. They almost always represent only 1/SNR of a bright target calculated from one bright comp (SNR >> 100)! Please try to analyze a set of ten images (close in time) for one bright check star and report what your mean and standard deviation are. Try with one comp and also with an ensemble.

3. Most importantly, you transform your magnitudes. Only about 10% of submitted data is transformed. It is the most important procedure to improve accuracy and make our AID light curves exhibit less scatter. Remember the frequent forum discussions about the quality of our data.

4. Use flat images collected each night for calibration. It helps a little. I use a running-average master flat built from 7 flats on each of 3 nights (21 flats).

5. Do you take 15-30 sec subs because of poor tracking or bright targets? Stacking N images does typically improve SNR by about sqrt(N) (see the sketch after this list).

6. What are the SNRs for all your targets and comps? Best if all are above SNR = 100, which corresponds to an error of about 0.01 mag.

7. Your random errors are reasonable. Trust them! Or move your scope to the Atacama Desert. ;-)

8. Do you try to improve check star accuracy (agreement between observed mag and known mag)? More important than precision! Note that this conclusion may not be true IF the temporal variation (for frequency analysis) of the variable is only/most important to your study! You do transform your data, so the answer is YES!
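
For points 5 and 6, a quick Python sketch of the arithmetic (the single-sub SNR is a made-up example):

import math

# Rough arithmetic behind points 5 and 6 (illustrative numbers only).
snr_single = 35.0                            # hypothetical SNR of one short sub
n_subs = 10
snr_stacked = snr_single * math.sqrt(n_subs) # stacking improves SNR by ~sqrt(N)

def mag_err(snr):
    # Approximate magnitude error from photon statistics alone.
    return 2.5 / math.log(10) / snr          # ~1.0857 / SNR

print(f"single sub : SNR = {snr_single:5.0f}, err = {mag_err(snr_single):.3f}")
print(f"stack of {n_subs:2d}: SNR = {snr_stacked:5.0f}, err = {mag_err(snr_stacked):.3f}")
# At SNR = 100, mag_err gives ~0.011; that is the basis of the
# "SNR > 100 for ~0.01 mag" rule of thumb in point 6.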

Ken

Keep the Faith

Bill,

This is good advice from Ken.  I have just a few comments.

"3. Use flat images collected each night for calibration. It helps a little. I use a running average master flat from 7 images from 3 nights (21 flats)."

Accurate flats are critically important, but how often we need to make new flats depends on each observer's unique circumstances.  In my opinion, the most important flat correction is for vignetting.  Even the slightest alteration, or possible accidental alteration, of the position of the camera or other components in the optical train requires new flats.  I think you should take new flats each night if you set up your system anew for each observing session.  Also, if you have a dusty environment you will need to take flats frequently. 

If you have a permanently mounted scope in a reasonably well enclosed shelter or observatory, and if you are very careful about minimizing dust exposure, I would say you can go a lot longer without making new flats.  I make new flats every four to six weeks.

"6. What are the SNRs for all your targets and comps? Best if all are above SNR = 100, which corresponds to an error of about 0.01 mag."

This is important, especially for the target.  For ensembles, VPhot combines the s.d. of the measurements of the comps with the inverse of the SNR of the target to give the uncertainty estimate.  Even if you have good agreement in the comps, a low SNR in the target can produce a large uncertainty estimate.  This can happen if the target is significantly fainter than the comps.
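
I don't know VPhot's exact formula, but assuming the combination is a quadrature sum (my assumption, with illustrative numbers), the effect of a faint target looks like this:

import math

# Hedged sketch: combine the ensemble scatter with the target's SNR term
# in quadrature (assumed convention; values are illustrative).
sd_comps = 0.010      # std dev of target estimates across the ensemble
snr_target = 30.0     # faint target, so a large 1/SNR term

snr_term = 2.5 / math.log(10) / snr_target   # ~0.036 mag
total_err = math.sqrt(sd_comps**2 + snr_term**2)

print(f"ensemble sd = {sd_comps:.3f}, target SNR term = {snr_term:.3f}, "
      f"combined = {total_err:.3f}")
# Even with good comp agreement (0.010), the faint target dominates:
# the combined uncertainty comes out near 0.038 here.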

For comps, I'll use those with low SNRs when they are not obvious outliers.

"7. Your random errors are reasonable. Trust them!"

Hear! Hear!  You wrote that you are observing LPVs that have lots of observations in the light curves.  Work on how well your measurements conform with those of experienced observers, i.e., your accuracy.  At this stage, I think that is the best way to judge your photometry.  Pay less attention to the conservative VPhot ensemble uncertainty estimates.

"8. Do you try to improve check star accuracy (agreement between observed mag and known mag)? More important than precision!"

This is true, but it can be tricky.  If your check star is close to the same color as your target, this can work as a guide to your accuracy.  The problem is that you are observing LPVs, most of which are red.

Unless there has been a change in policy, the AAVSO Sequence Team guidance is to avoid red stars.  If you pick your check star from the AAVSO comp sequence you may have trouble finding a check star red enough to make this work.

Phil


Phil,

Thanks for the additional comments.

I always remove comp stars that have an SNR less than 100.  I also remove ones near the edge of the field because I don't have a field flattener and they have a higher FWHM.

For the target, I almost always have an SNR greater than 100.  In one case the star was particularly red and the SNR in B was down around 30.  I went ahead with the analysis anyway, and the VPhot error was in the 0.09 range.

RE: Keep the faith!

Ken,

Sorry for taking a couple of days to respond.  I had to collect some data and analyze it (which took several hours).

In an attempt to answer your point #2:

  • I took 60 subs of RU Vul in both B & V.  The subs were 15 secs each.
  • I then stacked them using groups of 6 (i.e., 01-06, 07-12, 13-18, ...) for each band and uploaded the stacks of 6 to VPhot.  This gave 10 images in each band.
  • I used the Two Color Transform with a previously generated sequence that had the most egregious comp stars removed.  This gave 5 comp stars (I also removed comps near the edge since they had higher FWHM).
  • I recorded the V-mags and B-mags (and errors) into a spreadsheet and used the AVERAGE and STDEV functions to get the values below.

For V:

  • Average mag - 9.359
  • StdDev - 0.003
  • Average error from VPhot - 0.021

For B:

  • Average mag - 11.178
  • StdDev - 0.009
  • Average error from VPhot - 0.059

So the standard deviation is less than the VPhot error, but I'm not sure if they are really comparable.

In answer to point #5, I take shorter subs and stack them because my mount doesn't track perfectly and also because some of my stars will saturate if I go over a minute or so.  I don't have a permanent observatory and have to roll my scope out each night.  Polar alignment stays close but is not perfect.  I can generally get 30 s without problems.  Sometimes a minute is pushing it.

ETA:  I also stacked all 60 subs in each band and analyzed the result to see how it compared to my average of the 6-sub stacks.

The 60-image stacks had:

V-mag = 9.361 (err: 0.021) compared to the above average of 9.359

B-mag = 11.180 (err: 0.059) compared to the above average of 11.178

So it seems that stacking and analyzing gives a value very close to analyzing each individual image and averaging the results.
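
That makes sense to me, if I understand it right: stacking averages flux (linear), while analyze-then-average averages magnitudes (logarithmic), and over a few millimags of scatter the logarithm is effectively linear.  A quick Python sketch with made-up values:

import math
import statistics

# Compare the magnitude of the average flux (what stacking measures) with
# the average of per-image magnitudes (analyze-then-average). Made-up values.
mags = [9.357, 9.361, 9.359, 9.360, 9.358]   # per-image magnitudes

mean_of_mags = statistics.mean(mags)

fluxes = [10 ** (-0.4 * m) for m in mags]    # magnitudes -> linear flux
mag_of_mean_flux = -2.5 * math.log10(statistics.mean(fluxes))

print(f"average of mags  = {mean_of_mags:.4f}")
print(f"mag of mean flux = {mag_of_mean_flux:.4f}")
# The two agree to ~1e-6 mag for scatter this small, which is why the
# 60-sub stack matched the average of the ten 6-sub stacks so closely.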