I was taking observations of V380 Oph and found something that I need some help working through. Most of the observations submitted are from one individual, and they are all in the range of 14.5 to 15 magnitude. My observations are consistently about half a magnitude brighter (i.e., 14.0-14.5). I'm in the process of recalibrating my transform coefficients (TCs) to see if there is any issue there. If I run the Transform Applier (TA) app with the test coefficient option set, I get an error of 0.009 in the V band. Any thoughts? I'm fairly new to the AAVSO program, so I'm thinking there is something I'm missing.
Here is the file when I run it for the TA option
#TYPE=EXTENDED
#OBSCODE=SJMJ
#SOFTWARE=VPhot 4.0.34
#DELIM=,
#DATE=JD
#OBSTYPE=CCD
#NAME,DATE,MAG,MERR,FILT,TRANS,MTYPE,CNAME,CMAG,KNAME,KMAG,AMASS,GROUP,CHART,NOTES
V0380 Oph,2460108.96193,13.883,0.009,V,NO,STD,103,-7.529,na,na,1.7107,na,X28727BCJ,|CMAGINS=-7.529|CREFERR=0.061|CREFMAG=10.266|VMAGINS=-3.911
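For reference, the reported magnitude in that line can be reproduced from the NOTES fields; a minimal Python sketch, assuming the usual single-comp differential relation (target = target instrumental - comp instrumental + comp catalog mag):
# Sanity check of the single-comp magnitude using the NOTES fields in the line above.
vmag_ins = -3.911   # VMAGINS: target instrumental magnitude
cmag_ins = -7.529   # CMAGINS: comp 103 instrumental magnitude
cref_mag = 10.266   # CREFMAG: comp 103 catalog magnitude
target_mag = (vmag_ins - cmag_ins) + cref_mag
print(f"{target_mag:.3f}")   # 13.884, matching the reported 13.883 to within rounding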
Here is the file in Ensemble mode
#TYPE=EXTENDED
#OBSCODE=SJMJ
#SOFTWARE=VPhot 4.0.34
#DELIM=,
#DATE=JD
#OBSTYPE=CCD
#NAME,DATE,MAG,MERR,FILT,TRANS,MTYPE,CNAME,CMAG,KNAME,KMAG,AMASS,GROUP,CHART,NOTES
ASAS J175019+0608.7,2460108.96193,12.657,0.245,V,NO,STD,ENSEMBLE,na,123,12.191,1.7107,na,X28727BCJ,|KMAG=12.191|KMAGINS=-5.748|KREFERR=0.054|KREFMAG=12.301|VMAGINS=-5.282
V3119 Oph,2460108.96193,14.676,0.246,V,NO,STD,ENSEMBLE,na,123,12.191,1.7107,na,X28727BCJ,|KMAG=12.191|KMAGINS=-5.748|KREFERR=0.054|KREFMAG=12.301|VMAGINS=-3.263
V0380 Oph,2460108.96193,14.027,0.245,V,NO,STD,ENSEMBLE,na,123,12.191,1.7107,na,X28727BCJ,|KMAG=12.191|KMAGINS=-5.748|KREFERR=0.054|KREFMAG=12.301|VMAGINS=-3.911
V3105 Oph,2460108.96193,15.640,0.248,V,NO,STD,ENSEMBLE,na,123,12.191,1.7107,na,X28727BCJ,|KMAG=12.191|KMAGINS=-5.748|KREFERR=0.054|KREFMAG=12.301|VMAGINS=-2.299
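A small parsing sketch, assuming the pipe-delimited |KEY=value| NOTES layout shown above, that pulls out the check-star fields and reports how far the computed KMAG sits from its catalog KREFMAG:
# Extract check-star fields from one extended-format line and report the offset
# between the computed check mag (KMAG) and its catalog value (KREFMAG).
line = ("V0380 Oph,2460108.96193,14.027,0.245,V,NO,STD,ENSEMBLE,na,123,12.191,1.7107,na,"
        "X28727BCJ,|KMAG=12.191|KMAGINS=-5.748|KREFERR=0.054|KREFMAG=12.301|VMAGINS=-3.911")
notes = line.rsplit(",", 1)[1]
fields = dict(item.split("=") for item in notes.strip("|").split("|"))
offset = float(fields["KMAG"]) - float(fields["KREFMAG"])
print(f"check-star offset: {offset:+.3f} mag")   # -0.110 for this line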
John:
1. Why did you apparently not select/include a check star for your single comp sequence? No check star name reported. Could stick with 123 since it's used in the ensemble sequence. Check stars help identify any potential, obvious errors in the process. Ideally, you should select a check star with about the same mag as the target.
2. Note that the error (precision) reported in the ensemble photometry is large (0.245). This indicates that the different comps in the ensemble gave significantly different estimated target magnitudes. Note red color in comp table on photometry report. I suspect one comp shows significant inconsistency? Select another?
3. Can you check the other observer's results and note what comp star they may have used?
4. I'd start by checking/resolving these issues and proceed from there.
Ken
Ken, Thanks for the help.
Here are my comments:
1. Why did you apparently not select/include a check star for your single comp sequence? No check star name reported. Could stick with 123 since it's used in the ensemble sequence. Check stars help identify any potential, obvious errors in the process. Ideally, you should select a check star with about the same mag as the target.
#NAME,DATE,MAG,MERR,FILT,TRANS,MTYPE,CNAME,CMAG,KNAME,KMAG,AMASS,GROUP,CHART,NOTES
V0380 Oph,2460108.96193,13.883,0.009,V,NO,STD,103,-7.529,na,na,1.7107,na,X28727BCJ,|CMAGINS=-7.529|CREFERR=0.061|CREFMAG=10.266|VMAGINS=-3.911
Isn't the "103" the CName? Also, the only reason I did the single sequence was to to use the transfrom apply check box to determine fliter err which was .022.
2. Note that the error (precision) reported in the ensemble photometry is large (0.245). This indicates that the different comps in the ensemble gave significantly different estimated target magnitudes. Note red color in comp table on photometry report. I suspect one comp shows significant inconsistency? Select another?
I reran the ensemble using check star 150 (mag 14.647), deselected the hard red stars (which I did before) and also deselected any stars with a SNR < 200. Still got a magnitude of 14.041.
3. Can you check the other observer's results and note what comp star they may have used?
It looks like they did not run an ensemble but used a different tool with one comp and one check star. I reran the photometry report using the comp and check stars that the observer used (133, 139) and the magnitude was 14.2 with an error of 0.01. Still out of range based on all the previous observations from that site. I also noticed that he/she is using LesvePhotometry as their tool. I'm going to download it and try the same images with their tool and see if there is a discrepancy.
4. I'd start by checking/resolving these issues and proceed from there.
I'll post another comment when I run the same image through the observer's software. Not using Vphot may have produced different results.
1. Yes, you have selected/used one comp star designated 103 (CNAME) and reported its instrumental magnitude (CMAG).
However, note the next "na,na" in the report. Those should be KNAME and KMAG values, not "na". Therefore, you did not include a check star in your sequence. Just an oversight? A check star should be used to assess the accuracy of your process. It is a known comp star that is treated as if it were a target star, with its magnitude calculated by comparing it to the same comp star (103). On the VPhot photometry page, the calculated check-star magnitude should agree with the known check-star magnitude reported in parentheses.
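Put another way, the check star goes through the same differential arithmetic as the target and is then compared with its catalog value. A minimal sketch; the check-star numbers here are hypothetical placeholders, only the comp 103 values come from the report above:
# Treat a known comp as a pretend target, solve it against comp 103, compare to catalog.
comp_ins, comp_ref = -7.529, 10.266   # comp 103 CMAGINS and CREFMAG from the report
check_ins = -5.50                     # hypothetical check-star instrumental magnitude
check_ref = 12.30                     # hypothetical check-star catalog magnitude
check_calc = (check_ins - comp_ins) + comp_ref
print(f"calculated {check_calc:.3f} vs known {check_ref:.3f} "
      f"(difference {check_calc - check_ref:+.3f})")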
Ken
The output you are referencing was from the run where the TA option of Test coefficients was checked. In that case, the documentation says that the comp star stats are "moved" to the target star. I think that is why the comp star items are NA. If I run it with the box unchecked it does provide a value without NA.
John:
1. Now that I understand where your AAVSO reports came from, I think I understand what you did (or did not do)?
2. To conduct a TA Transform Coeff Test, you need to be running an AAVSO report (or reports) that includes data from a set of images taken in at least two filters. That is the way TA works normally. It appears from your report that you were only using an AAVSO report from V-filter images? Have you previously run a normal TA transformation of some of your data successfully?
3. The Coeff Test box assumes that you are running a 'normal' transformation of a non-transformed AAVSO photometry report, BUT it exchanges the check star for the target.
4. In so doing, TA Coeff Test adds a line to the report output that reports the transformed magnitude of the check star that is listed with its AUID name.
5. The report in the bottom box of the TA page reports the difference between the transformed magnitude of the check star and its known magnitude (a rough sketch of this arithmetic follows the list).
6. In my test, my transform coeffs yielded an agreement of 0.003 mag for my check star. BTW, the Coeff Test still appears to need only one comp to provide reasonable transformed mags of the check star.
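A rough sketch of the arithmetic behind items 2-5, using the standard one-comp differential transform rather than TA's exact internals; the coefficient and colors are hypothetical placeholders, and only the comp 103 instrumental and catalog values come from the earlier report:
# Standard one-comp V-band transform: V_t = v_t + (V_c - v_c) + Tv_bv * [(B-V)_t - (B-V)_c]
Tv_bv = -0.05                  # hypothetical V-band transform coefficient
v_t, v_c = -3.911, -7.529      # instrumental mags (target, comp 103) from the report
V_c = 10.266                   # comp 103 catalog V
bv_t, bv_c = 1.20, 1.08        # hypothetical target and comp standard colors
V_t = v_t + (V_c - v_c) + Tv_bv * (bv_t - bv_c)
print(f"transformed V: {V_t:.3f}")
# The Coeff Test runs this same arithmetic with the check star in place of the target
# and reports (transformed check mag - known check mag) in the bottom box.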
Hopefully, this makes sense?
Ken
3. I regularly use either LP or VPhot. If you use the same comp star(s), you should get the same calculated target magnitude with both photometry tools. I've proven this to myself many times.
2. When you got 14.041, what was the error (precision)? Hopefully better than 0.245 mag?
So you got a calculated check-star mag of 14.041 compared to the known mag of 14.647? This is a difference of 0.606, which is quite large! So the result is suspect, unless there is a very large difference between the comp star color (B-V) and the check star color.
Ken
PS: It will be interesting to see if you get the same result with LP?
Let me go back to square one. Here is the text from the photometry report target section and comparison stars section. I'd like your comments/feedback on what you would do next given this set of data.
Target            Mag     Err     Std     Err(SNR)  SNR   Sky
123 (12.301) 12.253 0.237 0.237 0.002 526 273
V0380 Oph 14.094 0.237 0.237 0.009 115 271
Comparison section is as follows:
Star  IM      Max    SNR   X         Y         Sky  Air    B-V    V-mag   Target estimate  Active
103 -7.529 4831 1825 653.932 395.548 263 1.721 1.077 10.266 13.887 x
104 -7.323 4870 1847 2119.766 1482.825 227 1.696 1.041 10.441 13.856 x
119 -5.810 1293 574 1067.811 843.108 272 1.713 1.173 11.936 13.839 x
133 -4.762 652 252 1187.685 931.157 272 1.711 0.783 13.302 14.156 x
139 -4.211 516 147 1123.706 802.654 271 1.713 0.728 13.918 14.221 x
147 -3.127 346 58 1156.375 903.277 272 1.712 1.236 14.699 13.918 x
150 -3.335 420 78 1120.720 764.939 272 1.713 0.894 14.975 14.402 x
154 -3.087 399 55 1103.027 858.826 270 1.713 0.822 15.447 14.626
157 -2.466 319 28 1126.100 857.723 272 1.712 0.737 15.717 14.275 x
87 -9.555 30085 4638 1743.943 234.675 254 1.709 0.528 8.696 14.343 x
90 -8.727 16241 2987 1004.924 1166.617 274 1.712 1.314 8.969 13.788 x
93 -8.180 9488 2799 1722.830 870.791 262 1.705 1.573 9.339 13.611
99 -8.348 10149 2895 475.939 582.321 261 1.722 0.473 9.908 14.348 x
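For what it's worth, the spread in the per-comp target estimates can be summarized directly from the B-V and "Target estimate" columns above; a short sketch:
import numpy as np

# B-V and per-comp target estimates transcribed from the comparison table above
# (all thirteen comps, including the two that were deselected).
bv  = np.array([1.077, 1.041, 1.173, 0.783, 0.728, 1.236, 0.894, 0.822, 0.737,
                0.528, 1.314, 1.573, 0.473])
est = np.array([13.887, 13.856, 13.839, 14.156, 14.221, 13.918, 14.402, 14.626, 14.275,
                14.343, 13.788, 13.611, 14.348])

print(f"mean {est.mean():.3f}, std {est.std(ddof=1):.3f}, range {est.min():.3f}-{est.max():.3f}")

# Slope of estimate vs comp color: a smooth, modest non-zero slope is what transformation
# removes; large scatter around that line points at individual comps instead.
slope, intercept = np.polyfit(bv, est, 1)
print(f"slope vs B-V: {slope:+.3f} mag per mag of (B-V)")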
John:
I suspect that several bright comp stars are saturated based on SNR (>1000). Look at the star profiles for each comp. Do several of them look flat at the top?
The reported gain setting of your camera in VPhot may not be correct? This would make my suspicion incorrect.
Ken
PS: An error (precision) of 0.237 is unreasonable. A Skype/Zoom session may be helpful?
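If it helps, peak counts near each comp can be checked directly in the calibrated frame rather than by eye. A minimal sketch; the file name, cutout size, and linearity limit below are placeholders, and many sensors go non-linear well below the 16-bit ceiling of 65535 ADU:
import numpy as np
from astropy.io import fits

# Check peak ADU in small cutouts around a few comp positions (X/Y as in the table above).
data = fits.getdata("V0380_Oph_V_240s.fits").astype(float)   # placeholder file name
positions = {"103": (654, 396), "87": (1744, 235), "90": (1005, 1167)}
half = 10        # half-width of each cutout in pixels
limit = 55000    # placeholder linearity limit in ADU

for name, (x, y) in positions.items():
    cut = data[y - half:y + half, x - half:x + half]
    print(f"comp {name}: peak {cut.max():.0f} ADU" + ("  <-- check" if cut.max() > limit else ""))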
I did check and didn't see any flattening. The MAX value (which I believe should be the highest ADU value) never got higher than 30K on any star. I'm running 16-bit, so saturation would be around 65K, correct?
Reported gain in the FITS header is correct.
What I just don't get is that the vast majority of my observed error values are 0.1 or less. I just started trying to pick up some of the targets listed in Alert 754, and they all have high error values.
Let's table this for a day or so. I'm going to try to measure a Landolt field and see what the results are. I'll get back when I have some more information.
I think it was a simple matter of increasing the exposure from 120s to 240s. Once I did that, the error fell to 0.10 and the magnitude was within 0.02 of another observation yesterday.
Thanks for all the help. Still relatively new at this.
I may have spoken too quickly. I tried a couple of additional observations at the increased exposure and I'm back to about a magnitude difference when the error is less than 0.15 or so. Any thoughts on how to track this down? I did a recalibration of my transform coefficients just to see if there was some problem there, but the new values are very close to the old ones.
What is odd is that many of my observations on other variables are spot on compared to other people's observations. It seems to be occurring on fainter objects, but that may just be a red herring. Is there any good way of finding out whether your base observation is good to begin with? I always thought that if the SNR was greater than, say, a couple hundred and the error was less than 0.15 or so, you had a good observation.
I've been looking for something that provides more detail on how to evaluate the photometry report for quality, but haven't really found anything.
If it's something systematic with the scope (like very bad bias, darks, or flats), it would show up in my deep-sky images (which it does not).
John:
1. Could you plot the target mags vs comp color to see what the trend is? You should see a linear relationship with non-zero slope. You could calculate the linear best fit. There will be scatter. This non-zero slope trend is what is removed by transformation. I perceive more of a jump in mags than a linear trend? Is there a red leak in a filter? BTW, what filters are you using?
2. Calculate the target and check mag obtained from several replicate images (5-10?) and determine the mean and standard deviation (STD). In general, you should be able to achieve a standard deviation (precision/reproducibility) of <0.03 mag and hopefully about 0.01 if your SNRs are ~100 or better. STD of 0.15 is not that great (SNR~7). STD of 0.237 is just unreasonable/bad! What STD do you normally achieve with your process?
3. Select several comp stars as check stars (pretend targets in VPhot). Use several comp stars (5-10?). You obviously need lots of comps to do this; it may not be easy unless you find more APASS comps yourself. You could image a Landolt field to do this process. Calculate the mean and STD of each check star. Then calculate the difference (accuracy/bias) between the calculated and known (in parentheses) magnitude for each check star. Tabulate/plot the STD, the difference, and the color of each check star (a short sketch of 2 and 3 follows below). Ideally, the bias would also be small (<<0.1-0.3?), but this depends on the check color compared to the average ensemble color. Transform your magnitudes as well; the bias should be reduced significantly with transformation! What bias do you normally achieve with your process?
If you observe that your photometry process 'normally' yields errors (random/precision or systematic/bias) of several tenths mags rather than a few hundredths magnitude, there is a problem? If the errors are poorest when you image a highly colored (red or blue) target, it may help identify the source of the problem?
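A compact sketch of items 2 and 3; the per-image magnitudes below are hypothetical placeholders, while the known check magnitudes are taken from the comparison table above:
import numpy as np

# Precision from replicate images (item 2) and bias against known mags (item 3).
replicates = {
    "V0380 Oph": [14.03, 14.06, 14.01, 14.05, 14.04],    # target, 5 images (hypothetical)
    "check 150": [14.95, 14.99, 14.96, 15.00, 14.97],    # comps treated as pretend targets
    "check 133": [13.28, 13.33, 13.30, 13.31, 13.29],
}
known = {"check 150": 14.975, "check 133": 13.302}        # chart magnitudes from the table

for name, mags in replicates.items():
    m = np.asarray(mags)
    line = f"{name}: mean {m.mean():.3f}, std {m.std(ddof=1):.3f}"
    if name in known:
        line += f", bias {m.mean() - known[name]:+.3f}"
    print(line)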
Thoughts? Helpful or not?
Ken
PS: You could share a few of your images to MZK in vphot. I could look at them too.
I'll start looking at the issues you pulled up. The more I look, however, the more I think the problem isn't in the transforms.
My reasoning is that last night I took about 15 sets of observations (about 150 images). In the two-color transformed measurements, my observations were very close (say ±0.1 mag or less) to other observers' values. And perhaps more important, if there was a trend up or down, I was following that trend.
When I tried to observe some of the stars listed in the alert mentioned above, the problems started to surface, particularly with V380 Oph. That one seems to be consistently 0.5 magnitudes brighter in my observations than in those of the other person observing it (DFS).
I've shared two sets of images (30 secs and 240 secs).
There are two other stars (V794 Aql and V704 And) that are consistently brighter as well, but not as bad as V380 Oph. All of these observations were in the V band only.
I think it's a good idea to concentrate on V380 Oph since it's repeatable.
If it were systematic to the rig, I would think that all observations would be off to some extent, so it's probably something dumb that I'm doing, either in this particular set of imaging or in how I use VPhot.
B and V filters are Baader
Thanks very much for the help.
I think I finally got to the root of the problem. I processed the same set of files in PixInsight and DeepSkyStacker. The PixInsight calibration process produced magnitudes in line with expectations, while DeepSkyStacker produced the bad results. I was using DSS because it was easier to automate than PixInsight. I cannot explain why it only occurs on certain stars and not others, but I have verified the results to my satisfaction.