Vis v. V

Affiliation
Variable Stars South (VSS)
Thu, 05/25/2017 - 05:54

Hi all

Whenever I (regularly) write or talk about the AAVSO and VSOing, I say that visual and CCD observers complement rather than compete. Now, I'm the first to admit my eyes aren't the sharpest tool in the shed, but I've seen some absolute shockers of supposed Johnson V measures. The trouble is that people might believe some of these!

V quality control is for the V people. I'm merely a concerned vis onlooker.

Best to all.

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
You are right

Hi Paw,

I can only say that you are right. I am convinced that some photometric observers don't know what they are doing. My only relief is that those observations will be removed when someone uses the data for some kind of publication; at least I hope so.

Affiliation
American Association of Variable Star Observers (AAVSO)
Flag them!

LCGv2 allows you to report a discrepancy by clicking on the observation in question; the field is at the bottom of the observation data pop-up window.

Mike

 

Affiliation
American Association of Variable Star Observers (AAVSO)
discrepant observations

As Mike suggests, use the new LCG and flag any wildly deviant points as discrepant.  That helps the small HQ staff in fixing minor problems, and assigning a mentor for the tougher problems.  There are discrepant points for both visual and CCD observers, for some of the same reasons (wrong ID, wrong date, transposed digits, forgotten "less than" sign, etc.) as well as for others (wrong comp star ID/magnitude, frozen filter wheel, not transforming data, etc.).  As CCD observers tend to be more prolific, tend to trust their software/automation more (rather than examining the data after submission), and the LCG highlights the CCD observations with color rather than the grey/black points that tend to fade into the background, you are more aware of such transgressions when using the LCG.

Everyone can help to improve the Database!

Arne

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
Use the LCG2

Yes Arne, you are right. We observers must use the LCG more often and flag those discrepant observations. I tend to use the LCG often but have neglected to flag those bad observations. I will do it in the future. Better, I just did it with a V-filter observation of R CrB that was too faint :-)

Affiliation
American Association of Variable Star Observers (AAVSO)
DSLR and CCD examples

One of my motivations for becoming a Councilor was to work on data quality problems.  I attach two write-ups of trouble I found in DSLR and CCD observations in the archive.  I have "anonymized" the observer identifiers, with the exception of PGD (Gerry Persha).  Gerry is the inventor of the SSP single-channel photometers, and I use his data for comparison against the imaging observers. 

Tom

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
Write-ups

Wow Tom, those are very impressive write-ups. I think that a lot of DSLR and CCD observers, too, are doing it for the numbers, not the quality. I hope that when a researcher uses the data he will throw out the bad data. But I know that any researcher doing serious research will do so.

Affiliation
American Association of Variable Star Observers (AAVSO)
"...doing it for the numbers."

I know that, at least in the past, the AAVSO has given observers recognition and awards based on the number of observations they have submitted.  I think this is certainly appropriate for visual observers and PEP observers, but does this also apply to DSLR and CCD observers?   If so, it seems that this would tend to encourage the poor practices that Tom describes. 

Phil

Affiliation
American Association of Variable Star Observers (AAVSO)
write-ups

First, I want to congratulate Tom (and Jim!) on their efforts to quantify uncertainties in the AID.  As Tom said, there are no tools to easily investigate the AID, and from my work with Matt Templeton, I know how time-consuming such investigations can be!  These two documents are great starting points for discussion.

PEP observations are the "gold standard" in my opinion.  Each star on the program has a unique comp and check star, with very carefully measured magnitudes and colors.  The observer has to follow a multi-part procedure for each observation (17 steps, if I remember correctly), a time-consuming effort, to produce a single measure that goes into the AID.  The program stars were carefully selected to not have large color change during their investigation, and to be slowly varying so that the 15-30mins per observation can use mean colors for the target and therefore be transformed.  The raw measurements are processed using a single program at HQ (for the most part!).  The results are impressive, as you can see by looking at the light curves of any star on their program.  I think Tom and Jim are both PEP observers and are used to this quality.

The CCD stars that Tom describes in his document are almost exclusively cataclysmic variables.  The vast majority of the submitted observations are unfiltered, using a variety of comparison stars, and are long time series, primarily in support of CBA projects.  There, the emphasis is on high internal precision, high cadence, and with offsets taken out by the researcher during the analysis phase.  The main goal is period analysis, not long-term photometric accuracy.  The observations are usually submitted to the CBA, and then later submitted to the AAVSO for archiving and other public use.  Not all CBA members submit their data to both organizations.

So I am less worried about offsets between unfiltered observers on these cataclysmic variables.  That said, there are lots of CCD observations submitted on other objects, and I agree that there are many cases of imprecision.  As Director, I decided that it was nearly impossible to quality-check the submissions, since we were accumulating 1-2 million observations per year.  Instead, I concentrated on automating the checking for typical errors like wrong dates, along with improving the charts/sequences and providing tools and documentation that, when used, would help the observer improve his/her techniques and data quality.  On a couple of occasions, I ran "forum" campaigns on targets, and I provided real-time mentoring, which I think helped a few observers.  Stella has now inherited the database quality issue, and I'm looking forward to seeing how she approaches things!

It is possible, on "normal" stars, to obtain high accuracy.  I consider normal stars to be those where there are no major emission lines or absorption features, and where variability is slow enough that multiple-filter observations and transformation can be used.  An example of this is the many papers coming out of Ulisse Munari's Asiago Nova-Symbiotic (ANS) group.  He requires his team to use the same comparison stars and techniques, processing the data with the same software, and with an absolute requirement of good uncertainty calculations.  He rejects observations with more than 12 mmag absolute uncertainty.  The light curves are exquisite in my opinion, and demonstrate that the goal is achievable.

For the AID, it is always a tradeoff between many factors: getting amateurs to observe and report observations, getting coverage of a wide variety of stars, obtaining observations needed in support of various campaigns, etc.  Many of the submissions are coming from external organizations like the BAA or AFOEV over which you have no control.  There are many ways to approach the problem, and a broad-ranging discussion with some resultant action items makes perfect sense to me.

I think DSLR observers have had less opportunity to learn the right techniques, and have no "champion" currently at HQ for mentoring.  You should also note that visual observations have similar problems.  Almost every time I look at a light curve, I see some visual measure that is far out of line, and mark it as discrepant, but there is no way that current staff can police every light curve.  I'm in full agreement that we can, and should, do better!

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
DSLR analysis

Very interesting discussion indeed.

 

Tom, you end your DSLR comments, based on the rho Cas long-term light curve, with this sentence:

"It is not unreasonable to conclude that data quality declined as DSLRs upplanted PEP"

In a way I agree. But if you plot the PEP observations for rho Cas over the last 5 years, we see that the number of PEP observations has declined and the coverage is now much coarser, with large gaps in it. So my interpretation of the same data would be: DSLR observations have helped to partially fill the gaps caused by the decline of PEP. So in the end DSLR measurements have helped data quality. And the DSLR light curve of rho Cas certainly looks better than the one from visual observations.

I don't think there is anything we can realistically do to bring hundreds of observers to buy PEP equipment and operate it on a regular basis. It's just not going to happen. But we do have all these astrophotography guys and gals that already have DSLRs and CCDs and telescopes/telephoto lenses and who we might get to contribute photometry measurements.

In summary: yes, those DSLR measurements, especially by beginners, will be worse than PEP measurements but mostly better than visual observations; that is no surprise. But isn't it true that they will be better than no measurement at all? (EDIT: e.g. I can't see a single PEP observation for your final case study star, rho Cas, in 2017 (!), even though it's one of the leading "could blow up in a SN any day now" candidate stars IIRC.)

But I don't want to sound too negative at all; I think your collection of what can go wrong with DSLR observations contains so many useful ideas for improvements that I find it difficult to reply to it in a single message. Where to start?

I think many problems you identified involve missing, contradictory, or nonsensical data in submission reports that could be detected automatically.

Airmass: Airmass for the target star could be computed automatically when submitting the data if there were an optional way to communicate the coordinates of your observation site (just for computing the AM, not to be made public, even though one could of course compute approximate coordinates from multiple measurements). Most observers have a limited number of sites from which they observe, so if there were a way to store a short list of observing sites in your observer profile and then select one during submission, the AM calculation (if desired) could be automatic. I understand this is a feature offered by off-line tools; it could also be integrated into the AAVSO submission web back ends.

Note that if the observing-site coordinates are known, some consistency tests can be performed to see whether the observation time makes sense (check if the object is above the horizon at all, or whether it's a daytime measurement of a faint star...).
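
As an illustration of both checks, here is a minimal sketch using astropy, with made-up site coordinates and a placeholder target position; it is not an existing WebObs feature, just the kind of calculation the submission back end (or an off-line tool) could run:

from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time
import astropy.units as u

# Hypothetical observing site and target; real values would come from the
# observer profile and the submitted star name / JD.
site = EarthLocation(lat=51.0 * u.deg, lon=7.0 * u.deg, height=150 * u.m)
target = SkyCoord(ra=23.906 * u.hourangle, dec=57.5 * u.deg)  # placeholder coordinates
obstime = Time(2457899.5, format="jd")

altaz = target.transform_to(AltAz(obstime=obstime, location=site))
if altaz.alt <= 0 * u.deg:
    print("Warning: target is below the horizon at the reported time")
else:
    print("Computed airmass: %.2f" % float(altaz.secz))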

Filters: For DSLR measurements, very few filter settings actually make sense, and I wonder whether you should get an error (or at least a warning) when submitting something else, and whether the nonsensical filters should be removed from the WebObs individual observation submission form for DSLRs. TR, TG, TB obviously make sense, but Johnson B, V, R only make sense if the data are also transformed. If you submit DSLR data as Johnson V without the transformation flag, you should be automatically warned that this makes little sense. I'm not sure what "unfiltered" in the DSLR context would mean: would that be photometry on the summed RGB channels? But why, if you can separate the channels (EDIT: OK, perhaps for extremely faint stars)? So "unfiltered" should perhaps trigger a little warning as well.
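
A sketch of the kind of rule I have in mind; the function and field names are hypothetical, not the actual WebObs internals:

# Filter bands that normally make sense for DSLR submissions.
DSLR_NATIVE = {"TG", "TB", "TR"}   # tri-color green / blue / red
JOHNSON = {"B", "V", "R"}          # only sensible if the data are transformed

def dslr_filter_warning(band, transformed):
    """Return a warning string for questionable DSLR filter choices, else None."""
    if band in DSLR_NATIVE:
        return None
    if band in JOHNSON:
        return None if transformed else (
            "Johnson %s from a DSLR without transformation - please check" % band)
    return "Filter '%s' is unusual for a DSLR - is this really what you measured?" % band

# Example: an untransformed V report from a DSLR triggers a warning.
print(dslr_filter_warning("V", transformed=False))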

Systematic errors: I think the current transformation campaign will help to educate CCD and DSLR observers about transforming to standard photometric systems; that is a good start to address some of the systematic errors resulting in observer-specific biases.

I think we can agree that the quality of DSLR (and CCD) data submitted is worse than it could be. Via tool support and (I guess more importantly) education we should be able to steadily improve the quality for any given observer.

CS

HBE

 

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Concentrate on important stars first

It's certainly impossible to review every star's light curve and make corrections given any reasonable plan and our staff limitations! So, one idea might be to concentrate on those stars that are currently being studied by professionals and others. My weekly data usage reports show me what stars people are interested in, and I am sure that software could easily be modified to rank the stars by interest, and then have staff or volunteers "clean up" those higher-priority light curves first.

Regarding visual observations being placed at the "bottom of the heap" in accuracy, I would like to disagree with that! Certainly there has been more spread in those observations, with some extreme outliers, BUT, if reasonable experience and care are applied, visual should be good to at least +/- 0.1 mag in total error (including random and systematic) because it is done under intelligent control. This process, while inherently less precise and less voluminous than CCD observing, forces the observer to verify, as the observations are being made, that they are "sensible" w.r.t. the nearby comp stars, rather than just "press a button" and let the CCD planner go about doing its thing, gathering data without any human oversight!

The necessity for human oversight of automated CCD observing is understood at AAVSONet, where a dedicated team of volunteers examines every image taken, and makes note of any issues which might affect photometric accuracy - clouds, bad flats, moisture, trailing, moonlight/gradations, etc. These problems are quite common, and cannot be left to automated analysis!

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
DSLR vs PEP

Hi all,

Tom, I am very sorry, but I am not sure I agree with the way your DSLR paper addresses this issue. We all know that there are very discrepant reports; I often check the status of campaigns I am in, and every time there are discrepant reports, sometimes highly discrepant! But this is not systematically the DSLRs, far from it, and people with large scopes, expensive sensors, and pristine skies are also part of it! Your paper compares the best of PEP to the worst of DSLR and in the end makes a very negative judgment on DSLR in general; I am sorry, but this is not fair!

For a better comparison of the value of the techniques, you could have a look at the eps Aur paper from Thomas that includes the three techniques: https://www.aavso.org/media/jaavso/2868.pdf

I will not go into much detail here, but there are a couple of aspects I would discuss: you seem to consider that all DSLR reports should go under TG. I am sorry, but this is not what has been defined in the past and confirmed recently: transformed green results shall go under Johnson V. That works very well for most spectral types. The color correction at the flux level that is easy to do for DSLR works even better than the classical transform.

You seem to consider that a sigma of 10-20 mmag is questionable, and that some wrongly report 3 mmag instead. There are cases (poor skies, faint stars, near the detection limit) where such a 10-20 mmag result is excellent, and as Heinz-Bernd says, such results are often better than nothing. The b Per campaign is an example where we had to observe under very difficult sky conditions using both DSLR and CCD; the sigma was not always superb, but in the end it was a success. Observers reporting 3 mmag instead of the actual value is due both to software that computes it as 1/SNR instead of the sigma of a series, and to the AAVSO leaving that as an option! The difference can be very large depending on the sky conditions. It is up to the AAVSO to define how the raw results shall be processed to generate the report.

I think a lot of problems are due to the software that generates automated reports to the AAVSO; their functions do not cover our needs and do not provide for, or push, a review of the results before reporting. This should be addressed. Personally, I was not satisfied by what was available in 2008 and developed my own overall solution specific to DSLR.

Description of the instruments: yes, it would be nice to have it in the header, but that is not available, and the AAVSO asks us to be very concise when using the comment field; having it in each reported point would take a lot of space in the database.

 

Clear Skies !

Roger

 

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Quick Note

I must run off to Pine Mountain, but I want to quickly make a few points.

1.  I'm not suggesting people go back to PEP; I'm just using PEP as a source of reliable data against which to compare.

2.  For CCD analysis it was much harder to identify reliable observers to use as benchmarks.  Time series observations provided a scenario in which problems clearly stood out.  If time series are dominated by CV work, then problematic CV data would naturally dominate my examples.

3.  We need an extended discussion about what we aspire to in data quality in the AAVSO, and how to achieve it.

Tom

 

 

Affiliation
American Association of Variable Star Observers (AAVSO)
What if WebObs had users inspect previews of their light curves?

When an observer submits a datafile, WebObs currently displays a table listing the observations in the file and asks the user to inspect it. If the file contains more than a few observations, it's unlikely that an observer is going to inspect each line in that table. It would be easier for an observer to spot discrepant data if WebObs showed a preview of the light curve and asked the user to inspect it for common problems. There could be two plots: one showing the light curves of the target and check stars from the user's datafile (along with airmass if present), and another superimposing the user's photometry over other observers' data for that target.
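
For illustration, here is a rough sketch of such a preview, assuming the submitted file has already been parsed into JD and magnitude arrays and that recent AID photometry for the same target is available; the function and variable names are hypothetical, not part of WebObs:

import matplotlib.pyplot as plt

def preview(jd, target_mag, check_mag, others_jd, others_mag):
    """Plot the observer's target and check-star light curves, then overlay
    the target photometry on other observers' recent data."""
    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(7, 6), sharex=True)

    ax1.plot(jd, target_mag, "ko", label="your target photometry")
    ax1.plot(jd, check_mag, "g.", label="your check star")
    ax1.invert_yaxis()
    ax1.legend()

    ax2.plot(others_jd, others_mag, "c.", alpha=0.5, label="other observers")
    ax2.plot(jd, target_mag, "ko", label="your data")
    ax2.invert_yaxis()
    ax2.set_xlabel("JD")
    ax2.legend()

    fig.tight_layout()
    plt.show()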

At the very least, this would be a way for observers to identify obvious issues with their data, such as a systematic offset with respect to other observers' photometry.

Best Regards,
Colin

Affiliation
American Association of Variable Star Observers (AAVSO)
light curve previews

I must admit that I always feel a bit guilty when I look at the light curve right after submitting new data. If you are quick to submit your observations, chances are that there are only a few observations by other observers close to yours. So you will compare with data from one or a few days ago. What will you do if your data doesn't fit in "nicely" with the previous data? Go back to the reduction and turn some knobs until the data matches the other observations? Or just delete it? Both are problematic, because it means that the observations going into the DB are not independent; there will be a bias towards the fast submitters' data, which is arbitrary. And reports of unexpected star behaviour might even be suppressed.

What I personally would prefer is a service that (say) once per month would let an observer download a PDF with a collection of light curves for objects he/she has submitted data for during the last 30 days, with your own observations highlighted, of course. It would make great wallpaper as well :-). I guess the most prolific observers would be overwhelmed by this, so maybe limiting it by default to the top N (say 10) objects by the number of personal submissions could be a good idea.

People might object that this would need a lot of computing power. But if you think about it, plotting one LC per month per observer per observed object is more efficient than plotting one preview LC per object per observation submission! We are talking about several hundred PDF "booklets" to generate each month. A single PC (or a rented cloud computing node) could do that.

CS

HB

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Bleary-eyed...

...after a spectacular night in Central Oregon, but I'll try to be coherent.

First off, let me apologize if I gave the impression that I thought certain kinds of technologies, or the observers who like to use them, are superior.  Imaging, single-channel, and visual techniques each have their strengths and weaknesses.  And I have uploaded measurement howlers of my own, which I then had to frantically purge from the AID.  The practice of good photometry is hard - it really is.  

I worked on the Chandra data system for several years, and a comment by the archive scientist keeps coming back to me.  It went something like this: Once bad data get into the archive, it's very hard to get them out.  It may be that I have overstated the quality issues we face, but the larger problem is that we have no clear idea of how bad the situation is.

The advent of electronic measurement in AAVSO has significantly changed the character of our work.  Visual observers have their own difficulties, but I will venture to say that the quality problems we find in today's visual estimates are the same as those found decades ago.  To the extent that a researcher must smooth out the "wrinkles" in visual data, the techniques for doing so can be applied over very long time spans, and this helps make our century+ of visual estimates so useful for long-term studies.  Electronic measurements bring with them a whole new collection of potential problems - sources of systematic error that can be very difficult for the researcher to untangle.  If there is enough "noise" in the archive, researchers will, at some point, decide that unscrambling the eggs is not worth the trouble.  My concern is that if we can't get our systematic problems under control, the value of the AID for long-term studies will be seriously compromised.

Contributors to this thread have already made a number of good points, and there's plenty of material I want to add, myself.  I'm going to try presenting my thoughts in focused servings rather than extended commentaries.   There's plenty for all of us to talk about, and what's most important right now is simply having the conversation.

Tom

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Prioritizing the issues

I think that the problems identified by Tom fall into two categories: (1) errors that actively undermine the scientific utility of the data, and (2) arguably suboptimal observing practices that nonetheless yield acceptable data. It seems to me that the former is a more pressing issue than the latter, although both should be addressed.

I'm actually more concerned by the issues in the DSLR writeup than the CCD writeup; as Arne correctly pointed out, the CCD analysis focuses primarily on time-series photometry of CVs, but the science objectives in this context are different than for other types of photometry. For example, ~0.1-mag zeropoint offsets are generally acceptable for CV work, but would not be in other contexts, such as searching for dips in Boyajian's star. By contrast, the objects covered in the DSLR writeup tend to be less tolerant of zeropoint offsets; the final figure, showing ~0.4-mag offsets in DSLR observations of Rho Cas, is particularly disconcerting.

Tom -- would you agree that the most serious issue that you identified is the existence of large zeropoint offsets in light curves of objects that require a high degree of observer-to-observer consistency? If so, let's tackle that one first.

Colin

P.S. Tom: on page 2 of your CCD writeup, there's a figure showing the light curve of a CV as well as a light curve of a very faint check star that varied from V~19.5-20.5. I agree with you that there was variable transparency, but I think that the check star magnitude must be instrumental. Typical amateur equipment cannot achieve such high-precision photometry of such a faint star; moreover, decreased transparency generally results in increased noise in standard magnitudes and dips in instrumental magnitudes. Since the observer used ensemble photometry, he/she should have reported standard check-star magnitudes instead of instrumental. In spite of the issue with the check star, I think that the quality of that particular light curve is very good, given the science objectives of CV observations.

Affiliation
American Association of Variable Star Observers (AAVSO)
I think you will find just as

I think you will find just as many problems with CCD data as with DSLR.  It was easier for me to locate DSLR difficulties because a number of DSLR program stars are also PEP targets, and I could compare DSLR results with those from an expert "pepper."  One need only go back to Nova Delphini 2013 to see all kinds of CCD results that were badly out of whack (https://www.aavso.org/nova-del-2013-photometry).  As I noted earlier, various inconsistencies pop out at you when looking at a time series, and since we have a lot of CV time series, the examples in my CCD writeup were biased towards that kind of data.

I don't see any specific kind of photometric fault as more important than others, and getting the bugs out is going to take time.  We need a long-term strategy, and the most important point, in my view, is to get the ship heading in the right direction.  Simply raising awareness of the difficulties is an important step.

Affiliation
American Association of Variable Star Observers (AAVSO)
Dangerous path

Comparing your observations to others in the DB can be problematic. It reminds me of the common trick kids do in school: copy someone else's work, assuming it is better! But in fact that is not always the case...

Of course, the "law of large numbers" is your friend, so if your observations, during the same time interval, are significantly askew from the MANY others around the mean, you, not all the others are far more likely to be wrong.

Yet it is also possible that some stars vary significantly over short time intervals, and such "simultaneous" comparisons are thus invalidated. (E.g., EX Hya varies by several tenths of a magnitude over timescales of seconds to minutes!)

The best way is to have some "gold standard" reference (such as PEP was mentioned), but of course that is far too limited, and does not work for fainter objects. So, we do need to discuss the best ways to do data validation, that is both practical and accurate.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
comparing to observations in the database

I think that comparing to other observations in the database is helpful, if done properly. To use an example from Tom's DSLR writeup, such a comparison would show Rho Cas DSLR observers that at least one person's observations are afflicted by a major zeropoint offset. This would (hopefully) encourage each observer to double-check his/her analysis before submitting the data. If necessary, the observers in question could contact each other and try to resolve the discrepancy. I think that this technique is particularly appropriate for slow variable stars whose brightness is reasonably constant on timescales of days.

What I'm saying is that if an observer submits data which obviously disagrees with other observers' data, the observer should be reasonably confident that the variation is astrophysical in origin -- and not simply the result of an error in the observer's data analysis.

(That said, I wouldn't recommend this for visual observing because it might bias people's subjective estimates.)

Colin

Affiliation
American Association of Variable Star Observers (AAVSO)
First Steps

Here's the first tidbit I'd like to throw out...

There is a data quality experiment going on as we speak.  A few experienced observers are taking CCD photometry of Arlo Landolt's standards.  We are working with his list of northern stars from 2013AJ....146..131L, which form a 24-hour "ring" at roughly 50 degrees declination.  There are pairs of stars in the list that are close enough to fit on a CCD frame, so that one star can be used as a comparison for the other.  The available pairs have a variety of brightness and color contrasts ranging from easy to difficult.  Our plan is to see how well the different observers can agree with the established magnitudes, and to troubleshoot the deviations.

This project can lead in a couple of different directions: first, it lays the groundwork for a future CHOICE class, where observers could sign up to test and improve their photometry.  Second, we could add a bunch of Arlo Landolt's stars to the AID to serve as test targets.  Observers could measure them at their convenience, and use standard AAVSO tools to process and view the results.  Expert professional photometrists who do high-precision work spend significant time observing standard stars every night in order to keep close watch on quality.  With a year-round selection of standard targets, our own observers could take regular test measurements with no more effort than that needed for their program stars (a corresponding set of targets can easily be chosen for the southern hemisphere).

I believe Landolt's stars are generally too dim to be useful for DSLR observers (I should disclose that I have never worked with DSLRs and have very limited experience with CCDs), but there is a potential source of brighter target stars.  The VSX has entries for quite a number of stars that were once considered variable but are now thought to be constant.  Admittedly, most of these stars are on the dim side, but there are brighter ones, and we are experimenting with some of those as test targets for PEP.  We had originally planned to use these VSX stars for the CCD experiment, but questions arose as to just how constant some were, and it was decided to play it safe with the Landolt targets, at least for now.

The great advantage of following standard stars is that the "right" values for measured magnitudes are already known.  They provide a tool for motivated observers to improve their work.  But, by themselves, the standards don't help sort out problems in data for program stars, nor do they help us reach the observers who lack an inclination to examine their results.  I'll share some thoughts on these points in later posts. 

Tom

PS:  We could use one or two additional observers for the Landolt project.  Contact me directly if interested.

Affiliation
American Association of Variable Star Observers (AAVSO)
Data Quality Campaign / Experiment - Landolt Standards

Tom:

Why not just get ALL of our observers (CCD and DSLR) involved in the "experiment"? I bet virtually every observer would participate in such a campaign in the interest of improving their data. Don't limit it to a small group of "experienced" observers. Find out what a large number of observations from a large population of observers shows. I suspect it will show that individual observer CV offsets may be as large as 0.5 mag and V offsets as large as 0.05 mag. Of course, that is the question. A large database of magnitudes with associated information concerning choice of filters (specific type), comp stars (single versus ensemble), comp color, and transformed vs. untransformed magnitudes would be the best way to see what we can achieve.

We have the chance to do it right. If designed properly, you might even get a sufficient number of observers to generate magnitudes using various procedures (e.g., C vs. V vs. TG filter, single vs. ensemble comps, transformed vs. untransformed) to yield some statistical significance. Why not take the opportunity to try and see what the results indicate?

Ken 

Affiliation
American Association of Variable Star Observers (AAVSO)
baby steps

The experiment is a pathfinder to see what quality is achievable, and we need experienced practitioners to do that.  Just coordinating the work of a handful of observers takes considerable effort - a mass campaign is not in the cards right now.

People in the northern hemisphere who can handle bright targets could try their hands at alpha Coma Berenices.  It is a binary system, but it won't go into eclipse for a very long time and is effectively constant.  The star is already in the AID and it transits around the end of astronomical twilight.  Start a thread and have some fun!

Tom

 

Affiliation
American Association of Variable Star Observers (AAVSO)
DSLRs are just sensors, not per se telescopes

I believe Landolt's stars are generally too dim to be useful for DSLR observers (I should disclose that I have never worked with DSLRs and have very limited experience with CCDs), but there is a potential source of brighter target stars.

I'm a bit puzzled when I read comments like this. It seems to indicate that DSLRs and CCDs are two entirely different types of beasts.

A DSLR, for our purposes here, is essentially a type of sensor; you can put any optics you like in front of it. Actually, many people who are doing DSLR photometry did astrophotography before, using their DSLRs together with telescopes to take images of anything from the moon to nebulae to comets to whatever else is out there, so "DSLR" does not automatically imply "telephoto lens" or otherwise "small aperture". In the past few years, dedicated astronomy cameras (the stuff that is called "CCD" but nowadays is often not actually a CCD but a CMOS sensor, just like most DSLR sensors) have converged quite a bit; e.g., modern DSLR sensors have 14-bit resolution or more, while astro "CCDs" can have chip and pixel dimensions not dissimilar to DSLR sensors, and can have anti-blooming gates (yikes!).

Yes, DSLR sensors have a Bayer filter matrix, which limits their quantum efficiency in any given band, but since you make measurements in 3 bands at the same time (of which you want to use the green and perhaps the blue channel for photometry), while with monochrome CCDs you have to split your observing time among different filters, the effect is actually not so huge that it automatically puts many Landolt standards out of range for DSLR sensors.

CS

HB

 

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Sensor type

In a sense, the distinction is merely this: if you upload your data in the "DSLR" category, then it is DSLR; if you upload in the "CCD" category, it is CCD.  As you point out, with the advent of CMOS sensors, the term CCD is itself obsolete.  From a functional viewpoint, one could draw the distinction based upon whether the color separation takes place in RGB bands inside the sensor, or with photometric filters in front of it.

As I said, I have never done DSLR photometry.  My impression is that it was originally practiced with conventional photographic lenses attached to camera bodies, and the targets were generally brighter than those sought by telescope users.

Whether a camera body is attached to a lens or a telescope for any given observation brings me to another favorite topic - metadata - which I will go into later.

Tom

Affiliation
American Association of Variable Star Observers (AAVSO)
DSLR sensitivity

Yes, Heinz-Bernd is right; there is no significant sensitivity difference between CCDs and the CMOS sensors that equip DSLRs. Today the QE record of CMOS is 95%! No CCD is at this level. The DSLR sensitivity at the pixel level for the green channel has no reason to be different from a CCD with a V filter. The difference is just that you have two pixels out of four, but you have blue and red at the same time (and SAME time is important). Also, the Johnson-Nyquist noise of present CMOS is much lower than what CCD technology has reached (thanks to massively parallel readout instead of serial), but OK, for photometry this is not usually critical; we are shot-noise limited. The issue is just the size of the optics; a number of us are limited because we are using photo lenses or small telescopes. Another limitation is light pollution: most of us here in the EU observe under significant LP, and moving to one of the few dark areas takes a lot of time, not very productive! Not my choice, and I limit my observations to mag 15 using a 200/800 Newton or a photo lens, with an EOS M3, on top of my building in a well-lit city.

I do not personally use Landolt stars, as with my VSF technique I have little need for them, and their position in the sky is not ideal for me. M67 from time to time is enough. But I know people here in France using Landolt standards with a DSLR and an 8" scope, in the suburbs of large cities.

But OK, I am not sure this is the best thing to do to improve data quality. I would prefer to use that time to document the reasons for discrepancies (I think we know them well) and then educate the observers. I don't see such an experiment as useful; we know well what we can achieve when the conditions are OK. The first problem is to get such good sky conditions! We can only hope for them...

Clear Skies !

Roger

Affiliation
Variable Stars South (VSS)
Thanks all...

Thanks all for the respectful and constructive response to my post. Vis '&' V would have been a better name. For my part, I keep a casual eye on the LCG for my own quality control. If I have a discrepant ob I can check with other observers and try to fix the problem, or try and do better. Always try and do better...

Best to all. Alan

Affiliation
Royal Astronomical Society of New Zealand, Variable Star Section (RASNZ-VSS)
Thanks all...

Alan, you really started something with this thread. I like your last comment...
"If I have a discrepant ob I can check with other observers and try to fix the problem, or try and do better."
This is how I work as well. I know there was one time when I recorded V436 Cen as <16.0 when it was around 13.5. I went and checked as soon as the weather allowed, and I was wrong. Mistaken identity, I believe. As you say, always try and do better...
Cheers

Stephen [HSP]

Affiliation
American Association of Variable Star Observers (AAVSO)
Human Factors

The point has been raised, here and elsewhere, that the AAVSO culture may have tipped too far in valuing quantity over quality.  We certainly want to celebrate the productive observers, but some thought should be given to our exclusive focus on observation totals.  I like to think of an analogy from chess: when starting out, you play purely for fun.  But, later, you might join the US Chess Federation.  The USCF will give you a rating that changes as you win or lose against other rated players.  If you're not careful, your chess life can revolve around making your rating go up.  When that happens, what was once a pastime becomes a disease.

Quantity, as a metric, gets distorted in the era of automated observing, particularly in regards to time series work.  Building (and operating) a truly reliable automated measurement system is quite hard, and it is easy to fall into complacency about quality.  For a while, the VPhot system was clogged up trying to plate-solve *blank* frames.  The submitter of the data apparently hadn't checked to see if the roof for his automated observatory had actually opened!  The Chandra satellite operates autonomously, but statistics downlinked with the data are subjected to a human-in-the-loop screening process to look for operational anomalies.  To make a general point, it is important that observers not blindly believe what their computers tell them, particularly in the rush to spin the Observation Odometer on the AAVSO home page.

 

When it comes to sorting out difficulties with data that have been submitted, we run into the challenge of Observer Personalities.  There are contributors who are convinced that their numbers are beyond doubt, and who become defensive about any inquiries in that regard.  A line from Hidden Figures applies here: "No one should have trouble with having their work checked."   We all need to keep open minds about mistakes in our own results, but at the same time, the process of checking needs to be done diplomatically (leave your ego at the door).

There is also a class of observers who refuse to accept any coaching to improve their work.  HQ has plenty of experience with people who simply will not engage in dialog to fix what are clearly problems.  This doesn't seem to be a matter of denying problems, but an unwillingness to change habits.  I have always found it to be baffling that there are people who want to participate in science, but won't work within basic scientific standards.

Finally, there are those who do not ascribe much importance to consistency among observers.  I was once involved in a forum thread - discussing discrepancies - and received a private message from a third party who said, in effect, "Your numbers don't match his - so what?"  This communication was from an experienced observer, and implied an operational model that every observer is his own photometric system, and it was either impossible or unimportant to achieve consistency.

Tom   

Affiliation
American Association of Variable Star Observers (AAVSO)
That is all very true in my

That is all very true in my opinion. I also work in science (software engineer in gravitational wave research) and it bothers me when people put their egos or personal biases above the scientific method. How best to address this, though, is difficult. This is an amateur community, we do this for fun, and any form of negative sanctioning or "policing" would risk taking the fun out of it. So corrective measures should ideally work through self-motivation, rewards, and positive feedback whenever possible.

For most objects, light curves are a wonderful tool for data quality checks. I'd like to come back to the idea I mentioned earlier: present the LCs of observed objects to the involved observers on a regular basis, e.g. monthly, through automatic generation. The effect would be twofold:

a) you are almost "forced" to look at what you have done (good or bad :-) ) during the past 30 days in the context of other observations, and with some mental and time distance to the submission of the data. If your data quality is good (or getting better over time), you might get a better "reward" by looking at a LC than by just seeing your obs number increase. You also get a visual reward  if you made that one observation that helped to better constrain a minimum or maximum, the beginning of an outburst, or one that closed a bigger gap in the data, or that confirmed someone else's observation that was unexpected, etc.... you get the idea.

b) equally important, you would know that all your fellow observers for that object will be looking at your results as well (good or bad!!), as they will be checking their monthly reports.

 

The number of light curves delivered to each observer per year could be another figure of merit for observer performance, in addition to the number of data points submitted. It would indicate how many different objects you cover and how much you care about inspecting your results, neither of which is well reflected by the pure observation count.

I think almost anything that makes people look at light curves helps; it could even have some entertainment/game element. E.g., we could have a feature to display a random light curve (say, write "random" as a star name and hit the LCG button, and you would get a LC from one of the objects that had data submitted on that day). In the new LCG you can find out which of the listed observers submitted a given data point, and yes, there is the "report discrepant data" button... The hope would be that this would implant in the back of observers' minds the notion that whatever they submit, there is a real chance that someone else will actually look at it pretty soon (and will either scratch his/her head when seeing your data point(s) or will enjoy a good quality light curve).

 

I think this is a reasonably concrete idea. Would this work, and would it be realistic or worthwhile in terms of effort and benefit? Any comments welcome.

 

CS

HB

Affiliation
American Association of Variable Star Observers (AAVSO)
Metadata

We need to remember that our data are intended to serve researchers who will come long after us.  To that end, we need to provide auxiliary information that will help future astronomers to evaluate our observations.  

In the case of b Persei, mentioned by Roger above, observations taken under uncooperative skies - having large uncertainties - were still useful.  That's wonderful, but I would argue that the data should have been accompanied by notes that conditions were poor.  Looking at some of those records, I didn't see any such indications.  Without them, it looks like dubious photometry.  Any unusual difficulty in taking data is worthy of notes in the observation comments.

The optical system used, sensor make and model, observer location, reference magnitude sources, and reduction software (including version) seem like a minimal set of specific metadata.  Extinction and transformation coefficients are worthy of consideration.  It is worth repeating that some observations have "chart" information from which the actual reference magnitudes are not traceable.  It is also worth noting that the AAVSO data extraction tools do not always supply the metadata that do exist.

It would be very worthwhile to have a discussion about what metadata should be required.  Keep in mind that metadata can be added by post-processing software, so we needn't depend on immediate buy-in from vendors of reduction software.  Indeed, experimenting on our own is probably a good idea before approaching them.

When raw Chandra data are processed into the form distributed to users, the FITS files contain a complete record of the programs, program parameters, input files, and calibrations used at each step of the reduction, as well as all the parameters of the observation run, itself.  The consumer of the data, whether working today or far in the future, can reconstruct exactly what happened.  We needn't (and can't) provide such detailed metadata, but I argue that we need to beef up our audit trail.

Tom

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Large uncertainties in b Per ? report format ?

Hi Tom, where did you see large uncertainties in b Per?  My own reports for the last b Per campaign have SDs well below 0.005 in general, and similarly for other major observers. Is this what you consider large uncertainties? In my case each point is the average of a short series, and the reported SD is the SD of the mean of that series. Even though we had to observe under difficult conditions, "in between the clouds", I didn't consider it necessary to mention that given the SD level. Anyhow, the PI, Don Collins, was in contact with us and he knew what we were doing.

It's true there are also strange observations from a couple of people in the b Per campaign, as usual; I think it was easy for Don to ignore them.

Then, on that point, there is a problem with the reporting format: in the past we had codes to report the observing conditions, and there is still space for them in the table we get when checking the report (I don't remember having seen anything in it), but I don't know what codes to use or where to input them. The "extended file format" document doesn't mention it.

The type of report is also questionable to me. Reporting the instrumental magnitude in the case of a single comp plus check seems to me an issue; I don't see how it can be used properly since not all the conditions are known. Maybe a reason for some of the discrepancies?

In the case of an ensemble, why not use one of the ensemble stars as the check? The magnitude difference to the target is not affected by the fact that the star is in the ensemble, and that difference is far better correlated than an independent check. It also leaves more stars for the ensemble and improves the scatter; in addition, it gives an increased survey of the ensemble coherence and better detection of anomalies.

I think a deep review of the reporting rules and form is also needed. 

Clear Skies !

Roger

 

Affiliation
American Association of Variable Star Observers (AAVSO)
My mistake

Since the b Per campaign was mentioned as a scenario in which data were sometimes collected under poor conditions, I assumed that there were high-uncertainty magnitudes being reported at times.  There were a few, but they appear to be from CCD observers rather than DSLR.

I consider errors of 25mmag and up to be large, but see my next post.

Affiliation
American Association of Variable Star Observers (AAVSO)
Uncertainties

Part of the value of electronic photometry is in the accompanying errors, or uncertainties, reported with the magnitudes.  Measuring star brightness through the earth's atmosphere is difficult, but I count myself in the camp that claims that the brightness is a well-defined physical property that can, in fact, be measured.  It is, therefore, reasonable to talk about the "true" or "actual" magnitude of a star, and the relation of our measurements to that magnitude.  That relation comes from the error, which estimates chance deviations in the measurement process.

Roughly speaking, if we report a magnitude, m, with a 1 sigma error (say e), that means we believe that the actual magnitude has about a 68% chance of lying in the range m-e to m+e.  68% is not a great percentage, so for my own work I prefer to think about 2 sigma errors, where there is about a 95% chance of lying between m-2e and m+2e.  This is a simplification, but the thrust is that the 2 sigma error bars have a very high probability of encompassing the true quantity being measured.  [real statisticians are welcome to chime in :)]  Thus, if the 1 sigma error is 25 mmag, the total range of the 2 sigma error bars is 100 mmag, which seems, to me, a lot.
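
For concreteness, here is a quick numerical check of those percentages, assuming a purely Gaussian error model (which, as noted above, is a simplification):

from scipy.stats import norm

for k in (1, 2, 3):
    # Probability that the true magnitude lies within +/- k*e of the report.
    print("%d sigma: %.1f%%" % (k, 100 * (2 * norm.cdf(k) - 1)))
# -> 1 sigma: 68.3%, 2 sigma: 95.4%, 3 sigma: 99.7%

e = 0.025                                              # a 1 sigma error of 25 mmag
print("full 2 sigma bar: %.0f mmag" % (4 * e * 1000))  # m-2e to m+2e spans 100 mmag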

For our magnitudes to truly be meaningful, our reported errors must be realistic.  I think that the errors we get in the PEP protocol are, in fact, realistic, at least when modest in size.  You can check out the JAAVSO paper Interobserver Photometric Consistency Using Optec Photometers (upcoming volume).  Based upon what I have heard from practitioners, I think that the errors reported for imaging-based photometry are more problematic: deviations from the true magnitude are dominated by systematic errors, rather than random errors. Trying to sort out this situation is one of the reasons for working with the Landolt standards.

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Statistical vs systematic errors

Roughly speaking, if we report a magnitude, m, with a 1 sigma error (say e), that means we believe that the actual magnitude has about a 68% chance of lying in the range m-e to m+e. 

If this were true, the reported magnitude error for a value derived from differential photometry against a single comparison star could never be smaller than the uncertainty of the comparison star magnitude. But the AAVSO seems to require us to report uncertainties derived from the statistical properties of multiple measurements, or an analytical estimation of the stochastic errors involved, e.g. via the "CCD equation" if you just have a single measurement (see the "Uncertainties" subsection in this chapter of the CCD observer guidelines https://aavso.org/sites/default/files/publications_files/ccd_photometry… ).

I understand the rationale behind this is that data consumers could look up the uncertainty of the comp star and add it to the statistical error. But does this always work in practice? As Tom pointed out in the metadata part of the discussion, there are all kinds of situations where this is not so easy: non-AAVSO photometry charts, or unclear or plain wrong entries for the comp star. Ensemble photometry reports also have no obvious metadata from which to look up systematic uncertainties. The check star value could also hint at the actual total uncertainty, but if it is reported as an instrumental magnitude (for measurements other than ensemble photometry), I agree with Roger that this value is actually not too informative.

In practice, I suspect that for most reported values, the statistical errors are actually way smaller than the systematic errors. I would prefer to have the chance to (optionally) specify an estimate for the systematic error, especially for ensemble photometry, e.g. derived from the observed deviations of the measured magnitudes of the ensemble stars from their catalog values.

Note however that in the second to last paragraph of the section of the CCD observer guide linked above, it is actually mentioned as a "next best option" to report uncertainties derived from measurements of several comparison stars, which would then in fact fold in systematic errors and not just statistical ones. I actually prefer this, but there is no way to specify in the report that you chose this method over reporting purely statistical errors.
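
As a minimal sketch of the ensemble-scatter idea mentioned above (all names are hypothetical, and treating the RMS of the ensemble-star residuals as the systematic term, added in quadrature, is just one defensible convention):

import math

def total_uncertainty(stat_err, ensemble_catalog, ensemble_measured):
    """Combine the statistical error of a target measurement with a
    systematic estimate taken from the ensemble-star residuals."""
    residuals = [m - c for m, c in zip(ensemble_measured, ensemble_catalog)]
    n = len(residuals)
    mean_res = sum(residuals) / n
    # RMS scatter of the ensemble stars about their catalog values.
    sys_err = math.sqrt(sum((r - mean_res) ** 2 for r in residuals) / (n - 1))
    return math.sqrt(stat_err ** 2 + sys_err ** 2)

# Example: 5 mmag statistical error plus roughly 12 mmag ensemble scatter
# gives a total of about 13 mmag.
print(total_uncertainty(0.005,
                        [11.20, 12.05, 12.80, 13.40],
                        [11.21, 12.04, 12.81, 13.39]))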

You can always put stuff in the comment field but for something this important, a standard, structured data entry and retrieval would be best.

CS

HB

Affiliation
American Association of Variable Star Observers (AAVSO)
Vis vs v is easier

While the title of this discussion is a bit off on a tangent, it is true that handling errors is easier for visual than for electronic measurements. I have been submitting them here for about 18 years now, and from the start I have strived to be accurate to 0.1 magnitude, with many observations good to 0.05 magnitude. So, you can always be sure an 'LMK' observation is accurate :)

The main limitation on my visual accuracy is the comp stars provided on the charts. As long as they are at well-spaced brightness intervals, say not exceeding 0.5 magnitude, and have normal sun-like colors, the accuracy should be excellent. I have mentioned before that the rounding of visual comp star magnitudes to the nearest 0.1 magnitude harms the estimate a bit, making +/- 0.05 mag harder to achieve.

So, I think that getting accurate visual photometry is much easier than CCD, because the process inherently involves a human judging each observation. CCD requires a multitude of steps in the data reduction process, and extreme care to avoid the wide range of variables that can affect automatic measurements. Therefore, obviously, the main effort needs to be concentrated on improving the CCD QC validation process.

Affiliation
American Association of Variable Star Observers (AAVSO)
uncertainties

I agree with Tom that more metadata won't hurt, if done properly.  We thought about adding a table of telescope/system information for observers, but decided against it for many reasons - hard to maintain (out of date info is worse than no info), hard to track changes, nearly impossible to enforce for observations coming from other sources such as the BAA or AFOEV.  Someone smarter than I am should be able to find a solution!  However, adding a quality flag, such as done for SDSS or PanSTARRS, might be doable.

However, I don't think metadata in itself improves the quality of the observations.  I think there are two main issues:  how to work with observers to improve their skills so that they submit good data, and how to identify poor data once it gets into the database.  We have always had trouble with this latter issue (validating CCD data) because there is usually little overlap between observers, and CCD observations ARE more accurate in general than visual, so an observation 0.2 mag different between observers could be an error, or could be due to a change in the star.  Many, if not most, CCD observations are of rapidly changing stars.  You can always catch the obvious outlier, either visual or CCD.  And again, unfiltered observations should NOT be part of the discussion, as their calibration/offsets are pretty much impossible to reconcile.

I've recommended to several observers that they should not include the catalog error of the comparison star(s) in their estimate of the uncertainty.  This is because a substantial fraction of the quoted error is systematic, and systematic errors do not add in quadrature.  Ensemble techniques actually help here, as only the random error in the comp stars shows up in the error estimate, if all of the comp stars are coming from the same source (such as APASS).  Including the full error doesn't hurt, but it overestimates the uncertainty of the observation.  Generally, any observation that includes a reasonable error estimate is a scientific measure, and while 10 mmag uncertainties might be a goal, those 50 mmag results are perfectly acceptable, as the researcher can weight them properly.
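
For what it's worth, the standard way a researcher weights measurements that come with honest error estimates is by inverse variance; a minimal sketch (not AAVSO code):

def weighted_mean(mags, errs):
    """Inverse-variance weighted mean of magnitudes with 1-sigma errors."""
    weights = [1.0 / e ** 2 for e in errs]
    mean = sum(w * m for w, m in zip(weights, mags)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, err

# A 10 mmag measurement counts 25x as much as a 50 mmag one,
# but the 50 mmag point still contributes.
print(weighted_mean([9.512, 9.548], [0.010, 0.050]))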

Then there are the discrepancies that defy solution.  Gary Walker had a couple of light curves where I couldn't find an obvious problem, but they had features that no one else was seeing.  My B-band data of b Per through eclipse are 0.2 mag different from others', and I haven't figured out why yet.  I've seen transformation coefficients from observers using Astrodon B filters and SBIG cameras that are surprisingly large, yet we're using similar systems with AAVSOnet with results much closer to the standard system.  What causes such effects?

Tom mentioned issues about uploading problematic images (such as cloud-affected ones) to VPHOT.  You need to remember that many of these images come from robotic systems that are not under the control of the observer, and the first time they see the images is when they show up in VPHOT.  You can't blame the observer under those circumstances, and they likely won't upload photometry from cloudy or badly trailed images to the AID - but they may use VPHOT to help in making that determination.

I'm looking forward to hearing Tom's suggestions regarding how to better validate the AID!

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
adding a quality flag

However, adding a quality flag, such as done for SDSS or PanSTARRS, might be doable.

I would think that adding a new column to the report entries is hard (it would define a whole new format), but adding a new parameter (in the sense of https://www.aavso.org/aavso-visual-file-format ) should be straightforward, like, say

#COMMENTCODE string

analogous to the existing "COMMENTCODE" field in the report format for visual observations. Those flags would then apply to all following data until the next #COMMENTCODE sets new values.

The field in the database is already there and works for CCD/DSLR reports when used with the "manual entry" web form.

Harder to implement would be changes that introduce new fields in the DB. For example, one could think about adding observation-site location data like (say) #LONGOBS, #LATOBS, and #ALTOBS in the same way, mimicking the respective FITS header keywords in common use. (Software like VPHOT should ask for explicit permission before automatically populating those fields from FITS headers, for privacy/safety reasons, I guess.)
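
To make this concrete, the header of a hypothetical Extended Format report might look like the sketch below. The first four lines are, as far as I know, existing Extended Format parameters; everything from #COMMENTCODE down is only the proposal above (the current format does not accept it), and all the values are placeholders:

#TYPE=EXTENDED
#OBSCODE=XXX
#DELIM=,
#OBSTYPE=CCD
#COMMENTCODE=B
#LATOBS=51.0N
#LONGOBS=4.5E
#ALTOBS=20m
(observation records in the usual delimited layout follow here)

The #COMMENTCODE letters would presumably reuse the codes from the visual format and, as described above, would apply to every record until the next #COMMENTCODE line resets them.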

CS HB

Affiliation
American Association of Variable Star Observers (AAVSO)
Example

Metadata, of course, do not improve the quality of an observation, but they help future researchers interpret the data.  Below are the metadata I currently include in the comment section of my own B band observations.  Anyone could add these to their own submissions.

 

|SCOPE=9.25in_SCT|SENSOR=SSP5|LOC=44.1N/121.3W|INDEX=BV|DELTA=0.443|K_B=0.33|KK_BV=-0.031|TB_BV=0.012|CREFMAG=7.163|PROG=TJC_PEP_5.0|COMMENT=NOCHECKSTAR; |

It is a series of keyword/value pairs...

SCOPE  the telescope used (I use more than one)

SENSOR  the photometer I used (I use more than one)

LOC   latitude and longitude (I observe in more than one place)

INDEX  the two-color index used for applying transform

DELTA  the measured color delta between variable and comparison

K_B  first order extinction coefficient

KK_BV  second order extinction coefficient

TB_BV  transformation coefficient

CREFMAG  comparison reference magnitude

PROG  My reduction program
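
For what it's worth, a comment string in that form is trivial for a researcher to parse mechanically. A minimal Python sketch (the keywords are just the ones listed above, and the pipe-delimited KEY=VALUE convention is taken from my example string; none of this is an official AAVSO format):

# Parse a pipe-delimited KEY=VALUE metadata string like the one above.
def parse_metadata(comment):
    fields = {}
    for chunk in comment.split('|'):
        chunk = chunk.strip().rstrip(';')
        if '=' in chunk:
            key, value = chunk.split('=', 1)
            fields[key.strip()] = value.strip()
    return fields

meta = parse_metadata(
    "|SCOPE=9.25in_SCT|SENSOR=SSP5|LOC=44.1N/121.3W|INDEX=BV|DELTA=0.443"
    "|K_B=0.33|KK_BV=-0.031|TB_BV=0.012|CREFMAG=7.163|PROG=TJC_PEP_5.0"
    "|COMMENT=NOCHECKSTAR; |")
print(meta["KK_BV"])   # -> '-0.031'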

 

Note that the AID record structure was, in fact, expanded not long ago.

Affiliation
American Association of Variable Star Observers (AAVSO)
Large uncertainties

Arne:

If an observer is getting 50mmag 1-sigma uncertainties, it seems to me that it indicates some kind of problem: working on targets that are too faint, working at too low an altitude, working under very adverse sky conditions, or a systematic fault.  For certain campaigns, perhaps these can be tolerated, but we certainly want people to be doing better.  The reported uncertainty is a diagnostic, and it would be helpful if people recognized that high values indicate a need for scrutiny.  I think a lot of our persistent quality problems can be traced to a lack of awareness among the observers.  We live in an era of point-and-click astronomy, but real instrumentation is finicky and needs careful monitoring.

Last year, I had a big headache of my own in B band.  Data for most of my stars came out beautifully, but two targets had discrepancies that I still can't explain.  The price of photometry is eternal vigilance!

Affiliation
American Association of Variable Star Observers (AAVSO)
>= 50 mMag uncertainties,

>= 50 mmag uncertainties, properly reported with all sources of error included, can also arise from the very poor comp stars found on numerous CCD charts, as well as from charts whose comp stars do not cover the variable star's full magnitude range over its cycle, forcing the observer to overexpose or underexpose the target or the comps.

It's not just the observers or their rigs: numerous existing CCD charts need improvement, or their reported target uncertainties will not--indeed cannot--get better.

Affiliation
American Association of Variable Star Observers (AAVSO)
extended format

Tom, you wrote that the AID structure was extended not too long ago.  I don't remember the details exactly, but I think no new fields were added; perhaps some were increased in size.  You may know more than I do.  The main point is that the submission method, the "Extended Format", has not changed since its inception.  It is certainly possible to extend this input structure, or to add new fields to the AID, but both of those require many modifications throughout the AAVSO database and tools, so you want to make any changes very carefully.

I think that is why most of the metadata supplied by the users - whether your complete system description, or the output from TA - has been placed in the "comment" field.  Developing a set of keywords that can be searched is great, but these need to be standardized.  I notice your metadata does not contain anything about the observing conditions, and for CCD work, many more important values would need to be included (such as the exposure time or number of stacked frames), to help the researcher.  Again, few users would do this unless some easy mechanism for placing such information into the webobs submission were available from the software vendors (or, perhaps, by using an ancillary tool).

We can agree to disagree!  There are many circumstances in which lower signal-to-noise observations are taken or are desired, such as measuring a CV near quiescence (hard to get SNR=100 at 19th magnitude) or when a rapid cadence is necessary to resolve light curve features.  Again, as long as the uncertainty is properly recorded, observations with very poor signal/noise are scientifically useful.  Everyone should try to get the best data that they can, but it isn't always possible, and a blanket "we won't accept measures with 100mmag uncertainties" would not be the right path to take, IMHO.
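
For context, the usual rule of thumb is sigma(mag) ~ 1.0857/SNR, so SNR=100 corresponds to about 0.011 mag, while a 0.05 mag (50 mmag) uncertainty corresponds to an SNR of only about 22 - a level you can easily be stuck with on a 19th magnitude CV in a modest telescope.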

Again, if people are complaining about the data quality in the AID, then we need techniques to (a) find the bad data, and (b) fix or delete the bad data.   The amount of validation that currently takes place is very minimal, because of limited staff, so any ideas as to how to improve this are very much welcomed!

On a slightly different topic, I'll throw out a suggestion:  why not eliminate the CCD observation total awards?  We increased the spacing compared to visual observations to account for some of the difference between visual and CCD observing (like time series), but maybe it would be better to eliminate the CCD awards completely.  Then maybe give out special awards, such as high accuracy, important campaign observations, long term monitoring of targets, whatever.  Aaron Price came up with the "CCD Points" concept back a couple of decades (!) ago, which I thought was a pretty neat way of acknowledging the CCD observer's contributions without dwelling on totals.  I might have the algorithm he used stored away somewhere.  I have 46K observations (almost all single measures, and not time series) in the database right now, and I personally don't care about getting another award.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
reductions

I promise to get to my ideas about improving quality :), but I want to bring up two points regarding data reduction.  First, ensemble reduction appears to be in wide use.  Do we have actual evidence that it is producing better photometry?  Are its theoretical advantages actually borne out in our own practice?  A lot of ensemble photometry is submitted without transformation.  To what extent is the benefit of ensemble reduction undercut by not applying this systematic correction?

Second is the issue of second-order extinction in B band.  I have heard two misunderstandings about this: that it is of no significance, or that it is eliminated near the meridian.  In any event, there seem to be very few observers who apply corrections for it.  The correction can be approximated as:

airmass * 2nd_order_coefficient * d(B-V)

Unless the color contrast between target and comparison is zero, the above quantity is never zero.  The coefficient nominally lies in the range of -0.02 to -0.04, but I have measured values as large as -0.053 in my own system.  Even with airmass and contrast that are not extreme, the correction can be significant - on par with the 2-sigma error of the observation.
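
To put numbers on it (made-up but typical values): at airmass 1.5, with a coefficient of -0.04 and a color contrast d(B-V) = 0.5, the correction is 1.5 * (-0.04) * 0.5 = -0.03 mag, i.e. 30 mmag - comparable to or larger than the total uncertainty many observers report.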

Tom

 

Affiliation
American Association of Variable Star Observers (AAVSO)
reductions

Hi Tom,

The usual theoretical reason to use an ensemble is that it increases the effective signal/noise, since you are using the flux gathered from multiple comparison stars in the differential photometry.  Kent Honeycutt's paper (1992PASP..104..435H) does an excellent job of describing the theoretically proper way to perform ensemble photometry.  There are several places where an ensemble helps in the empirical world.  All of the comps might be fainter than the variable, so you need multiple comps to improve the signal/noise.  Comps scattered over a frame help in understanding the effect of any non-uniform clouds.  Averaging the results from multiple comps yields a "master comp" that has the mean color of all of the comps, and the uncertainty estimate includes most sources of error, including the Poisson noise of all stars, the random catalog error, transformation, second-order extinction, etc.  That said, I don't know of anyone who has done a formal test to see whether an ensemble helps as much as it theoretically should.  Most exoplanet observers use a single bright comp star, as most of the weight in an ensemble comes from the brightest stars, and there are some subtle effects from flatfielding, scintillation, etc. that come into play when you are in the millimag regime.
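
For concreteness, here is a bare-bones sketch of the "master comp" idea in Python. The fluxes and catalog magnitudes are invented, and Honeycutt's full inhomogeneous-ensemble solution does considerably more than this (it solves for the exposure-to-exposure zero points and the stellar magnitudes simultaneously):

import math

# Instrumental fluxes (ADU) and catalog V magnitudes for three comp stars,
# plus the target's flux, all measured on the same frame. Invented numbers.
comp_flux   = [52000.0, 31000.0, 9500.0]
comp_mag    = [11.21, 11.78, 13.05]
target_flux = 24000.0

# Each comp gives its own estimate of the frame zero point:
#   zp_i = catalog_mag_i + 2.5*log10(flux_i)
zeropoints = [m + 2.5 * math.log10(f) for m, f in zip(comp_mag, comp_flux)]

# Weight by flux (roughly inverse-variance for Poisson-limited comps);
# the brightest comps dominate, as noted above.
weights = comp_flux
zp = sum(w * z for w, z in zip(weights, zeropoints)) / sum(weights)

# Target magnitude against the ensemble "master comp".
target_mag = zp - 2.5 * math.log10(target_flux)

# The scatter of the per-comp zero points is one handle on frame quality:
# clouds, flatfield problems, and bad catalog values all show up here.
mean_zp = sum(zeropoints) / len(zeropoints)
scatter = math.sqrt(sum((z - mean_zp) ** 2 for z in zeropoints)
                    / (len(zeropoints) - 1))

print(f"V ~ {target_mag:.3f}, zero-point scatter {scatter:.3f} mag")

Flux weighting here is only a stand-in for proper inverse-variance weights, which is part of why a formal test against the full Honeycutt-style solution would be worthwhile.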

The best test in my mind is one that I've tried to get professionals to help with.  I've wanted a suite of test images produced by the data pipeline folks for, say, LSST.  These are theoretical views of the sky, but with every effect known to them modeled and included.  The value of this is that you know exactly the brightness of every source.  Doing photometry on these images would then test the accuracy of the aperture photometry from various software packages, and could also be used to test things like the value of ensemble photometry.

Certainly transformation is important to include.  However, with an ensemble, what you get is a master star with a mean color, and the transformation errors from the individual ensemble stars are then folded into the uncertainty estimate.  So there remains a systematic offset due to transformation between that mean color and the color of the target star, as well as any systematic offset of the comparison-star catalog values with respect to the standard system.  The more corrections you can apply, the better the results.

As for second order extinction, there was a private email discussion between us in December, as well as a forum thread in December/January.  As you mention, second order is important for the B filter, if the color of the target star is different than the color of the comparison star, and as you go to higher airmass.  I think TA will apply second order corrections if you supply the coefficients.   Obtaining the second order coefficients has traditionally been difficult, and usually requires observing a red-blue pair over a wide range of airmass.  Most of the "power" in the fit is at high airmass, so this derivation works best if you have a clear eastern or western horizon, and follow the star pair to, say, 20 degrees altitude or lower.  That said, CCD observers should have it easier, as you could follow a standard field like M67 over the wide airmass range, and have dozens of stars that can be used in the determination.  CCD observers might even be able to calculate the coefficient in marginal weather, since they can measure things differentially within the field of view.  I haven't seen anyone write such a program, though.
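
As a sketch of what such a derivation might look like for one red-blue pair, here is a short Python/numpy fit. The data points are invented, and using the pair's standard Delta(B-V) in place of the instrumental color difference is an approximation; this is not an existing AAVSO tool:

import numpy as np

# Differential instrumental magnitudes, delta_b = b(red) - b(blue), for one
# red-blue pair followed over a wide airmass range (invented data).
airmass = np.array([1.05, 1.20, 1.45, 1.80, 2.30, 2.90])
delta_b = np.array([0.842, 0.836, 0.827, 0.815, 0.799, 0.778])

# For a pair measured differentially, first-order extinction cancels, leaving
#   delta_b(X) ~ delta_b0 + k''_b * delta(B-V) * X,
# so the slope of delta_b against airmass is k''_b * delta(B-V).
slope, intercept = np.polyfit(airmass, delta_b, 1)

delta_BV = 1.10   # standard color difference of the pair, from a catalog
k2 = slope / delta_BV

print(f"slope = {slope:+.4f} mag/airmass, k''_b = {k2:+.4f}")

Extending the same fit to dozens of stars in a standard field like M67 would be the program described above.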

Good points, Tom!

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
What to do?

I appreciate everyone's forbearance at my extended ravings on problems - I'll try to turn to solutions :)  Broadly, I classify them as education, training, and feedback.  What I can't offer, however, is a means to clean bad data out of the archive.  Other than looking at light curves for closely-followed stars, I don't see how to reliably spot the problems without a lot of detective work.  Prevention is the only efficient treatment.

Let me start with education.  I think we can do a better job of educating new observers about the travails of our trade.  The CCD manual very cogently points out that good photometry is achieved by hard work, and the more work, the better the results.  I think that we should lay out this story more fully, so that people coming on board have a realistic sense of the challenges.  It could be part of a "Welcome to Photometry" guide, separate from the CCD and DSLR manuals, and more conversational in its approach.  I tried working along those lines when I composed the new PEP manual (https://aavso.org/pep-observers-guide).  When I started observing with the AAVSO, I really had no awareness of the issues I have been discussing in this thread - I had to stumble through them all myself.  Even the comparatively approachable photometry guidebooks are technical enough to intimidate those without a science or engineering background.  A beginner's introduction oriented towards the AAVSO community would be a big help.  At the very least, I think we should draw up a "best practices" quick reference to help people remember the dos and don'ts.  The more we can raise awareness about quality (and photometry in general), the better the results we can expect in data submissions and observer discussions.

Tom

 

 

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Some observers not following processes?

Having recently done both the CCD photometry and VPHOT courses I would like to point out that the instructors involved (Ed Wiley and Ken Menzies) stressed the necessity of checking measurements against those already in the database to spot any significant mistakes (e.g. say magnitude measurements ~ 8 for a pulsating star that never gets brighter than 11) before submitting results.

I note that where data for the star does not exist in the AAVSO database, a simple check in SIMBAD or one of the relevant catalogues would help spot any major discrepancy. It is simply good practice in science to check that measurements are at least 'in the ballpark'.
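
Even a very crude automated version of that check would catch the worst cases. A hypothetical Python sketch (the magnitude range would be looked up by hand in VSX or SIMBAD; I am not assuming any particular API here):

# Flag a new measurement that falls well outside the star's published range.
# Magnitudes: larger number = fainter, so range_min_mag is the bright limit.
def ballpark_check(new_mag, range_max_mag, range_min_mag, tolerance=1.0):
    too_bright = new_mag < range_min_mag - tolerance
    too_faint  = new_mag > range_max_mag + tolerance
    return not (too_bright or too_faint)

# A pulsating star catalogued as varying between magnitude 11.0 and 14.5:
print(ballpark_check(8.0, 14.5, 11.0))    # False -> worth a second look
print(ballpark_check(12.3, 14.5, 11.0))   # True  -> plausible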

Perhaps we could introduce a line into the upload process which asks whether the observer has done this 'macro' test against 'known' data before uploading. To proceed with the upload, the observer would have to tick a box. If it is then found that a particular observer is ticking without checking, simply require them to do a 'refresher' before they can upload data again.

Such 'refresher' courses could be automated online courses like the OH&S and other courses routinely done in modern workplaces. Perhaps we could eventually move to a system of requiring all those wishing to submit observations to do a basic 5-10 minute online course as a means of getting accreditation to submit data?

I fear that for some observers it may have become a game to see how many observations/measurements they can log?

Tex

Affiliation
American Association of Variable Star Observers (AAVSO)
Training...

...or perhaps a better term would be Continuing Education.  We have the CCD school, Mentor program, and, as Terry notes, the VPhot class to get people up and running.  We need additional resources to help over the long term of an observer's career.  I see "test targets" as key here.  I have alluded to an experiment in progress with the Landolt standards.  Once we get a handle on how to work through the problems that crop up there, I would like to see AAVSO establish a set of standard targets (with standardized comparisons) that are known to VSX/AID/VSP, etc.  This would allow anyone to collect, reduce, and share photometry for stars having little or no variation.  While the Landolt fields are good for those who can work near V=11, we can use "constant" stars from VSX for those who need brighter targets, provided we do adequate vetting of the constancy. 

In conjunction with targets, we would need to recruit some designated experts who could provide commentary on the results and help diagnose difficulties.  Working through questions and problems needs to happen in an orderly way.  While we need to get the collective troubleshooting wisdom disseminated, I'm not sure that Forum threads are the way to go.  A new CHOICE course on observing the targets, if offered regularly, would work for people who feel they need a general tune-up, while something more along the lines of a Dear Abby column could address specific problems that need attention.

Tom

Affiliation
American Association of Variable Star Observers (AAVSO)
Comparing to others

Comparing your observations to those of others is not a surefire solution. For this to be reasonably successful, you need a sufficiently large number of different observers submitting data at similar times. Only then will the various systematic errors of the individual observers tend to "average" out, in a manner similar to random errors. By contrast, a large number of observations from just one or a few observers is not going to eliminate the systematic errors, which seem to be the dominant ones, so it won't be a valid comparison.

Averaging the measurements of a large number of observers is a statistical method that has been validated in the past for LPV observations as a way to obtain a high degree of accuracy.

Lacking sufficient numbers of CCD observers for comparison, it's a much more difficult task to validate any individual observer, because you then need absolute, accurate, precise standards for each variable-star field.
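
A toy simulation of that point, with invented error sizes and assuming each observer's personal offset scatters around zero (which is exactly the assumption the averaging method relies on):

import random

random.seed(1)
true_mag   = 10.00
sys_spread = 0.08    # per-observer systematic offset, 1-sigma (assumed)
rand_err   = 0.05    # per-observation random error, 1-sigma (assumed)

def mean_of_many_observers(n_observers, obs_each=20):
    # Each observer carries a fixed personal offset plus random noise per obs.
    estimates = []
    for _ in range(n_observers):
        offset = random.gauss(0.0, sys_spread)
        obs = [true_mag + offset + random.gauss(0.0, rand_err)
               for _ in range(obs_each)]
        estimates.append(sum(obs) / len(obs))
    return sum(estimates) / len(estimates)

# One observer averaging many observations keeps their systematic offset;
# averaging many observers beats the offsets down roughly as 1/sqrt(N).
print(f"1 observer:   {mean_of_many_observers(1):.3f}")
print(f"25 observers: {mean_of_many_observers(25):.3f}")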

It's also somewhat dangerous to assume a variable will NEVER exceed its previously known range!

Mike 

Affiliation
American Association of Variable Star Observers (AAVSO)
Either way...

It's also somewhat dangerous to assume a variable will NEVER exceed its previously known range!

 

That's a good point, I think. Would it be a fair statement, though, that in the event of a (very) unusual report, both the reporting observer and perhaps someone from a "volunteer expert" group (the data quality shift, if you want... or the data quality scouts... or whatever you want to call them) should get a heads-up?  Because either something is wrong with the measurement OR the star is doing something unexpected/interesting, and either way this deserves some attention.

CS

HB