Vis v. V

Affiliation
Variable Stars South (VSS)
Thu, 05/25/2017 - 05:54

Hi all

Whenever I (regularly) write or talk about the AAVSO and VSOing I say that visual and CCD observers complement rather than compete. Now, I'm the first to admit my eyes aren't the sharpest tool in the shed, but I've seen some absolute shockers of supposed Johnson V measures. The trouble is that people might believe some of these!

V quality control is for the V people. I'm merely a concerned vis onlooker.

Best to all.

Affiliation
American Association of Variable Star Observers (AAVSO)
Feedback...

Let's think about some tools that could engage with observers about data quality.  We need to keep two points in mind: there is very little manpower available for software work right now, and any infrastructure we add must be very robust.

I think any kind of clever analysis in WebObs is impractical, but I believe WebObs should be updated to screen out observations that lack airmass, that lack uncertainties, or that have uncertainties of 0.  I see this as useful not only for the integrity of individual data records but also to establish that AAVSO does, in fact, have standards.  Right now, people are free to throw almost anything into WebObs, and we need to cultivate a new mindset.  WebObs could be set up with a banner that says, "Effective <some date>, WebObs will require valid airmass and uncertainty fields" to get the word out, and then enforcement could start on that date.  Only minimal software changes would be needed to implement this, and the tests do not require access to any information not already present in the data submission.
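
A minimal sketch of the sort of screening check I have in mind (the field names here are placeholders, not the actual WebObs schema):

```python
def screen_observation(record):
    """Reject submissions that lack airmass or a usable uncertainty.

    `record` is assumed to be a dict parsed from a submitted report;
    the 'airmass' and 'uncertainty' keys are illustrative names only.
    """
    problems = []

    airmass = record.get("airmass")
    if airmass is None or str(airmass).strip().lower() in ("", "na"):
        problems.append("missing airmass")

    try:
        if float(record.get("uncertainty")) <= 0.0:
            problems.append("uncertainty must be greater than 0")
    except (TypeError, ValueError):
        problems.append("missing or non-numeric uncertainty")

    return problems  # an empty list means the record passes
```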

The light curve generator seems like our best tool for engaging users about quality; the question is how best to use it.  If we have people examine light curves as part of WebObs submissions, we will need to add dialog functionality that doesn't presently exist, and leading the user through curves for every star in the submission could be tedious.  That being said, I think it would be worthwhile (if easy) to throw up a light curve for just one star in the submission, with the submitter's data points highlighted, in a window the user would need to dismiss in order to see and hit the WebObs "Submit" button.

Heinz-Bernd came up with a very good idea of putting light curves in regular email reports to observers.  This has merit on two important grounds: 1) by aggregating data into monthly (or perhaps bi-weekly) blocks, we put less of a load on people than would be incurred by making them "screen" every data point individually (even observers who are submitting large amounts of data will likely be following a fixed list of stars).  2)  These reports can be generated by an auxiliary piece of software, rather than by the existing processing programs.  That means the report generation could be turned on and off at will, and any problems encountered would not disrupt normal operations. It would be nice to "close the loop" and somehow make people confirm that they have examined the reports, but I see that as something for the future.
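
To make the report idea concrete, here is a rough sketch of how such an auxiliary generator might be structured; `fetch_observations()` and `send_report()` are hypothetical stand-ins for whatever database and mail interfaces HQ actually uses:

```python
import datetime
import matplotlib.pyplot as plt

def monthly_report(observer_code, fetch_observations, send_report):
    """Build one light-curve figure per star the observer reported recently."""
    end = datetime.date.today()
    start = end - datetime.timedelta(days=30)
    all_obs = fetch_observations(start, end)       # every observation in the window
    mine = [o for o in all_obs if o["obscode"] == observer_code]

    figures = []
    for star in sorted({o["star"] for o in mine}):
        everyone = [o for o in all_obs if o["star"] == star]
        own = [o for o in mine if o["star"] == star]

        fig, ax = plt.subplots()
        ax.scatter([o["jd"] for o in everyone], [o["mag"] for o in everyone],
                   s=8, color="gray", label="all observers")
        ax.scatter([o["jd"] for o in own], [o["mag"] for o in own],
                   s=25, color="red", label=observer_code)
        ax.invert_yaxis()                          # brighter is up
        ax.set_xlabel("JD")
        ax.set_ylabel("magnitude")
        ax.set_title(star)
        ax.legend()
        figures.append(fig)

    send_report(observer_code, figures)
```

Because it would be a standalone program, it could be turned on, tuned, or switched off without touching the submission pipeline.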

A variation on this report scheme could be used at HQ.  Reports based upon a configurable list of stars (or observers) could be generated on a weekly basis for staff who perform quality control.  I'm sure this would make the existing screening process more efficient, by giving the staff a "quick look" without manually generating light curves.

Tom

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Final Thoughts

At various times, people have floated the idea of "ratings" for observers.  I'm not keen on this - we want everyone to raise their game to suit their equipment and location.  We should strive for improvement, not segregation.  A rating system also implies the existence of an objective photometry metric, which would be hard to establish in an automated fashion (even ignoring the fact that seasoned observers can get inexplicable results).  Something along the lines of what Heinz-Bernd suggested might be doable: if observers are regularly given reports with light curves including their data, and we engage them in an online dialog in which they confirm their data to be reliable, we could "honor" those who achieve some fraction, say 90%, of confirmation.  This would require some significant infrastructure work, but, in the end, might be an excellent motivator.  Another possibility arises if our experiment with test targets proves successful.  We could establish a year-round suite of targets, and honor those who regularly report data for them.  We could even establish a "sign-up" system, where participants could be regularly reminded when they are due for a test check-up.

In the past, the philosophy has been that we accept data from anybody, even those who are doing badly and refuse coaching.  I'd like to suggest we revisit that question.  People who fall into this category are wasting their own time, and wasting the time of the researcher who must paw through flawed photometry.  It is not fair to either party.  This brings up the larger question of how to apply backpressure.  We could actually lock people out - and that might be necessary for chronically bad observers - but there is another option.  People can ignore entreaties delivered by email, snail mail, and phone, but if they want to keep submitting data, they must deal with WebObs.  If we determine that an observer is a problem, either due to poor participation in some kind of quality-review scheme, as above, or specific problems noted by staff, we could introduce some hurdles in WebObs (extra dialog, for instance) that would serve as an annoyance to the user.  With enough pestering, people would eventually give in and cooperate (kind of like surrendering to robocalls from your dentist).

I'll wrap up with the reminder that the archive is our treasure chest, which has no equal.  If we think 25, 50, and 100 years into the future, what do we want its contents to look like?  The less noise it contains, the greater will be its value.  As researchers are able to summon and analyze very large amounts of data at a single stroke, the uniformity of the data becomes more important.  The astronomer in 2100 may want to simultaneously study curves of two dozen Mira stars, not just single examples.  Good work on our part will help make that possible.

I will now descend from my soapbox :)

Tom

 

Affiliation
None
Missing from the discussion: quality of DSLR comps

I've been following this thread with great interest, and thank all of you for helping me educate myself.  I am a novice DSLR observer (fewer than 150 observations) using modest equipment (71 mm ZenithStar with Canon 3Ti) in a light polluted environment, and have been struggling, largely unsuccessfully, to achieve even 5 mmag sigmas on my observations.  I did the DSLR course, and try to do everything by the book, including using 5 comp stars for every measurement.

The trouble is, it is very difficult to find 5 comparison stars with decently precise magnitude values in the AAVSO material.  I am not saying this to criticize the Sequence team, but rather as a matter of fact.  I notice, for instance, that in many cases Tycho/Hipparcos data of high precision are available but not used.  Inasmuch as anyone can access that data, either via seqplot or a database such as SIMBAD, it would make sense to me to facilitate use of these stars by allowing us to report Vt, Bt measurements rather than Johnson V, B.  If, as is claimed, they are almost the same (which I most sincerely doubt), it should be "no problem" for HQ to transform our Vt, Bt DSLR measurements into Johnson or Sloan or ...

Another thing that would be extremely useful for folks like me would be a list of variable types suitable for DSLR measurements.  I have already fallen into the trap of measuring, then having to rescind, measurements on stars that have strong spectral lines in the G bandpass that are not in the Johnson V bandpass.  Might I add that just because a star changes color does not render it inappropriate for DSLR measurements; RR Lyr, for example.

CS!

Affiliation
American Association of Variable Star Observers (AAVSO)
CCD Points

As I mentioned earlier, Aaron Price had come up with a different scheme for rewarding CCD observers, and used it in his "CCD Views" newsletter.  I've taken the liberty of modifying his original with changes he made later, and adding a couple of my own ideas.  The basic concept was to move away from giving awards purely on the basis of the number of submitted observations, and instead honor those who went beyond raw counts and made arguably more important observations.

Formula:

- 1 point for every observation
- -0.75 point for every observation in a time series
- 10 points for every unique star observed
- 0.5 point for every observation made within 5 days of the full moon
- 1 point for every B, R, or I observation
- 0.25 point for every observation of mag 13-15
- 0.50 point for every observation of mag 15-17
- 1 point for every observation of mag 17-19
- 2 points for every observation fainter than mag 19
- 2 points for every star on the legacy list
- 4 points for every observation of an Alert Notice object
- -0.5 point for every unfiltered observation
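
Purely as an illustration of the concept (the field names below are made up, and the weights are just the draft values above), the scoring could be computed along these lines:

```python
def score_observation(obs):
    """Score one observation under the draft point scheme above.

    `obs` is assumed to be a dict with keys like 'mag', 'band',
    'in_time_series', 'new_star', 'days_from_full_moon', 'legacy',
    'alert_notice' and 'filtered'; these names are illustrative only.
    """
    points = 1.0                                    # every observation
    if obs.get("in_time_series"):
        points -= 0.75                              # time-series discount
    if obs.get("new_star"):
        points += 10.0                              # first observation of a unique star
    if abs(obs.get("days_from_full_moon", 99)) <= 5:
        points += 0.5
    if obs.get("band") in ("B", "R", "I"):
        points += 1.0

    mag = obs.get("mag", 0.0)
    if 13 <= mag < 15:
        points += 0.25
    elif 15 <= mag < 17:
        points += 0.50
    elif 17 <= mag < 19:
        points += 1.0
    elif mag >= 19:
        points += 2.0

    if obs.get("legacy"):
        points += 2.0
    if obs.get("alert_notice"):
        points += 4.0
    if not obs.get("filtered", True):
        points -= 0.5                               # unfiltered penalty
    return points
```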

The number of points and their weight could certainly be adjusted.  This was just a method to emphasize community value rather than quantity.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Accuracy

I recommend again that the title of this thread be changed from "Vis v. V" to something in line with the topic, e.g. "Quality control", "Improving accuracy", "DB validation" or such... The discussion has little to do with visual vs. CCD, other than the initial observation that both have surprisingly similar error in actual use.

Secondly, Aaron's/Arne's point system really fails to address the accuracy issue at all. A lot of points should be given out for high-accuracy observations, though we still have not determined any effective way to assess that accuracy!

Thirdly, giving 0.75 point for EACH observation in a time series is grossly unfair to visual observers, who are practically incapable of doing time series! I would think a more reasonable way to reward a time series is to assign an extra few points for the entire series, regardless of how many observations it contains, since the process is an automatic one with no user involvement once it begins.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
point system

Hi Mike,

There is a minus sign in front of the 0.75 for time series. :-)  That said, the point system only addresses the issue of creating a system that de-emphasizes quantity in favor of usefulness.  I agree that quality is not covered by this, as I could not see an easy way to determine quality in a general fashion.  Others might have ideas regarding that aspect!  Your idea of giving some number of points for a time series as a fixed quantity, rather than basing it on the total number of points in the series, has some merit.  Again, this is a concept proposal, to be used for discussion, not anything that I'd expect HQ to adopt as-is.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
The idea of a point system is

The idea of a point system is very interesting. I work for an institute that uses the BOINC software framework to harvest CPU time through distributed volunteer computing. The volunteers pay real $s and EURs for their electrical bills and for upgrades to their computing gear (sometimes substantial sums), and are basically rewarded only by being awarded "points" (and paper certificates if they happen to participate in a discovery, in this case finding new pulsars). And this works in an amazing way!!! People form teams and have races against each other in friendly competition, they celebrate reaching milestones of getting more than 10^N points, there are even third-party websites to aggregate those points across various BOINC projects and create personalized statistics and historical plots of point growth over time and make projections of future growth and ranking positions and whatnot.... The points are also displayed under their avatar in forum messages, and some projects award virtual "badges" for certain milestones, also displayed in forum messages.

I think all this demonstrates that our brain's reward system is easily tricked and rewards can be very abstract and still work brilliantly and in a constructive way. Points, achievement decal stickers you can put on your telescope (killmark style :-) ), campaign badges ... whatever. 

One could also think about rewarding points for other contributions, like reporting discrepant observations, filing valid bug reports for charts or software, attending meetings, courses, etc etc.

It does feel a bit silly at first (yes, being part of the science should be motivation enough in theory...), and I used to be very sceptical about this kind of imaginary reward currency, but it sure does work to motivate lots of volunteers.  

CS

HB

Affiliation
American Association of Variable Star Observers (AAVSO)
Feedback

 

VStar has some nice filter routines. Why not just use a similar routine to filter out what the user doesn't want? Make it easy for the researcher to take the good stuff.

A user could filter on airmass < 1.5, err < 0.012, err not equal to 0, and exposure > 30 seconds, with a reported error required. That way he gets all the CCD and PEP stuff.

Such a search on the entire database might reduce it from 30 million to 1 million.

One could search by band and date as well, so that visual observations from before the electronic era could be retrieved.
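
For example, here is a minimal sketch of such a filter, assuming the archive could be pulled into a pandas DataFrame with columns roughly like these (the column names are hypothetical):

```python
import pandas as pd

def good_ccd_pep(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only records a researcher might consider 'good' CCD/PEP data.

    Column names ('airmass', 'err', 'exposure', 'band', 'jd') are
    illustrative; the real archive schema may differ.
    """
    mask = (
        df["airmass"].notna() & (df["airmass"] < 1.5)
        & df["err"].notna() & (df["err"] > 0) & (df["err"] < 0.012)
        & (df["exposure"] > 30)
    )
    return df[mask]

# Band and date cuts would just be more mask terms, e.g. visual
# observations before some cutoff Julian Date:
#   df[(df["band"] == "Vis") & (df["jd"] < cutoff_jd)]
```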

Ray

Affiliation
American Association of Variable Star Observers (AAVSO)
"Blind" testing

Here is another idea. How about the AAVSO creating an internal list of "secret", high-precision-photometry constant stars throughout the sky? Every CCD observer, prior to doing their nightly runs, would be required to log in and obtain one or two of these "secret" stars near all the fields of their observations planned for the night. They would not be allowed to submit observations unless they included photometry of each of these "secret" stars along with their data.

This would allow an automated program at HQ to immediately assign an accuracy value to each submission, by comparing the actual "secret" star photometry to the observer's measurement of it. The strong assumption would be that any measurements made closely in time and position to their actual variable star observations - same sky conditions, equipment, methods, etc. - would likely closely represent their actual systematic error.

Of course, the observers would know which stars the "secret" stars are, by their positions, but they would NOT know the precise photometry of those stars, which would make it difficult to "fudge" their data.

EDIT: It may be sufficient to have observers just measure a few of these "secret" stars at the beginning and end of their nightly runs, in the general area of the sky they are observing. This would greatly reduce the number of precision stars needed, and reduce the burden on the observer, instead of having to measure "secret" constant stars in each and every field.
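
Just to illustrate how simple the HQ-side comparison could be (the data structures here are hypothetical, not any real HQ interface):

```python
import statistics

def submission_accuracy(secret_measurements, secret_catalog):
    """Compare an observer's 'secret' star measurements with reference values.

    `secret_measurements` maps star name -> observer's magnitude, and
    `secret_catalog` maps star name -> internally held reference magnitude;
    both are illustrative structures only.
    """
    residuals = [measured - secret_catalog[name]
                 for name, measured in secret_measurements.items()
                 if name in secret_catalog]
    if not residuals:
        return None
    return {
        "zeropoint_offset": statistics.mean(residuals),   # systematic error
        "scatter": statistics.pstdev(residuals),          # star-to-star spread
    }
```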

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
I think I'm not getting the

I think I'm not getting the idea. How can a star's photometry be kept "secret"? It's all open to the public.

Anyway, if that scheme were implemented, that would really be the point where I would stop submitting data (plain and simple), because I'm doing this in my spare time and I have better things to do than make independent measurements and reductions of non-variable stars. I think it's not uncommon for DSLR and CCD observers to have just one or two targets per observation run, especially for faint objects that call for long (or many) exposures. I don't want to basically halve my productivity with the overhead of extra submissions for non-variable "secret" stars.

Anyway, I was thinking that the check-star and comparison-star magnitude entry fields were designed to achieve a similar goal (checking the photometry of non-variable stars in the same field) without the need and hassle of a separate submission for such stars, weren't they?

CS

HB

Affiliation
American Association of Variable Star Observers (AAVSO)
Secret "good" photometry

OK, to be more specific about "secret" photometry: I meant these special stars need to be ones which do NOT have precision photometry already done on them. This would mean predominantly the fainter stars, and those which are not APASS or Landolt stars, for example. Yes, most of the stars will have GSC or Tycho or other catalog photometry done on them, but in most cases that photometry is nowhere near the accuracy required for this approach.

In regards to the extra time required to measure a few constant stars at the start and end of a night's observing: I would think that, if the process were streamlined so these stars were obtained automatically somehow before an observing run starts, the extra time and effort would be inconsequential.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
Secret star check !

Like Heinz-Bernd, I totally disagree with it! That idea from Mike is unrealistic; given the way and the conditions in which I observe, it would kill my productivity. Nothing is streamlined and automatic in my process, and I think that's the case for many of us, in particular the DSLR folks. I can achieve 2 - 6 observations per session; applying this would kill 2 of them, and some days the result would simply be zero observations...

Roger  

Affiliation
American Association of Variable Star Observers (AAVSO)
practical considerations

Hi Mike,

I see a few practical issues with this. It would prevent observers from uploading data on nights when they get clouded out unexpectedly, even for experienced observers who consistently obtain high-quality data (the majority of AAVSO observers, I would argue). Also, it wouldn't be helpful with unfiltered time series of CVs; small zeropoint errors in that context are almost always inconsequential as far as the science objectives are concerned. Moreover, I don't see why this would be limited to CCD observers; why not all observations, including visual?

More generally, I would be cautious about adopting too sweeping a solution for the issues identified in this thread. In my experience with AAVSO CCD observations of CVs, the data quality is generally quite high. While this discussion has identified isolated data quality issues with certain types of observations, I am concerned that it might be creating an exaggerated impression as to the extent of these problems.

I think that the most cost-effective way of addressing these issues is to simply do a better job of educating observers. To reuse some of Tom's examples, if some DSLR observers are submitting observations with zeropoint errors of several tenths of a mag and some CCD observers aren't reporting uncertainties, then we are failing to provide adequate training to them.

Finally, regarding "scores" for observers, my vote is to abolish them altogether. Let the scientific value of the data be its own reward.

Best,
Colin

Affiliation
American Association of Variable Star Observers (AAVSO)
Scores... reward...

I 100% agree with Colin: all those ideas of scores, rewards, checking systems, secret stars... that's just policing, and they are irrelevant. Why favor large scopes, as I see in Aaron's system? Why devalue time series? There are cases where time series are of high value and need a lot of effort - think of b Per! Even for long-term variations, reporting a few points, 3~5 instead of a single one, is a good way to show the true scatter of the observation. You can find such multi-point reports in professional records. Also, I don't know of any rule from the AAVSO on how to compute the err (err - a horrible way to name it...); if I remember correctly, even 1/SNR is allowed, which is totally wrong most of the time! Many aspects are involved... If the AAVSO were to go down that policing road, I am afraid I would soon leave... And since I would probably not be the only one, the AAVSO could end up a small organization with just a few large-scope folks.

The only way forward is improving education, as underlined by Colin. The first thing to do is to properly document the issues; I have not seen a number of the known causes of discrepancies being debated in this discussion. We should first make a comprehensive list of issues, then study them in detail, create documentation, and promote it widely. That was the spirit of Citizen Sky, one of the best initiatives to push science that I have known.

Clear Skies !

Roger

Affiliation
American Association of Variable Star Observers (AAVSO)
"...the problem is in our (comparison) stars..."

   This has been mentioned before, but it is important to emphasize that variable star magnitude uncertainties depend not just on the quality of an observer's own measurements, but also on the accuracy of the comparison stars. The VPHOT tool is a wonderful way to see this in action. I use it to measure long-period variables with an ensemble of comparison stars, and find that while I can easily achieve an SNR > 100 (i.e. statistical uncertainty less than around 10 mmag), the variation in the comparison star estimates (given in the "err" column of the VPHOT Photometry page) tends to be much larger. In fact, comparison star uncertainties typically dominate the total uncertainty reported by VPHOT, and there is not much observers can do about that. It's interesting (disturbing?) to note that if one uses only a single comparison star, then only the statistical error gets reported, so the measurement appears (misleadingly!) to be more accurate than one based on an ensemble.
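
To put rough numbers on this (using the common approximation that the statistical magnitude error is about 1.0857/SNR, and treating the ensemble comparison star scatter as an independent term added in quadrature):

```python
import math

def total_uncertainty(snr, comp_star_scatter):
    """Combine statistical (photon-noise) error with comp-star ensemble scatter."""
    statistical = 1.0857 / snr          # magnitude error from photon statistics
    return math.hypot(statistical, comp_star_scatter)

# SNR = 100 gives ~0.011 mag statistical error; with 0.03 mag of
# comparison star scatter the total is ~0.032 mag -- the comps dominate.
print(round(total_uncertainty(100, 0.03), 3))   # 0.032
```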

Affiliation
None
Comparison stars

Exactly!  It seems that we DSLR observers are almost in a different world from the CCD and PEP folks.  Let me spell it out:  Landolt stars are almost never of any use to DSLR observers, for the simple reason that (when we do it right) both our check star and our comparison stars need to be within the same FOV as the target.  Quite often, when we download the photometry tables for our targets, we get lists that include many stars with 100 mmag or greater uncertainties, even though a call to seqplot reveals numerous candidates with 30 mmag or smaller uncertainties.  Moreover, if one does some rudimentary checking of the numbers in seqplot, one sees discrepancies, as for example in the (V,B) calculated from (Vt,Bt) by any of the more sophisticated fits (e.g., that of Bessell).  One then finds oneself in the position of using known poor data or replacing those comps with ones for which the values are questionable!

A while ago I requested a presentation or paper on the AAVSO-approved method for transforming from (Vt,Bt) to (V,B), and for propagating their uncertainties, but have received no answer.
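
For what it's worth, here is a sketch of the simple linear transformation given in the Hipparcos/Tycho catalogue documentation (ESA SP-1200). I am not claiming this is the AAVSO-approved recipe; more elaborate fits such as Bessell (2000) differ from it at roughly the 0.01 mag level:

```python
def tycho_to_johnson(bt, vt):
    """Approximate Johnson V and B-V from Tycho BT, VT magnitudes.

    Linear relations from the Hipparcos/Tycho catalogue documentation,
    roughly valid for -0.2 < BT-VT < 1.8; shown here for illustration,
    not as an endorsed AAVSO transformation.
    """
    color_t = bt - vt
    v = vt - 0.090 * color_t
    b_minus_v = 0.850 * color_t
    return v, b_minus_v
```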

Thanks for responding to my earlier post!  It is nice to know that someone else shares my concerns.

CS,

Stephen

Affiliation
American Association of Variable Star Observers (AAVSO)
Comparison stars...

Stephen and Nathan are right: for DSLR there are a lot of issues with the comparison stars from VSP. Each time I start a new campaign I have to spend a lot of time dealing with them. I have a lot of respect for the work of the sequence team, but there are problems: very large uncertainties, obvious discrepancies, stars too near others (a very common issue not discussed here - various instruments don't see the same things!), poor color distribution, poor placement in the field, and many stars far too faint to be usable... So I have to go to VizieR to complete, check, and eliminate stars for my usual set of 12~20 comps. I also don't get the bright stars in the FOV needed for my automated plate solving, so I fetch those from VizieR too - overall a lot of work.

Roger

Affiliation
American Association of Variable Star Observers (AAVSO)
Need automated comp star generator

While the comp star volunteer team is quite responsive (my requests usually get done in a few hours!), selecting the stars for a given field is a manual process. It would be of great help, now that APASS is complete, to have automatic generation done by software. This should be readily feasible since APASS is all in a database. The software would take as input the observer's parameters (observing method, FOV, faintness limit, star type, etc.) and generate the most appropriate sequence for the star. The algorithm should take into account the errors on the comp stars, their colors, and their spacing with respect to the variable, to generate an optimal sequence.

It seems this program wouldn't be that hard to write, and it could be made one of the standard apps we have available, like VSP.
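
As a sketch of how the selection might work (all field names and weights below are placeholders, just to illustrate the idea of scoring candidates on catalog error, color match, and proximity):

```python
import math

def rank_comp_candidates(target, candidates, fov_radius_deg, limit_mag, n_comps=10):
    """Rank APASS-like candidate comparison stars for a given target.

    `target` and each candidate are assumed to be dicts with 'ra', 'dec',
    'vmag', 'verr' and 'b_v' keys; the weights are arbitrary placeholders.
    """
    def separation_deg(a, b):
        # small-angle approximation, adequate for ranking within one field
        dra = (a["ra"] - b["ra"]) * math.cos(math.radians(a["dec"]))
        ddec = a["dec"] - b["dec"]
        return math.hypot(dra, ddec)

    scored = []
    for star in candidates:
        sep = separation_deg(target, star)
        if sep > fov_radius_deg or star["vmag"] > limit_mag:
            continue                                    # outside field or too faint
        score = (10.0 * star["verr"]                    # prefer small catalog errors
                 + abs(star["b_v"] - target["b_v"])     # prefer similar color
                 + 0.5 * sep / fov_radius_deg)          # prefer comps near the target
        scored.append((score, star))

    scored.sort(key=lambda pair: pair[0])
    return [star for _, star in scored[:n_comps]]
```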

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
bright comparison stars

Hi Roger/Stephen/Nathan,

Many of the bright comparison stars come from catalogs like the GCPD, which do not contain uncertainties.  Therefore, we took the conservative approach of indicating error = 0.100 magnitude, even though these catalog entries are probably far more accurate than that.  Likewise, Tycho-2 has several different published algorithms for converting to Johnson/Cousins, but most of these agree at the 0.01 mag level or better.  In the next major update to the VSD comparison star database, we can investigate both of these issues and try to improve on the current situation.

That said, most of the bright comparison stars were selected for visual work, not DSLR.  As such, there may be historical reasons for their selection and continued use.  There are techniques that can be used for automated comparison star selection for both DSLR and CCD observers, and they can be considered for future targets.  The last major update to the comparison star database was in 2008 (the "Henden Bump"), so a new update is overdue!

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
VSP

Hello Arne, I know such 0.1 values are in general a default (even if I would say 0.999 would have been more obvious...). But there are many cases with uncertainties near or larger than 0.05, sometimes much more... for example the chart for the recent AT 2017eaw (X18926AKB). And this is not such a bright object, around mag 13.

DSLRs are not limited to bright objects; in the G channel their sensitivity is similar to a CCD with a V filter, and they have less Johnson-Nyquist noise and much less dark current. We also use them with various types of telescopes - in fact nothing very different from CCDs when considering luminosity. But DSLR practice is somewhat different: we normally apply a color correction and also an extinction gradient correction as soon as the FOV and elevation allow it, the FOV usually being much larger than that of common CCD cameras based on ICX or IMX sensors.

In a number of cases it is true that VSP looks more suited to visual observers than to DSLR/CCD. We have a clear need for better references first. Selection tools come second in my view (I have some...). The first problem is having a good source: is it APASS? Are its uncertainties really that low? Do we have all stars down to some magnitude level? What about blending? Is there data to address it?

Clear Skies !

Roger

Affiliation
American Association of Variable Star Observers (AAVSO)
Additional idea for checking LC's

Hi Arne

I just read your article in the latest newsletter on this issue and would like to offer an idea.
Not sure how practical it may be, but if there were "mentor" observers for star LCs whose observations we could use to calibrate ours against, it might help observers with a self-check.

I have compared my observations at times with the LC generator and still wonder which observer to compare against.  I am sure this is not an easy thing to do, but maybe a few stars could be chosen in the northern and southern skies to serve as "standard" LCs that we could then observe and see how we compare.

Just a thought.

John R