Hi guys. I have been "out of the loop" for a while, and am hoping to get back into some variable star observing. Decades ago I did a lot of visual VSO estimates. Several years back I made some estimates using an ST8 with a filter. I believe it was called a "V" filter. When I finally found it, it was all clouded up.
I have got a new Mallincam DS16c, and would like to buy a filter so that I can make scientifically useful observations. My questions are:
1) What filter(s) should I buy these days?
2) Where can I buy them?
Thanks for any help! And clear skies to all!
Pat Madden
I would like to raise the question: can you do photometry with a video camera? As far as I know it is difficult to calibrate a video system, and they often have only 8-bit depth. But the technology is getting better and better, and the specs on the Mallincam that you mention are impressive. These cameras are in our future. So the question: how do you qualify your observations with a video camera for submission to the AID? The AAVSO should look to develop a process, just as they did for DSLR cameras, for submitting data collected by video systems. Anyone want a project?
The Cape Cod Astronomical Society, where I'm a member, does most of its observing with a Mallincam. I'm still sorting out how to take those observations and make them comparable to transformed CCD observations.
Pat, with regards to your V filter question: the V filters of years ago all grew the mange and fogged up; I had one too. The new V filters will be better, but have $200 ready. Not sure of the best supplier. Astrodon comes to mind. Others will comment.
George
Photometry can't be done on anything that compresses files. That's why it's done either as a .FITS or as a DSLR raw file (Canon would be .CR2 and Nikon would be .NEF). However, there is scientific work being done using a Mallincam.
http://cams.seti.org/
https://www.mallincam.net/micro-series.html
I'm not sure how compression is relevant to the post commented on, as video cameras and recording software (e.g., SharpCap) do not necessarily compress their data.
But to the statement as given:
Photometry can't be done on anything that compresses files
That's just too facile. I can't agree, for at least two reasons.
First, there is no problem whatever in doing photometry on a wide range of files compressed with lossless algorithms, including some PNG images and all zipped FITS files. So the statement as given already can't be so.
Second, there's no reason I can imagine why it should be impossible to do photometry with certain (lossy) compressed files. It would depend on conditions. If the signals are oversampled enough (lots of pixels to average over), and the compression artifacts are small enough (or even larger and known to be random both within and between pixels), and if flux-signal linearity is reasonably preserved by the recording mechanism, then careful photometry from those lossy-compressed images should be at least as good as photometry currently reported from antiblooming CCD images. I'll readily agree that the history of photometry from compressed images is not a happy one, but it's a trap to claim that that history necessarily requires that it's impossible.
With lossy compression, it's very often the linearity loss during compression (e.g., as with JPEG, or by definition whenever gamma is applied) that causes the largest systematic errors--but that is a general problem of linearity loss, not a problem with compression per se. The signal loss on proper lossy compression to 1/2 or even 1/3 of the original file's size is astonishingly little, especially when the signals are very highly correlated, which is exactly the case for practically all astronomical images.
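To make that concrete, here is a toy numpy sketch (nothing more than an illustration, with made-up numbers): a well-sampled synthetic star measured after (a) small zero-mean random per-pixel errors standing in for benign compression artifacts, and (b) a gamma curve standing in for linearity loss. The random errors largely average out of the aperture sum; the gamma curve produces a systematic flux error.

```python
import numpy as np

# Toy illustration only: neither model simulates any particular codec.
rng = np.random.default_rng(42)
size, fwhm, peak, sky = 64, 6.0, 20000.0, 500.0
y, x = np.mgrid[0:size, 0:size]
r = np.hypot(x - size / 2, y - size / 2)
sigma = fwhm / 2.355
image = peak * np.exp(-r**2 / (2 * sigma**2)) + sky   # star on a flat background

def aperture_flux(img, radius=3 * sigma):
    """Aperture sum minus a background estimated far from the star."""
    background = np.median(img[r > 25])
    return np.sum(img[r < radius] - background)

true_flux = aperture_flux(image)

# (a) random, zero-mean 1% per-pixel errors: they largely average out
noisy = image * (1 + rng.normal(0.0, 0.01, image.shape))
print("random 1% artifacts:", aperture_flux(noisy) / true_flux)

# (b) gamma encoding: a systematic flux error that does not average out
full_scale = 65535.0
gamma_img = full_scale * (image / full_scale) ** (1 / 2.2)
print("gamma encoded:      ", aperture_flux(gamma_img) / true_flux)
```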
Photometry has overcome a lot of obstacles in the past, and there are a lot of clever people out there to overcome new ones.
I am toying with the idea of de-Bayering my DSLR so that I can use it for all kinds of variables and get a sensitivity boost to boot, but to do so I need to find a Johnson V filter that fits a 48 mm threaded filter mount (2 inch OD). So far I have been unable to find anything, and sorry, a 52 mm will not do because it has to fit into a 2 inch focuser tube.
For even my puny APS-C sensor, using a 1.25 inch filter would result in an unacceptable degree of vignetting. And of course the problem would become even worse should I attach my full-frame camera.
Given that there are none available, does anyone have advice or experience in, er, attaching a 50 mm filter to the outside of a 48 mm filter mounted in a 2 inch OD cell?
Stephen,
It's not necessary to put a V filter on a DSLR camera to do photometry. If you debayer the raw images then the G channel (there should be two: combine them by averaging) should be reported as filter TG. That is, a DSLR channel reported against V comparison stars. If you develop transforms for your camera, then your transformed TG data should be reported as V data.
Now the match of the G channel to the V bandpass is not perfect, so if the star has an odd spectrum (like a CV or nova) this V data will be a little suspect. But the process I described above is my understanding of standard AAVSO practice.
Now if you do find a V filter to put in front, that might be interesting. You shouldn't have to debayer at all: all the photons that got through the filter are in the V passband; your Bayered sensor just has a non-uniform response. I'd say you should sum-combine each 2x2 pixel group.
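If it helps, here is a minimal numpy sketch of both operations (averaging the two green planes for TG, and sum-binning 2x2 behind a V filter), assuming the raw frame is already loaded as a 2-D array with an RGGB layout. The layout and array handling here are assumptions; check your own camera's Bayer pattern.

```python
import numpy as np

def split_rggb(raw):
    """Split a 2-D Bayer mosaic (assumed RGGB layout) into its four planes."""
    r  = raw[0::2, 0::2].astype(float)
    g1 = raw[0::2, 1::2].astype(float)
    g2 = raw[1::2, 0::2].astype(float)
    b  = raw[1::2, 1::2].astype(float)
    return r, g1, g2, b

def green_for_tg(raw):
    """Average the two green planes, as used for TG (tri-color green) reports."""
    _, g1, g2, _ = split_rggb(raw)
    return 0.5 * (g1 + g2)

def bin2x2_sum(raw):
    """Sum each 2x2 Bayer cell, e.g. when a V filter sits in front of the sensor."""
    r, g1, g2, b = split_rggb(raw)
    return r + g1 + g2 + b

# Example with a fake 8x8 frame standing in for a dark- and flat-corrected raw:
raw = np.arange(64, dtype=np.uint16).reshape(8, 8)
print(green_for_tg(raw).shape, bin2x2_sum(raw).shape)  # (4, 4) (4, 4)
```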
Cheers,
George
I am not sure I understand what Stephen means here; my impression is that it's not the classical software interpolation and separation of the RGB data from a Bayer structure, but the physical removal of the R, G, B filters, no?
Some people have experimented with this, but in fact it means removing the micro-lens array that includes the filters, and the result after removing the micro-lenses is certainly not a sensitivity boost! Far fewer photons then reach the sensitive photodiode, which is much smaller than the pixel. And I think it is very difficult to do in a clean manner; not ideal for photometry.
A normal DSLR provides excellent V photometry without any filter, just using the original band-pass of the camera filters, as George said. Using a V filter in addition to the camera band-pass will simply give a transmission that is the mathematical product of both, not a V filter response.
There are several ways to proceed with an unmodified DSLR to obtain standard V photometry: either use a classical transformation with the proper coefficient (in general around 0.14), or use a color correction technique at the RGB flux level (like my VSF technique, JAAVSO 40/2), which usually gives better results on stars with critical spectra.
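As a concrete example of the classical transformation route, here is a small sketch of the usual single-coefficient differential correction. The 0.14 coefficient and the magnitudes are placeholders only; you must derive your own coefficient from a standard field before reporting transformed data.

```python
def transformed_v(tg_target, tg_comp, v_comp, bv_target, bv_comp, t_v=0.14):
    """
    Single-coefficient differential transformation of a DSLR green (TG)
    magnitude to Johnson V:

        V_target = V_comp + (tg_target - tg_comp) + T_v * ((B-V)_target - (B-V)_comp)

    t_v = 0.14 is only the "typical" value mentioned above.
    """
    return v_comp + (tg_target - tg_comp) + t_v * (bv_target - bv_comp)

# Hypothetical numbers purely for illustration:
print(transformed_v(tg_target=11.523, tg_comp=11.200, v_comp=11.250,
                    bv_target=0.85, bv_comp=0.62))
```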
Another way, which I have not seen used so far, is to apply a yellow filter that eliminates the blue excess of the DSLR G channel; a Hoya Y50 is fine. Then adding a few percent of the red channel (RAW flux) output makes a near-perfect V "photonic" response (the original Johnson V definition doesn't match the response of present photodiode-based sensors; we have to use the "photonic" V filter definition instead). The Y50 filter is available from optical component web stores for about $50, if I remember well. But that solution kills the possibility of using the blue channel simultaneously, a significant drawback, since the simultaneous capture of the R, G, B channels that the DSLR provides is a very interesting property.
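To make the flux arithmetic concrete, here is a tiny sketch of that combination; the 5% red fraction is purely a placeholder, the real value has to be derived from the measured band-passes of your camera.

```python
import numpy as np

def synthetic_v_flux(g_flux, r_flux, red_fraction=0.05):
    """
    Combine the (Y50-filtered) green RAW flux with a small fraction of the
    red RAW flux to approximate a 'photonic' V response.  red_fraction=0.05
    is a placeholder; derive the real value from measured band-passes.
    """
    return g_flux + red_fraction * r_flux

# Purely illustrative numbers (e.g. sky-subtracted aperture sums in ADU):
g, r = 125000.0, 48000.0
v_flux = synthetic_v_flux(g, r)
print(v_flux, -2.5 * np.log10(v_flux))  # flux and an instrumental magnitude
```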
Clear Skies !
Roger
I'm aware that it is not necessary to modify a DSLR to do photometry, provided you limit yourself to very bright stars that are accurately representable as black bodies. Bright, because it is necessary to defocus considerably to sample with multiple green-filtered pixels, and black bodies, because the green filters unfortunately differ from Johnson V, inter alia, by passing the H-beta emission line.
To echo what Roger says above, if you simply put a Johnson V over the focal plane, what you will wind up with in each color (R-G-B) is a transmission curve that is the product of the original and the V, which is of no help whatsoever.
What I am exploring is removing all three filters - that is, the entire color filter layer - from my sensor and replacing it with a transmissive passivation layer, then using a Johnson V filter to give me a one-color CMOS imaging system that is 1:1 comparable in terms of spectral response to any other V-filtered sensor, enabling me to measure anything, albeit at lower sensitivity. I am intrigued by the possibility of using the Hoya Y50 despite its effect on the red channel, because as far as I am concerned the DSLR red channel is useless anyway. But from the curves Roger attached it appears that the addition of the Hoya Y50 also renders the B channel useless.
It seems to me that the Hoya Y50 is worth a try if I can find one. $50 is certainly a more attractive investment than $1000 (the cost of a new APS-C camera, de-Bayering, and a 50 mm Johnson V filter). The drawback of course is that it does nothing to deal with the inherent inefficiency of the Bayer matrixed focal plane.
Hi Stephen,
I am very sorry but I disagree with several of your points:
Only very bright stars: No. The sensitivity of the DSLR G channel is not significantly different from that of an astro CCD camera (clear) equipped with a Johnson V filter. The noise of recent DSLR CMOS sensors is even well below that of classical CCDs, both the Johnson-Nyquist noise and the shot noise from dark current (the latter much lower!). The possibility of doing photometry of faint stars is only a question of telescope size, exposure length, and light pollution. I do decent photometry of mag 15 stars with an 8", a regular Canon EOS M3, in a light-polluted city, and I know several others doing so.
Need for large defocusing: No, there is no need for large defocusing because of the Bayer matrix sampling. That is an old story; analysis and experience show it is just wrong. Defocusing is used to avoid saturation of bright stars; for faint stars we work in full focus, that's all. Most DSLRs have an anti-moiré spatial filter that eliminates the possible sampling issue linked to the Bayer structure, and in any case the PSF of the instrument and the seeing are enough, as a spatial filter, to eliminate the under-sampling effect. My standard deviation on faint stars, well in focus, doesn't show a significant difference from that of others using astro CCDs.
Black body: I do not see the problem. The transformation works for most spectral types except some M stars, and this is not due to H-beta. In any case an astro CCD camera (clear) with a V filter usually needs a transformation too.
Y50 filter: No, it doesn't affect the red channel response, it cuts the blue (it's yellow...). You can see it's the same in both graphs. The DSLR red channel has a low response in the deep red just to conform to human vision. This is achieved by the dyed IR filter, which can be removed from a number of DSLR models.
Clear Skies !
Roger
Hi Roger,
So if I understand you correctly, it is not required to defocus stars in order to do photometry. This seems to be the opposite of what I learned in the recent CHOICE DSLR Photometry class. I have a modified Canon T3i that I plan to do photometry with. I have installed an Astronomik L-2 UV-IR blocking clip-in filter. Is this filter required to do photometry? Would it hurt to have it installed? Still trying to figure this all out.
Clear skies,
Dennis
Hi Dennis,
Yes, you should use the UV-IR cut filter to do RGB photometry with a modified DSLR. The RGB synthetic pigment filters deposited on the pixels are transparent to IR; without the IR cut you would just be doing IR photometry! Perhaps a single cut filter could be questionable in the case of a strong IR source; normal DSLRs have two IR filters: a tinted glass one plus an interference cut filter.
Whether a modified DSLR needs defocusing is a question mark. Most "normal" DSLRs have a low-pass spatial filter that reduces the moiré due to the undersampling effect of the Bayer structure, but that spatial filter is part of the same filter stack as the UV-IR cut, so modified DSLRs no longer have a spatial filter. But OK, the T3i / 600D has 18.7 Mpix and 4.2 micron pixels. The PSF of most instruments, including seeing, should be larger than such a pixel: the FWHM should be about 3.5 pixels for 3 arcsec seeing at 1000 mm focal length. That means we have some 50 pixels in the overall spot of a star, so the residue of undersampling should be very small, particularly in green.
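If you want to redo that arithmetic for your own setup, the back-of-the-envelope calculation is simple (the 4.2 micron pixel and 3 arcsec seeing are the same figures as above; substitute your own values):

```python
import math

# Back-of-the-envelope sampling check for a DSLR on a telescope.
pixel_um  = 4.2     # pixel pitch in microns (value quoted above)
focal_mm  = 1000.0  # telescope focal length in mm
seeing_as = 3.0     # seeing FWHM in arcseconds

plate_scale = 206.265 * pixel_um / focal_mm   # arcsec per pixel
fwhm_px     = seeing_as / plate_scale         # stellar FWHM in pixels

# Rough pixel count in the star's spot, taking a radius of ~1.2 x FWHM
spot_px = math.pi * (1.2 * fwhm_px) ** 2

print(f"plate scale : {plate_scale:.2f} arcsec/px")  # ~0.87
print(f"FWHM        : {fwhm_px:.1f} px")             # ~3.5
print(f"spot pixels : {spot_px:.0f}")                # ~50
```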
We should also consider the drawbacks of defocusing faint stars. First, it reduces the SNR, which is not very high anyway; second, it increases the risk of star blending, which is often a problem. And heavy defocusing doesn't usually produce a very nice PSF: often a crater with strong narrow peaks due to the diffraction pattern and a halo of diffuse light at the foot. Not really good for the SNR, with possible undersampling and/or saturation of the peaks. Pros and cons have to be balanced... as usual!
Sometimes I prefer to spread the light using a short star trail combined with a weak defocus. It gives a much more uniform fill factor across the pixels. An elongated photometric aperture must then be used.
I was involved in writing the DSLR manual, and I remember we told people to defocus, but at that time it was not based on much experience or testing, just some theory. That point should be revised; considering how the optics really behave, it is not an easy subject.
Clear Skies !
Roger
Hi Roger,
Thank you for clearing up these points. I plan to use a 10" f/3.9 Newtonian at 1000 mm as my main telescope. I also have an unmodified Nikon D5300.
Clear skies,
Dennis
Comment withdrawn. The discussion is evolving ever further from the original topic, which concerns where to purchase filters. I am still looking for a Johnson V in a 49 mm cell.
Chroma Technology has 50mm diameter filters, and can probably shave them down to 49mm and mount them for you. http://chroma.com
Thanks a million! It had never occurred to me to ask!
Hi All,
DSLR photometry through longer focal length telescopes is probably ok with no defocus as Roger points out above. However, in my experience defocus is necessary when using a 200mm camera lens or a 480mm focal length refractor. Just at what focal length defocus becomes unnecessary is an open question which will depend on seeing and pixel size amongst other things.
Combining the two green channels into a single green image reduces the effect of having too sharp a focus. The red and blue channels show more artifacts due to under sampling. See my description in the AAVSO DSLR photometry guide (https://www.aavso.org/dslr-observing-manual) page 57. Cheers,
Mark
Thanks for that, Mark! I had already started building a Python simulation to demonstrate just what you showed with your little experiment! As far as I can tell, there is no focal length for which defocusing is not needed when using a Bayer array. But the SNR implications are grim, as Roger points out.
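For what it's worth, here is a stripped-down version of the kind of simulation I mean (my own toy code, with an assumed RGGB layout): it places a Gaussian star at random sub-pixel positions and measures how the fraction of its flux landing on the green pixels scatters. The scatter shrinks as the star image gets broader, i.e. with more defocus or worse seeing.

```python
import numpy as np

def star_image(size, fwhm, x0, y0):
    """Gaussian star of unit total flux centred at (x0, y0)."""
    y, x = np.mgrid[0:size, 0:size]
    sigma = fwhm / 2.355
    img = np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
    return img / img.sum()

def green_flux_rggb(img):
    """Flux falling on the two green planes of an assumed RGGB mosaic."""
    return img[0::2, 1::2].sum() + img[1::2, 0::2].sum()

size = 64
rng = np.random.default_rng(1)
for fwhm in (1.5, 3.0, 6.0, 12.0):
    fluxes = []
    for _ in range(200):
        # random sub-pixel (and sub-Bayer-cell) placement of the star centre
        x0 = size / 2 + rng.uniform(-1, 1)
        y0 = size / 2 + rng.uniform(-1, 1)
        fluxes.append(green_flux_rggb(star_image(size, fwhm, x0, y0)))
    fluxes = np.array(fluxes)
    print(f"FWHM {fwhm:5.1f} px: green-fraction scatter = "
          f"{100 * fluxes.std() / fluxes.mean():.2f} %")
```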