Focal length, FWHM, and undersampling

Affiliation
American Association of Variable Star Observers (AAVSO)
Mon, 08/20/2018 - 05:26

I have been told that, for getting a FOV large enough to include both reference stars and the stars being monitored for variability, the short focal length of a fixed-focus camera lens is preferable to a telescope with a far longer focal length.  However, there is the issue of undersampling.  The ideal sampling resolution would be 1/2 to 1/3 of the stars' FWHM.  For a Canon 6D, with a pixel size of 6.54 µm, that gives the following:

FWHM    focal length @ 1/2 FWHM    focal length @ 1/3 FWHM
0.5"    5400 mm                    8100 mm
1"      2700 mm                    4050 mm
2"      1350 mm                    2025 mm
4"       675 mm                    1013 mm
6"       450 mm                     675 mm

But using something like an 85 mm camera lens, the resolution is about 16 arc-seconds per pixel.  To keep from undersampling, the star's FWHM would have to be no less than 32 arc-seconds.  So, my question is how much of an issue is undersampling for photometry?
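The numbers above follow from the standard plate-scale relation, scale ("/px) = 206.265 × pixel pitch (µm) / focal length (mm). A minimal sketch of that arithmetic (function names are my own):

```python
# Plate scale and required focal length for the sampling rule discussed above.
# Uses the small-angle relation: scale ["/px] = 206.265 * pitch [um] / FL [mm].

PIXEL_UM = 6.54  # Canon 6D pixel pitch in micrometers

def pixel_scale(focal_mm, pitch_um=PIXEL_UM):
    """Arcseconds per pixel for a given focal length in mm."""
    return 206.265 * pitch_um / focal_mm

def focal_for_fwhm(fwhm_arcsec, pixels_per_fwhm, pitch_um=PIXEL_UM):
    """Focal length (mm) that puts `pixels_per_fwhm` pixels across the FWHM."""
    return 206.265 * pitch_um * pixels_per_fwhm / fwhm_arcsec

print(round(pixel_scale(85)))        # ~16 "/px for an 85 mm lens
print(round(focal_for_fwhm(2, 2)))   # ~1350 mm for 1/2-FWHM sampling in 2" seeing
```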

Affiliation
American Association of Variable Star Observers (AAVSO)
Sampling

For adequate sampling of DSLR images, take a look at section 5.5.1, "The Size and Shape of Star Images," in the AAVSO DSLR Manual. It gives a good explanation of FWHM and sampling for DSLR images.

 

Barbara

Affiliation
None
Sampling

The primary criterion for sampling of a star's image using a DSLR is not the resolution of the optical system but rather the degree of defocus required to spread the star's image over a sufficient number of pixels of given color.  As Barbara says, look at Section 5.5.1 of the DSLR Observing Manual, where it says:  

"FWHM of stars on RAW images (before calibration and channel splitting) should be no less than about 8-10 pixels. This is to ensure that the star image is well-sampled in all four color channels."

The issue is a bit more subtle than it might at first appear.  If the star image is smeared over a radius of 10 pixels, there are in principle as many as 78 blue and 157 green pixels receiving the star's photons.  But the exact number, and degree of coverage in the outermost pixels, depends on where the star's image is centered on the Bayer array.  The images of the comparison stars are uncorrelated in center location with respect to a Bayer unit cell, and so there will be slight variations in the effective number of pixels illuminated.  Remembering that the objective is to compare the signals of the comparison stars with that of the target, we want to size the blur circle large enough that these variations in illumination become statistically insignificant.
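The variation described above can be illustrated with a rough Monte Carlo sketch (my own, not from the Manual): count how many blue-filtered pixels fall inside a 10-pixel blur circle as the star's center moves randomly over an RGGB Bayer unit cell.

```python
import random

# Count blue pixels inside a blur circle of radius 10 px, for random
# sub-pixel positions of the star center over an RGGB Bayer unit cell.
# Assumes the blue pixel sits at (odd row, odd column) of each 2x2 cell.

RADIUS = 10.0

def blue_pixels(cx, cy, radius=RADIUS):
    """Number of blue-site pixels whose centers lie inside the blur circle."""
    n = 0
    r = int(radius) + 2
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            if i % 2 == 1 and j % 2 == 1:  # blue site in an RGGB mosaic
                if (i - cx) ** 2 + (j - cy) ** 2 <= radius ** 2:
                    n += 1
    return n

random.seed(1)
counts = [blue_pixels(random.random() * 2, random.random() * 2)
          for _ in range(2000)]
# The spread between min and max is the pixel-count variation discussed above;
# the mean is close to the geometric value pi * 10**2 / 4 ~ 78.
print(min(counts), max(counts))
```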

Affiliation
American Association of Variable Star Observers (AAVSO)
Case of stacking

Let's imagine that we are using a stack of several (aligned) images (with imperfect tracking, so there is a small drift). Could the defocusing requirements be somewhat softened?

Sometimes, in crowded fields, using a telescope instead of a lens, it is impossible to achieve the optimal defocus...

Affiliation
None
Sampling

Again, as Barbara suggested, you should refer to the DSLR Observing Manual.  In short, though, drift can be used to a limited extent in addition to some degree of defocus.  But that will not help in crowded fields, unless by coincidence the crowding is primarily in the direction orthogonal to the drift.  And image stacking has nothing to do with it.  Drift is primarily of utility when the camera is unable to automatically track the target, as in the case of a camera and lens mounted on a conventional photographic tripod.

As explained in the Manual, averaging over multiple exposures is preferable to stacking to form a single image, because it allows estimation of errors.  A single stacked image gives you only one sample, from which the variance cannot be estimated.

Affiliation
American Association of Variable Star Observers (AAVSO)
Stacking

Well, it seems you have misunderstood me. If we take several images for stacking, each slightly shifted by several pixels due to tracking imperfections, we average the signal from different pixels, so the result is similar to defocusing, isn't it?

Regarding error estimation: currently I'm working with several small stacks, each consisting of, say, 5 images, so I actually have a group of already-stacked images. Hence I can still estimate the error across those images.

There is also the possibility of estimating the error using an ensemble of comparison stars (though it is not the best way, of course); this also allows one to account for errors connected with the B-V difference (untransformed magnitudes) or improper transformation (transformed magnitudes).
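The sub-stack approach amounts to the usual standard-error-of-the-mean estimate. A minimal sketch, with made-up magnitudes standing in for measurements on five small stacks:

```python
import statistics

# Hypothetical magnitudes measured on several small stacks (5 frames each).
# With several independent sub-stack measurements we can estimate the error
# of the mean, which a single all-in-one stack would not allow.
substack_mags = [12.431, 12.418, 12.447, 12.425, 12.439]

mean_mag = statistics.mean(substack_mags)
# Sample standard deviation divided by sqrt(k) gives the standard error.
stderr = statistics.stdev(substack_mags) / len(substack_mags) ** 0.5

print(f"{mean_mag:.3f} +/- {stderr:.3f}")  # -> 12.432 +/- 0.005
```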

under sampling star images and stacking

Hi all,

undersampling is a problem for photometry with all one shot color cameras, not just DSLRs. My practical experience, as described in the AAVSO DSLR Observing Manual V1.4 section 5.5.1, showed up to 0.2 mag "variation" in constant stars due purely to where the peak of the focussed star's point spread function falls on the sensor.  That could theoretically be even larger depending on how small the point spread function is compared to the pixel size. A more detailed description can be found in the Variable Stars South Newsletter 2015-1.

Stacking several focussed, but slightly displaced, images would help to average out this artificial variation. But why bother when a small defocus of the image would give more accurate results without the added step of stacking?

If stacking is to be done then you should separate out the individual color channel images first before stacking.
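Separating the channels amounts to slicing the RAW Bayer mosaic into its four color planes. A minimal sketch, assuming an RGGB pattern (the slicing offsets would change for other layouts):

```python
import numpy as np

# Split a RAW Bayer mosaic into its four color planes before stacking,
# so pixels of different colors are never mixed by sub-pixel registration.
def split_rggb(mosaic):
    """Return the R, G1, G2, B planes of an RGGB Bayer mosaic."""
    r  = mosaic[0::2, 0::2]  # even rows, even columns
    g1 = mosaic[0::2, 1::2]  # even rows, odd columns
    g2 = mosaic[1::2, 0::2]  # odd rows, even columns
    b  = mosaic[1::2, 1::2]  # odd rows, odd columns
    return r, g1, g2, b

# Tiny demonstration on a 4x4 toy mosaic.
raw = np.arange(16).reshape(4, 4)
r, g1, g2, b = split_rggb(raw)
print(b)  # -> [[ 5  7]
          #    [13 15]]
```

Each plane can then be aligned and stacked independently before photometry.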

Cheers,

Mark

Affiliation
American Association of Variable Star Observers (AAVSO)
Separation before stacking?

Hi Mark,

Just to clarify: whether to separate the color channels before stacking depends on the software used. For example, the good old IRIS works with the different color planes independently even during alignment (it, in fact, separates the planes "on the fly" and reassembles them).

Moreover, when we look into the FITS format, we find that the color planes simply go in sequence, so, as long as we do not use global image properties (i.e. weighting images before averaging), we may work with unseparated images.

Concerning stacking slightly displaced images, I see it only as an additional step to mitigate undersampling; of course, defocusing is better, but sometimes (in crowded fields) we cannot achieve the desired degree of defocus.

Regards,

Max