B, V, R filters for DS imaging

Affiliation
American Association of Variable Star Observers (AAVSO)
Mon, 04/24/2023 - 11:43

Does anyone have general guidelines on using B, V, and R filters for deep-sky imaging? Perhaps exposure ratios?

Peter

BPEC

 

Affiliation
American Association of Variable Star Observers (AAVSO)

Hi Peter,

That's a coincidence! I was about to post the same question. I am setting up a new scope in Spain and want to use my existing 7-position filter wheel. I want to add photometric R, V, and B filters, and I was wondering if they could replace my current RGB filters so I can still image DSOs and keep my three narrowband (NB) filters in place.

Cheers

Ian.

 

Affiliation
American Association of Variable Star Observers (AAVSO)
BVR for RGB

It works, but the color balance adjustments may be a bit different with BVR.   

If you have room for an Ic filter, I'd recommend you include that in your wheel. Then, when you get around to transforms, you will be able to report transformed V-I magnitudes. How about BVRI and the three nebular filters for your 7-slot filter wheel? The only hitch might be that your nebular filters and photometric filters have slightly different focus points. Can you set focus offsets with your system?

For Peter: I think you will just have to experiment with exposure times, but if memory serves, the exposures for me were roughly comparable. You could start with the same exposures you use with the RGB filters, then adjust the color balance as Richard describes below.

Phil

Affiliation
American Association of Variable Star Observers (AAVSO)
BVR for RGB

Thanks Phil,

I really need to keep my L filter, but I could get rid of SII at a pinch. No problems with focus as I can use filter offsets.

Cheers

Ian.

Affiliation
American Association of Variable Star Observers (AAVSO)
Quantum Efficiency

Peter:

Check the reported quantum efficiency (QE) of your sensor. For most cameras it is typically smaller (25-50% less?) at the blue and infrared ends of the spectrum. See example - https://www.photometrics.com/learn/imaging-topics/quantum-efficiency

Then check the reported relative quantum efficiency (transmission) of your filters. It is typically smaller for B vs. V (10-25% less?). Best to check the reported QE of your specific filters IF the relative QE is given. Often not. ;-(

So after combining the QE of both sensor and filter, you typically need a longer exposure in B than in V. Perhaps 1.5 to 2x longer?
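For illustration, here is a back-of-the-envelope sketch of that combination. All of the QE and transmission numbers below are made-up placeholders, not measurements of any particular sensor or filter set:

```python
# Rough estimate of the B:V exposure ratio from combined sensor QE and
# filter transmission. The numbers are illustrative placeholders only --
# substitute values read off your own sensor and filter curves.

sensor_qe = {"B": 0.60, "V": 0.85}   # assumed sensor QE near each band center
filter_tx = {"B": 0.80, "V": 0.95}   # assumed filter transmission

def throughput(band):
    return sensor_qe[band] * filter_tx[band]

# For the same source flux, exposure scales inversely with total throughput.
ratio = throughput("V") / throughput("B")
print(f"B exposure ~ {ratio:.1f}x the V exposure")  # ~1.7x with these numbers
```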

Exposure obviously also depends on the broadband spectrum of the target, which may be brighter in the red (e.g., a Mira) or the blue (e.g., Vega).

To answer your question: for middle-color stars I use a BVRI exposure ratio of 2:1:1:1, but that is only an estimate that normally gets me into a reasonable SNR ballpark. For a Mira LPV, the I exposure ratio is normally < 0.1.

It is best (and easy) to just measure the necessary exposures for a few stars of known magnitude in BVRI with your own system!
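A minimal sketch of that measurement-based approach, assuming counts scale linearly with exposure time (all counts, times, and magnitudes below are hypothetical):

```python
# Scale a short test exposure of a known star up to the signal level you
# want, then rescale for targets of different magnitude. Values are
# hypothetical -- measure your own with a quick test frame per filter.

def required_exposure(t_test, counts_test, counts_target):
    """Exposure needed for counts_target, assuming counts grow linearly with time."""
    return t_test * counts_target / counts_test

def rescale_for_magnitude(t, m_measured, m_target):
    """Rescale an exposure from a star of magnitude m_measured to m_target."""
    return t * 10 ** (0.4 * (m_target - m_measured))

t_B = required_exposure(t_test=10.0, counts_test=12_000, counts_target=50_000)
print(f"B: {t_B:.0f} s for the 9th-mag test star")
print(f"B: {rescale_for_magnitude(t_B, 9.0, 11.0):.0f} s for an 11th-mag target")
```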

Ken

 

Affiliation
American Association of Variable Star Observers (AAVSO)
IR
Another thing that often gets me is trying to keep the whole image in the sensor's linear range when there is a really red or really blue star in the field. The target might be at 10,000 ADU, but a red star in the image could be saturated. I am told that it is more of a problem for CCDs than CMOS, because CMOS tends to be linear over more of its range. I sometimes stack 20 x 5 seconds just to get reasonable SNR on the target without letting any pixel saturate (see the sketch below).
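As a rough illustration of why that works (a sketch assuming uncorrelated noise between subs; the single-sub SNR value is invented):

```python
# Stacking N short subs: the signal adds as N while the noise adds in
# quadrature, so SNR grows as sqrt(N) -- and no single sub saturates.

import math

def stacked_snr(snr_single, n_subs):
    return snr_single * math.sqrt(n_subs)

# e.g., if one 5 s sub gives SNR ~8 on the target:
print(f"20 x 5 s: SNR ~ {stacked_snr(8.0, 20):.0f}")  # ~36 vs ~8 for one sub
```

Ray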
Affiliation
American Association of Variable Star Observers (AAVSO)
Whiter color balance with solar-type stars...

Hi Peter-


Determine the exposure ratios that produce equal R-, G-, and B-filter magnitudes for a type G2V star. When combined, images of galaxies and other broadband objects will then have a decent approximation of normal daytime color balance. The colors of emission nebulae will depend on which of their emission lines fall within the RGB filter passbands. Remember that many "pretty picture" imagers exaggerate the colors in their images, so don't rely on their products as color standards.
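As a minimal sketch of that balancing step (the star's per-filter counts below are invented placeholders; with a BVR set, V plays the role of the green channel):

```python
# Weight each channel so a measured G2V (solar-type) star comes out neutral.
# Counts are hypothetical sky-subtracted aperture sums for the same star.

g2v_counts = {"R": 48_000, "V": 52_000, "B": 31_000}

# Normalize to V so the G2V star maps to equal R = G = B in the combine.
weights = {band: g2v_counts["V"] / c for band, c in g2v_counts.items()}
print(weights)  # roughly {'R': 1.08, 'V': 1.0, 'B': 1.68} with these numbers

# Equivalently, fold the weights into the exposures instead: expose each
# channel ~weight times longer than V.
```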

--Richard

Affiliation
American Association of Variable Star Observers (AAVSO)

Good thread! I noted a few years ago that some ESO "pretty pictures" were taken with B, V, R combinations. When my club, the ATMs of Boston, obtained a 17" PlaneWave, we decided to use BVR rather than RGB, and it's been successful both scientifically and for DS imaging. Yet none of us users have done the exposure calibration to optimize the balance in the DS images.

 

I started this thread in hopes of A) getting a starting point for this balancing and B) sensitizing others to going with BVR instead of RGB.

 

Peter

 

Affiliation
American Association of Variable Star Observers (AAVSO)
A can of worms

Oh well, I guess this is a big can of worms to open but anyway, just a few observations as an extended rant:

If I understood this correctly, the "DS imaging" in the thread title stands for astrophotography? As in "after I did the photometry of YSOs, can I use the B, V, R frames to make a pretty picture of the surrounding star-forming region, and if so, how do I do this?". If this understanding is wrong, just skip the following :-)

Point 1): You don't even need three filters. The pages linked under Point 2) show how to combine just two Sloan g and r filtered images into "familiar"-looking pretty pictures.

Point 2): It has been done: see, e.g., http://trappedphotons.com/blog/?p=376 or https://www.youtube.com/live/hjauMEU9CcQ?feature=share. There you can also find concrete numerical suggestions on how to "transform" the colors.

Point 3): Filter transmission and detector QE don't tell the whole story and are actually not very informative for setting the exposure times and color transformations. I guess the reason is that you need to set these in relation to the human eye and human visual perception, with its capabilities and limitations (and here I mean the human eye looking at an astrophotography end product on a print or a computer screen, not even looking through a telescope... see the next point). Think about it: there is a reason why consumer one-shot-color camera sensors have twice as many green pixels as red and blue pixels (so effectively twice the exposure time and more spatial resolution) even though the QE of the sensor actually peaks (!) in the green. Obviously green is extra important for the human eye.

Point 4): Related to this: exposure time and color transformation coefficients are two separate things. Longer exposure times give higher SNR. The weighting of the colors when transforming photometric-filter ADUs into RGB values for human consumption, though, is a different matter, right? If you make a deep-sky image of, say, the Pleiades, I would think you need a good SNR primarily in the blue channel, but when you image an H-alpha region, you primarily want a high SNR in the red channel. It depends on the object and which color contains the more relevant information, right?
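To make the separation concrete, here is a sketch in which the color weighting lives in a small mixing matrix that is entirely independent of how long each channel was exposed. The coefficients are invented for illustration, not calibrated values:

```python
import numpy as np

# b, v, r: calibrated, aligned frames as float arrays of the same shape.
def bvr_to_rgb(b, v, r, m=None):
    if m is None:
        # rows = output R, G, B; columns = input B, V, R (made-up weights)
        m = np.array([[0.0, 0.1, 0.9],
                      [0.1, 0.8, 0.1],
                      [0.9, 0.1, 0.0]])
    stack = np.stack([b, v, r], axis=-1)      # (..., 3) in B, V, R order
    return np.clip(stack @ m.T, 0.0, None)    # (..., 3) in R, G, B order

# Exposure time sets the per-channel SNR; m only sets the rendered color.
rgb = bvr_to_rgb(*(np.random.rand(4, 4) for _ in range(3)))
```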

Point 5): Astrophotography, like all photography, is also an art form. It is not about linearity or true colors, but more about trying to emphasize certain interesting features and de-emphasize less relevant ones for a given object; that's why almost any piece of astrophotography involves histogram stretching and similar operations to elevate certain details so that human visual perception can better register them. To hell with linearity! :-)

Point 6): Why would you even want to create a "natural color" image, given that the human eye is such a limited sensor, at least in some respects? E.g., if an astronomical object reveals fine details at wavelengths that our photometric filters + camera sensor can register but the human eye cannot (e.g., IR), why throw away that information in an attempt to replicate what the eye would see (or not see)? If you can bring out the morphology of an object by highlighting faint details to better stand out against the background, isn't that the better "photograph", because it reveals more about the true nature of the object, even when being less true to what the naked eye would see?

As I said, this is a can of worms and I expect some controversy for most of these points, but I hope at least the references given in Point 2) are useful.

 

CS

HB

Affiliation
American Association of Variable Star Observers (AAVSO)

Now that we have the very bright SN 2023ixf in the iconic M101, I think we'll have a good use for this technique. By the time the SN has faded, many observers will have collected a lot of observation time on M101, so why not make a pretty picture as a by-catch?

CS

HBE

True colours

"Point 5): Astrophotography like all photography is also an art form. It is not about linearity or true colors ...."

Yes, astrophotography is an art form, but sometimes astrophotographers do aim to render their images to represent true colours. After all, non-astrophotography folks looking at the images will now and then ask, "How do you know the colours are right, when you can never actually see them?"

This is why image processing software packages offer algorithms for tweaking images to show true colours, or as close to them as possible.

Roy

Affiliation
American Association of Variable Star Observers (AAVSO)
tongue in cheek

Yeah, I see the reasoning, but it's too anthropocentric for my taste. For most images, when those photons began their journey, humans and their mediocre vision systems didn't even exist yet (tongue in cheek).