Hello everybody!
I have been doing visual and electronically assisted observing of DSOs, but I have become more interested in the scientific side of amateur astronomy and have decided that VSO is a great sub-speciality. I have tried some visual measuring with binoculars, but I would like to produce more precise measurements. I am thinking of starting with my current setup so I can learn the ropes and get a feel for the technical part of image processing.
So here are some questions:
1. I have a colour CMOS camera (ASI533MC). Can it be treated as a DSLR as per the DSLR manual? I mean, extract the green channel and so on.
2. Sensor linearity testing is, in essence, done with a kind of "flat frame", if I understand correctly. Since my rig is permanently mounted outside, I can't point it at an evenly illuminated pastel-coloured wall. So can I use the dimmed screen of my Kindle and start taking exposures of increasing length until the histogram gets pushed all the way to the right? (That's what I use for flat frames normally.)
Thank you in advance!
Hristo
Hi Hristo!
Your camera should be suitable for photometry. You're right that the rules and procedures for DSLR photometry should be followed. Bayer filter arrays are common to most, if not all, one-shot color imagers. I've been most satisfied with images based on just the green measurements of targets that aren't very red (color indices less than 1.2, with 0.700 being ideal). I suggest avoiding LPV and semi-regular stars. You should also select comparison and check stars that are close in color and brightness to your target.
I think your method for shooting tests and flats will work. I've never tested my equipment for linearity, but I do pay careful attention to avoid oversaturation, as directed in the manual. I think most cameras are adequate, but you should discard any measurements that don't appear to be linear. If your check star results don't seem accurate, that's an excellent indication that there is a problem.
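The linearity check described above boils down to fitting median ADU against exposure time and looking at the residuals. A minimal sketch in Python (the exposure times and ADU values here are made-up illustration data, not real measurements):

```python
import numpy as np

# Hypothetical test series: exposure times (s) and the median ADU
# of each flat-style test frame. Values are invented for illustration.
exposure_s = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
median_adu = np.array([800.0, 1610.0, 3190.0, 6420.0, 12800.0])

# Fit median ADU as a linear function of exposure time.
slope, intercept = np.polyfit(exposure_s, median_adu, 1)

# Fractional residuals from the fit; large values flag non-linearity.
fit = slope * exposure_s + intercept
residuals = (median_adu - fit) / fit
nonlinear = np.abs(residuals) > 0.01  # 1% tolerance, adjust to taste
```

Points near sensor saturation will typically fall below the fitted line; those are the exposures to discard.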
John
Ok, let me start from the beginning and see if I understand correctly.
1. The ASI533 camera at unity gain (100 according to the manual) converts 1 electron into 1 ADU. Its 14-bit ADC allows for 2^14, or 16,384, values.
2. So my camera has an array of photosites, each with a separate color filter above it (a Bayer array). But the .fits file that comes out of the camera is just a matrix of single pixel values (each between 0 and 16,383). In other words, it is a grayscale image.
3. According to the DSLR manual:
Debayering (or demosaicing) refers to the process of producing a color image (each pixel having ADU values for R, G and B) from information encoded in a greyscale RAW image.
This means that in order to separate color channels, you first have to assign three values (for R, G and B) to each "square" of the matrix. For this you use the BAYERPAT= 'RGGB' line from the header and interpolate the (for example) R and G values of a B element from its neighbours. Also, it has to be linear interpolation.
4. Only after this you can extract the green channel.
So for example, in Fitswork:
- load the raw images from the camera
- Processing > Expand to RGB
- Processing > Split into 3 W/B images
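For comparison, the channel separation those Fitswork steps perform can be sketched directly with NumPy slicing. Note this sketch skips interpolation and instead averages the two green pixels of each 2x2 RGGB cell into one "superpixel" value, an approach often used for photometry; the tiny synthetic array stands in for a real raw frame, which you would load with astropy.io.fits:

```python
import numpy as np

# Synthetic stand-in for a raw RGGB mosaic (real frames would come
# from astropy.io.fits.getdata on the camera's .fits file).
raw = np.arange(16, dtype=np.uint16).reshape(4, 4)

# In an RGGB pattern, green pixels sit at (even row, odd col)
# and (odd row, even col).
g1 = raw[0::2, 1::2]  # green pixels sharing rows with red
g2 = raw[1::2, 0::2]  # green pixels sharing rows with blue

# Average the two greens per 2x2 cell into one half-resolution
# grayscale "green" image.
green = (g1.astype(np.float64) + g2) / 2.0
```

The superpixel approach halves resolution but avoids inventing interpolated values, which is why some photometry workflows prefer it.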
1. Yes. You should only use star values with a maximum below 13107 ADU to avoid errors resulting from oversaturation of your sensor. This is approximately 80% of 16384, the 14-bit maximum. I use a monochrome 533. It is supposed to be 14-bit, but my images are actually 16-bit, with 65536 ADU as a maximum. You should check your image statistics to make sure.
2. If your camera is a one-shot color camera, it uses a Bayer filter array: color values for each pixel are computed from the known transmissions of the red, blue and green filters in the array. The filters are arranged in groups of four pixels; each group has one red, one blue and two green filters. These are TRICOLOR imaging filters, not standard photometric filters. Tricolor sensors are designed to make the resulting images look as realistic as possible, hence the two green filters per group: human vision is primarily sensitive to green light.
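The 80%-of-full-scale rule from point 1 is easy to parameterise for different bit depths. A hedged sketch (the function name and the 0.8 safety factor are my own choices for illustration):

```python
def saturation_limit(bit_depth: int, safety: float = 0.8) -> int:
    """Maximum usable ADU for a given ADC bit depth.

    safety=0.8 reproduces the 13107-ADU rule of thumb for 14-bit data.
    """
    return int((2 ** bit_depth) * safety)

limit_14bit = saturation_limit(14)  # 13107, matching the rule above
limit_16bit = saturation_limit(16)  # 52428, for 16-bit-scaled images
```

Any star whose peak pixel exceeds the relevant limit should be excluded from the measurement.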
I'm unfamiliar with FITSWORK. I use AIP For Windows, and it's fully capable of doing everything needed to obtain photometry from start to finish. It's also available free, but it is dated, currently unsupported and has limitations. You need software that has photometric capabilities to make accurate measurements.
The end goal of image processing for tricolor photometry is a single grayscale image for each color you want measured: one with only green data, one with only red data, and one with only blue data. AIP4WIN has several options to split color images, often with one click. I think your camera produces a color FITS image, so I would look to see if your program allows you to separate a single tricolor image into the separate color images you need. Most DSLR image processing requires raw-to-FITS conversion first; I don't think you will need that step.
All my color experience is with DSLR raw images, not color FITS. I hope this hasn't been too confusing.
John
John,
You wrote:
"I use a monochrome 533. It is supposed to be 14-bit, but my images are actually 16-bit, with 65536 ADU as a maximum."
IMHO the above is not correct. I have used two different ZWO ASI cameras, the 1600MM and 294MM. The 1600 and the 294 used in "unlocked bin 1 mode" are 12 bit instruments. I believe the 294 when binned 2x2 has 14 bit ADC.
However, the images from these cameras show 65536 ADUs at saturation. This is the case with ZWO software, and other photometry packages (I use AIJ mostly).
The cameras are in fact 12- or 14-bit instruments. The ADUs are simply scaled up to a maximum of 65536.
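The scaling described above is just a bit shift: a 12-bit value is multiplied by 16 (and a 14-bit value by 4) so that saturation lands near the top of the 16-bit range. A minimal sketch, assuming a simple left-shift is what the driver does:

```python
def scale_to_16bit(adu: int, native_bits: int) -> int:
    """Left-shift a native-bit-depth ADU into a 16-bit container."""
    return adu << (16 - native_bits)

# A saturated 12-bit pixel (4095) reads 65520 in the 16-bit image,
# and a saturated 14-bit pixel (16383) reads 65532 - both near 65535.
full_well_12bit = scale_to_16bit(4095, 12)
full_well_14bit = scale_to_16bit(16383, 14)
```

This is why saturation statistics in AIJ or the ZWO software report values near 65536 even though the ADC itself never produces more than 4096 or 16384 distinct levels.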
Roy