QHY600 tests

Affiliation
American Association of Variable Star Observers (AAVSO)
Mon, 06/29/2020 - 18:59

Since there seems to be considerable interest in CMOS sensors, I thought I'd give you a blow-by-blow account as I test a new QHY600 camera over the next week or so.  The OC61 (Optical Craftsman 61-cm) telescope node of AAVSOnet is located at Mt. John, New Zealand.  This telescope is a true Cassegrain, with a final focal ratio of f/14.6.  When we upgraded this system to robotic operation about a decade ago, Robert Ayers contributed his FLI PL09000 camera with filter wheel.  It has been a workhorse for us, but suffers greatly from Residual Bulk Image (RBI) effects.  These latent images require an LED preflash before every exposure to "wipe" the array of the RBI from prior images.  The preflash dramatically increases the noise level in the images, such that we are probably losing a magnitude on the faint end for photometry.  We needed this big sensor in order to get a reasonable field of view.  Even with this 52mm diagonal sensor, we only have a 14x14 arcmin FOV, and have to bin 2x2 to yield 0.55arcsec pixels.  I've seriously considered adding a focal reducer, but haven't found the right product yet.

Dick Post has generously funded the upgrade of the camera system to the QHY600.  We've also purchased the 7-slot 50mm square filter wheel to go with this, so that we can re-use the existing BVgri filters.  We now have 2 empty slots that need filters!  Over the next week or so, I'll test this camera in my lab, so that I know its characteristics before shipping.  Hands-on control works much better than remote testing!

While the QHY600 (IMX455 sensor) has 9576x6388 effective pixels, each is only 3.76 microns in size.  That means to yield the same 0.55arcsec/pixel scale, you would need to bin about 6x6!  The QHY600's ASCOM driver only supports binning up to 2x2.  We will probably bin 2x2 to reduce the stored image size to 15 megapixels, and then transfer these images back to HQ for calibration (dark subtract/flat field) using our pipeline.  Somewhere along the line, we'll probably do a further stage of binning.  The site can have 1-2 arcsec images, though most are 2-3 arcseconds.  How to handle this best in software will be a challenge.
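For anyone who wants to check the arithmetic, here is a minimal Python sketch of the plate-scale calculation (the 61-cm aperture, f/14.6 focal ratio, and 3.76 micron pixels are the numbers quoted above; nothing else is assumed):

    # Plate scale and binning arithmetic for the OC61 + IMX455 (illustrative sketch).
    ARCSEC_PER_RADIAN = 206265.0

    aperture_mm = 610.0      # 61-cm Optical Craftsman
    focal_ratio = 14.6       # final focal ratio quoted above
    pixel_um = 3.76          # native IMX455 pixel size

    focal_length_mm = aperture_mm * focal_ratio
    scale_1x1 = ARCSEC_PER_RADIAN * (pixel_um / 1000.0) / focal_length_mm

    for binning in (1, 2, 4, 6):
        print(f"{binning}x{binning}: {binning * scale_1x1:.3f} arcsec/pixel")
    # Prints roughly 0.087, 0.174, 0.348, and 0.522 arcsec/pixel, so ~6x6 binning
    # is needed to reproduce the 0.55 arcsec pixels of the binned KAF-09000.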

Note that, while the pixel size is small, each pixel has a pretty decent full well depth.  QHY says you can get up to 80Ke-, but that depends on a lot of factors.  You have three different imaging modes, and can set the gain and offset for each mode.  You can also include or exclude overscan pixels, and change the USB transfer rate to lessen interference.  Lots of knobs to twiddle, and each affects the full well depth and read noise!  The mode we're choosing gets about 50Ke-, which is still pretty good, along with 3.5 electrons of read noise.  However, this means if you keep a 16bit FITS file output, something has to give when you bin.  If you do the binning entirely in software, say in MaximDL, you can store the output image in 32-bit floating point and not lose any precision.  This adds another layer of complexity.  For the time being, let's assume 2x2 binning and 16-bit FITS storage to keep things simple.

This sensor is only 42mm diagonal, so 50mm round filters could be used with it, but we're sticking with our original filter set.  However, the field of view is even smaller - about 14x9arcmin.  A focal reducer/field flattener/coma corrector would be really nice.  One big advantage with the IMX455 sensor is that it is back-illuminated and has greater than 87% peak QE according to QHY, compared with the 64% QE of the KAF09000.  Combined with no longer needing the RBI preflash, we're expecting about 2 magnitudes of improvement with this camera, along with lots of other nice features, such as very short exposures and very fast readout time.

This is a USB3.0 camera that can take around 2 frames per second, so the transfer rate is very high.  Not all computers can handle this, and extending USB3.0 cables is tricky.  The Kepler KL400 camera from FLI is an example of a camera that doesn't work with all USB cables.  I've asked Nigel Frost, the Observatory Superintendent, to give me the needed cable length, and then I'll configure the system here and confirm that it works before shipping the camera/filterwheel/USB3 cables/etc. down south.

So at minimum, here are the upcoming tests:

- unpack the camera and filter wheel

- test basic operation with the computer and cable that will be used in New Zealand

- test gain and readnoise of desired operating mode

- test linearity and full well

- test dark current, hot pixels, and cosmetic defects at several different operating temperatures

- look for bias stability and whether to use overscan

- on-sky tests for RBI, squareness of sensor to optical path, etc.

- anything else that comes to mind

The next post will describe the unpacking and basic operations of the QHY600.  If anyone has any questions/comments/suggestions, please feel free to post.  I do not profess to know everything about the camera, nor assume that I will do every important test!

Arne

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Excellent. 

Excellent. 

I am planning to create a "Best Practices Guide" for those interested in acquiring a CMOS camera, but I think this merits a separate guide on "receiving and testing any new camera." Please take some good photos of your testing setup.

Meanwhile, noting that the Forum has hosted repetitive and redundant threads on photometric filters, I assembled a "Best Practices Guide" for photometric filters, and am looking for comments and corrections before posting it on the Section page:

https://www.dropbox.com/s/4vfv1vpad763awi/AAVSO_Instr%26Equip_Photometr…

These compilations are intended to be short and sweet. In this compilation, you will note your own words coming back to haunt you. 

--Richard

 

Affiliation
American Association of Variable Star Observers (AAVSO)
QHY600 Tests

Thank you for your work! I look forward to your results. Best regards.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
The OC61 (Optical Craftsman

The OC61 (Optical Craftsman 61-cm) telescope node of AAVSOnet is located at Mt. John, New Zealand.  This telescope is a true Cassegrain, with a final focal ratio of f/14.6.

and

While the QHY600 (IMX455 sensor) has 9576x6388 effective pixels, each is only 3.76 microns in size.  That means to yield the same 0.55arcsec/pixel scale, you would need to bin about 6x6!

Sounds like they would be better off with a camera using a KAF-1001 CCD sensor.  That would be a perfect match for the 0.55 arc-second/pixel scale, though the FOV would be 9.26' × 9.26'.

Affiliation
American Association of Variable Star Observers (AAVSO)
FIlters "best practices"

Thanks, Richard, for the link to your notes about detectors and filters. Trivially:  do note that Mike Bessell has two el's in his surname --- it is mis-spelled all over the literature and in observatory instrument manuals etc.  (The famous Bessel-with-one-l was the mathematician...)

     I wonder if the advice sought in re filters centers on what filters are appropriate for what objects.  Objects like asteroids are grey (have essentially no color variation), so running unfiltered is okay as long as you choose comparison stars that are similar in color to asteroids (roughly 0.6 < B-V < 0.95 or equivalent ranges in other systems).  Beyond just the rotation period, one does prefer to be able to use the data for matching with other data, determining the phase-function, etc, so one might want to use an Rc or Sloan r' filter (to maximize sensitivity), and adjust the data to the standard system.

     I also agree that for instances where you want merely a raw detection, or the on/off state of some transient target, running unfiltered is OK.  But even so one might want to adjust the zero-point at least approximately to (say) Sloan r' so that things can be compared with other results.

     For stars, I agree that just having a V filter is the best single filter.  In principle of course one needs to observe in two filters to get a proper V magnitude because of color terms.  Either B,V or V,I would be preferred.  For most stars, rather generally, the preference would be for B,V,I photometry to get color changes as the star varies, either from pulsation (RR Lyraes, Cepheids, Miras) or to model component temperatures of eclipsing binaries.  Those three filters give you the leverage needed for those determinations.  I would argue that Sloan z', another 1000A longward of I, is even better if the detector is sensitive out at ~9000A.  (Going back to asteroids, it turns out you get a fair estimate of the mineralogy of an asteroid from Sloan g,i,z.  The g-i color-index gives you a color slope, and i-z measures the depth of the pyroxene band (in the z filter), which together are very diagnostic of the overall composition.)  Probably the hardware situation is such that if you have more than one filter available, then you can do three.  My argument would be to always take data in at least two filters if possible so that color information and consistency checks are available (whatever happens in one filter needs to show up in an expected way in data taken with another filter at nearly the same time).

\Brian

Affiliation
American Association of Variable Star Observers (AAVSO)
I'll integrate your comments

I'll integrate your comments to some extent. Bert sent me a document that will be the basis for a Best Practices Guide on what filters to use for what type of observation, and I'll integrate more of your notes there. Thanks!

Richard

 

Affiliation
American Association of Variable Star Observers (AAVSO)
QHY600 out of the box

Sorry about the delay for the second post.  I wanted to test the camera as it will be installed in New Zealand.  The Observatory Superintendent there measured the distance from the telescope back end to the control room.  While we could mount the computer physically on the telescope, it would then be subject to the ambient conditions, and things can get pretty dicey during the winter.  To make things easy, I wanted to extend the USB3 cable back to the control room.  Nigel says it is 11.5 meters, so I purchased the Tripp-Lite U330-15M 15m extender cable.  Afterwards, I realized that I probably could have used Tripp-Lite's U330-10M 10m cable, but this gives a little flexibility.  Anyway, that cable just arrived, and so now I can test the camera properly.

I'm including a bunch of photos and images as I go along, so that you can see what you get.  The QHY600 Early Bird camera comes in a very nice 30x24x17cm cardboard box, with everything nicely separated and cushioned.  The first 3 photos show the box and its contents.  Each of the small boxes contains things like the 12V 6A TEC power supply (other vendors make you buy this separately), a female dovetail and several spacers.  I can't think of anything else that you would need to run this camera.  You can either mount it with a dovetail (preferred), or M54 thread.  It comes with an M54 to 2" nosepiece.  A photo is included showing the back end of the camera.  They supply a short jumper cable that has a threaded 2.1mm plug to firmly attach to the camera.  The other end of the cable is a 2.1mm female connector to attach to the power supply cable.  A supplied 2m USB3 cable normally attaches to the computer, though in our configuration, I'm going to use a 1m USB3 cable to attach to the Tripp-Lite extender.  There are two fiber connectors for running the 10Gbps fibers, if you upgrade to the professional camera later (it gains you about 2x transfer speed, so that you can do 4fps instead of the nominal 2fps readout).  Note, however, that using the 10Gbps fibers, you will need a standard desktop computer with a PCI fiber board.  A 4-pin GPIO socket is available, but without any pinouts or documentation.  There is a connector for a QHY filter wheel; more about that in a subsequent post.

The Tripp-Lite U330 extender will provide up to 388 mA of current.  They include a 1.35mm jack at the camera end for inserting 5V if that amount of current is insufficient.  In that case, you have to purchase a separate 5V 2A supply.  The maximum data transfer rate is 5Gbps.  Each IMX455 image is 120MB, so 2fps requires 2.4Gbps, below the maximum.  Note:  not all USB3 cables, and USB3 extenders, are equal.  Some cameras, like the KL400, are very picky about what kind of cable will work with them.  QHY and ZWO are far less sensitive, but be prepared to try two or three cables until things act right.
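As a quick sanity check on those bandwidth numbers, here is a back-of-the-envelope Python calculation (16-bit pixels assumed; USB3 line-coding and protocol overhead are why the raw payload figure comes in a bit under the 2.4Gbps quoted above):

    # Rough USB3 bandwidth estimate for full-frame IMX455 readout (sketch only).
    width, height = 9576, 6388   # effective pixels
    bytes_per_pixel = 2          # 16-bit output
    fps = 2                      # nominal full-frame rate

    frame_mb = width * height * bytes_per_pixel / 1e6
    payload_gbps = width * height * bytes_per_pixel * 8 * fps / 1e9
    print(f"{frame_mb:.0f} MB per frame, ~{payload_gbps:.1f} Gbps of payload at {fps} fps")
    # ~122 MB/frame and ~2 Gbps of raw payload; with transfer overhead this is
    # consistent with the ~2.4 Gbps figure, still comfortably below the 5 Gbps limit.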

There is a software driver/ASCOM/EZCAP "System Pack" that is the best way to download the necessary drivers.  Look around for this on the QHY web site; it is not always obvious.  EZCAP is their image acquisition package and is useful if you don't have MaximDL or want some specific features not available in MDL.

After installing the drivers, I connected to the camera over the 15m extender - yay!  I downloaded a quick bias frame, and the image transferred in less than a second, and looked normal.  Sorta like first light with your telescope!

Turning on the TEC cooler, it looks like you can get about 30C of cooling.  The lab was at 22C, and at -10C, the system was using 83% of power.  According to Maxim, the temperature was stable at the 0.1C level with a reasonable time constant for corrections.

I first did a couple of test darks.  Using a 300-second exposure time and the included camera-lens cover, I obtained the image dark300a.  This is shown in inverted gray scale, so that darker areas in the display correspond to higher counts.  It is easier to see defects this way.  Note that there is a low-level vertical streak in this image.  I then additionally covered the front of the camera with black cloth, and took dark300b.  Note that the streak goes away.  It is hard to prevent light from getting into a camera, so be aware that you may find streaks on long darks unless you take precautions.  I always do my darks at night in the closed telescope enclosure, and usually also cover the telescope tube or use a blank filter.  This is one "defect" of CMOS cameras - they usually do not include a mechanical shutter, relying on the readout method to give an electronic shutter.  This works for most exposures, but definitely not darks.  You need some mechanism to keep light away from the sensor.

Looking at dark300b, there are a couple of things to notice.  First, there is no amplifier glow.  Earlier CMOS generations have amplifier glow, and even the FLI KL400 sCMOS camera that Gary Walker has suffers from this effect.  In general, if you have amplifier glow, you can subtract off a dark frame and the glow will disappear.  However, the NOISE from that glow remains, and so there will be areas of increased dark current noise near the sensor edges if it suffers from amplifier glow.  For the QHY600, this issue does not exist.  I see a lot of hot pixels that I haven't quantified yet.  The hot pixel pattern stays the same and so is inherent to the sensor and not related to the readout method.

Because there are hot pixels, I'll do more dark frame testing than I do with other sensors.  How many hot pixels exist?  How many saturate at 300seconds, typically the longest subexposure that I do?  Is the pattern truly fixed from frame to frame?  Does the flux in each hot pixel scale with temperature?  How does the bias level change with TEC temperature and ambient temperature?  You may not need to know this level of detail for your specific camera and use, but I like to know everything I can about a new camera.

The next post will describe setting the gain/offset/read mode, as well as the dark current tests.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Arne: this is stupendously

Arne: this is stupendously helpful. The dark images are good news; let's hope the hot pixels get better over time with their manufacturing experience. I'll be very, very interested to learn how linearity depends on settings.

Question to all you CMOS filter wheel vendors: for dark images, why in the world don't you just include a cheap black filter? That is, one that is guaranteed to fit perfectly in that very wheel, won't leak light, won't warp, etc. in cold or heat. It would cost you well under $0.25 US, and every single CMOS camera is going to need one.

 

Affiliation
American Association of Variable Star Observers (AAVSO)
I have a black filter and

I have a black filter and light still gets past it.  It's only good for additional protection while taking dark or bias frames.  So, test your black filter before assuming that it will block all incoming light from reaching your sensor.

Affiliation
American Association of Variable Star Observers (AAVSO)
blank filter

There are several blank/dark filters on the market.  I've purchased a sampling and am testing them.  They are helpful, but not perfect, which is another reason for doing darks at night.  The main reason why you might not want one is that they take up a filter slot.   I always seem to need one more filter. :)  The E-180 astrographs we are using on the BSM systems have a nice tight tube cover, and the older refractor tubes were easy to cover - if the telescope was in your back yard.

There are some good reasons for using a blank filter.  As an example, if the bias level changes with ambient temperature, then you may want to take bias/dark frames before and after your science frames.

I'll discuss filter wheels after the camera tests.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
That F/14.6 is extremely slow

That f/14.6 is extremely slow for imaging! So, it's too bad the telescope itself is the issue down there. If it's a classic Cass with a parabolic primary, maybe eliminate the secondary and place the camera at the prime focus, with a Paracorr?

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
QHY600 Darks

Hello! I'll be curious to read about your Darks analysis.

    If I remember correctly, there had been concerns raised with CMOS that a darks library prepared ahead of time might not match the actual images taken on later nights as well as it does for CCD imagers. Best regards.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
QHY600 - setting mode, gain, offset

Before you can do almost any test with the QHY600, you have to study the various ways the camera can be configured.  Normally you get much of this information from the sensor data sheet, but the IMX455 data sheet seems to be proprietary.  I would not be surprised if you had to sign a non-disclosure agreement before getting a copy.  If someone has a copy, I'd love to see it!  The next best thing is to look at writeups from Dr. Qiu himself.  He does a pretty reasonable job of describing the effects of tweaking parameters on his website:

https://www.qhyccd.com/index.php?m=content&c=index&a=show&catid=94&id=55&cut=1

under the specifications tab.  There are three readout modes available for the QHY600.  I don't know if these are the same for the ZWO ASI6200 and other cameras using the IMX455 sensor, but these are the ones we have to work with for the QHY600.  One of the added complications is that each ADC on the sensor appears to have two amplifiers, and depending on the gain, one amplifier or the other is used.  The two amplifiers have different readnoise and gain characteristics, so the various curves have large shifts when you switch from one amplifier to the other.

Mode # 0 (Photographic).  This has very large full well (>80Ke-) at gain 0, but readnoise = 7.5e- at that gain.  This mode is best for a specific regime: low readnoise (2.5e-) with full well between 20-27Ke-.  I don't see a lot of advantage to using this mode.

Mode # 1 (High Gain Mode).  This one has 3.5e- readnoise from gain = 0-55.  At gain=0, it has 0.8e-/ADU and a full well of 50Ke-.  At gain 55, it has 0.35e-/ADU and a full well of 24Ke-.  It is the best mode over almost the entire parameter space for the sensor in the sense of the greatest dynamic range, but it doesn't have the best full well per pixel (modes 0 and 2 are better).  Mode 1 has the lowest readnoise value of around 1.5e- at a gain of 56, where you have full well of 22Ke-.  This would be an ideal setting for spectroscopy, where readnoise and not sky noise is the limiting factor.

Mode # 2 (Extend Full Well).  This one has the same >80Ke- full well and 7.5e- readnoise at gain=0 as does mode#0, but keeps a higher full well throughout its gain range.  That said, the readnoise stays high, never dropping below 6e-.

Note that these full well values are true full well.  Because these are anti-blooming sensors, the linear full well is less.  Dr. Qiu suggests the sensor is linear to about 70Ke- in modes 0 and 2.

So based on these curves, which mode is best for wide band imaging?  You want the highest dynamic range, because you are always trying to measure some bright object in comparison to some faint object near the sky limit.  Dynamic range is generally expressed by dividing the full well by the readnoise.  You can then convert the ratio into dB, or into photographic stops (powers of 2).  The highest dynamic range on the IMX455 is about 13.6 stops, not bad.  I suggest that you run the IMX455 in mode #1, and at gain 0.  That gives full well of 50Ke- at 65535 counts.  During our testing, we'll see if that is also the linear full well, and if not, we may adjust the gain so that you lose a little full well depth but get a linear response to 65535.
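To make that figure of merit concrete, here is a tiny Python helper implementing the dynamic-range calculation described above (the full-well and read-noise pairs are the ones quoted for the three modes at gain 0; the exact stop count depends on which numbers you plug in, so treat the output as illustrative):

    import math

    def dynamic_range(full_well_e, read_noise_e):
        """Return the full-well/read-noise ratio in dB and in photographic stops."""
        ratio = full_well_e / read_noise_e
        return 20.0 * math.log10(ratio), math.log2(ratio)

    # Full well (e-) and read noise (e-) quoted above for gain 0 in each mode:
    for name, fw, rn in [("Mode 0 (Photographic)",    80000, 7.5),
                         ("Mode 1 (High Gain)",       50000, 3.5),
                         ("Mode 2 (Extend Full Well)", 80000, 7.5)]:
        db, stops = dynamic_range(fw, rn)
        print(f"{name}: {db:.1f} dB, {stops:.1f} stops")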

This discussion assumes that you are running the sensor in native 1x1 binning mode.  Big full well for a 3.76micron pixel!  If you bin 2x2, then the gain setting depends on how the binning is handled by software, since the binning is handled off-sensor.  You want the maximum dynamic range for the resultant pixel, and that might take a bit of adjustment of the gain setting to minimize the readnoise while maximizing the signal.  We can make that adjustment during the linearity test.

There is also an offset parameter that can be adjusted.  That basically just adds a pedestal or baseline to each pixel.  I usually adjust the offset so that the minimum level in the image is a few hundred counts.  With my QHY600, and mode 1, gain 0, this is an offset of about 10.

So for the upcoming tests, I'm selecting mode 1, gain0, offset10.  You set these parameters in the driver when you choose the camera in MaximDL.  This will give me roughly 50Ke- full well, readnoise 3.5e-.  The readnoise will be negligible for most wide-band filter photometry as the sky noise will dominate.  These are good starting values; we'll see how the actual camera compares.

Arne

Further information on the IMX455 and QHY's differences

Hi Arne and all, 

I've found a resource on the differences between the QHY600 and the ZWO ASI6200mm in detail. The article is no doubt biased towards QHY in trying to justify the price increase (e.g. naming "innovative technologies"), but it is possible to list the facts: 

 

1. Consumer grade vs industry grade

As mentioned on the QHY website, both the new 268 (APS-C) and 600 (FF) use the industry-grade version of this sensor. I have not found any concrete resource on this. An email to ZWO told me "it does not make a difference", which truthfully seemed unlikely at the time; a direct phone call to Sony left me in a holding line for 40 minutes after being redirected; and a QHY camera distributor did not know any more technical details beyond what was listed on the QHY website. 

According to the website linked above and a reference of theirs (Framos), industry-grade sensors have more rigorous testing, a longer "Mean Time Between Failures", and a tougher base material (ceramic). They also mention a "Land Grid Array" package designed by Sony which minimizes thermo-mechanical stress on the sensor directly. They finally mention "a significantly lower number of pixel defects", though it remains unclear how this is accomplished: whether through stricter sensor rejection before being sold by Sony to manufacturers, or because the production is fundamentally different and thus more expensive. 

 

2. Readout modes

The QHY version has different readout modes. Arne has already gone into great detail with them, far more than the website has. 

 

3. Field Programmable Gate Array & memory

The QHY cameras have an on-board chip which allows firmware updates directly to the camera (for example, for further readout modes). They also have far greater on-board memory, but that was already mentioned earlier in this thread. Quote from the article:

"It makes sense to make the large storage capacity available in the QHY 600 Photo so that the sensor can be read out immediately. BECAUSE: A CMOS BSI image with Rolling Shutter technology is exposed until it is 100% read out - otherwise this leads to irregularities in the image.

They also name the link www.qhyccd.com/index.php?m=content&c=index&a=show&catid=23&id=315.

 

4. Noise suppression technology

This one is new to me and there is not too much info listed on how it works. According to QHY, they have "tuned the factory sensor" to remove horizontal noise and make the image cleaner. See the link and also this QHY page. If anyone reading this knows more about this, please leave a comment. It sounds interesting, and I wonder if a direct comparison between the different IMX 455 cameras at high gain (QHY and non-QHY) would show a noticeably different result. 

 

Finally the article lists that the "Pro" version of their cameras can directly interface to GPS boxes, but I'm unsure of how useful this is and if it would offer any advantage beyond conventional GPS devices. Comparable to how many cameras say that they have a direct interface to a guide camera, but in practice it is better to hook the guide camera up to the PC directly via USB.

Either way I hope this clears up some of the differences between the "IMX 455-K" and QHY/others in general, since many are currently faced with the "QHY600 or ASI6200" decision (or "QHY268 or ASI2600" soon). Please correct me if there are any mistakes or inaccuracies in this; I'm of course neither affiliated with QHY nor ZWO and personally use both brands' cameras. 

Leon 

Affiliation
American Association of Variable Star Observers (AAVSO)
Thanks SOooo much Arne!

I took a leap of faith and got a 600 for my main camera, for the reasons you know.  I came close to an SBIG 6303e, but it seemed worth making the jump to CMOS.  And now you've made it so much less lonely to get going with it.

I recently completed your CCD school video series, which I purchased through the AAVSO.  So valuable. 

Affiliation
American Association of Variable Star Observers (AAVSO)
Another test to consider

Arne,

Dennis di Cicco reviewed the QHY600 in the July issue of S&T.  He mentioned that darks not taken within a few nights of the light frames did not "work as expected," but did not expound on that.  Hard to know exactly what he was seeing, but it could mean that darks aren't stable over time.  You note that you'll check bias stability.  Maybe a test of dark stability with time is worthwhile too.

Clearest skies,

Walt

Affiliation
American Association of Variable Star Observers (AAVSO)
Thank you!

Arne, thanks so much for posting this thread.  We are also considering this camera as a possible replacement for our aging Apogee U16M (KAF-16803 chip), so it's very useful to see your testing and comments.  I look forward to the next installment!

Eric Jensen

Swarthmore College

Thank you as well!

Thank you very much for this thread Arne! I've decided to purchase the QHY600 PH and will be receiving it in several weeks, and the information in this forum post is very useful. As you mentioned, there is somewhat of a lack of data sheets on the sensor and testing data online. 

This is a valuable addition to the instrumentation and equipment forum. If I come across anything interesting or find something to note once it arrives, I'll add it here. For now, a big thank-you from me as well!

Affiliation
American Association of Variable Star Observers (AAVSO)
QHY600 - binning and blinker

The QHY600 ASCOM driver, accessed through the setup menu of MaximDL, has both 1x1 and 2x2 binning checkboxes.  Kinda weird that you can select binning at this low level!  Then, with MDL's camera control window, you can also select 1x1 or 2x2 binning.  So what gives?

If you select 1x1 binning in the driver, then MDL will give you two binning factors:

1x1, with image size 9576x6388 (no overscan)

2x2, with image size 4788x3194

Note that higher binning factors are not available with the QHY setup.  When binning, pixels are summed, but only to the 16-bit limit.  That is, if each native pixel has value 500, then 2x2 binning will yield binned pixels with value 2000.  If the native pixel is 65535, then the binned pixel is 65535, not 4x65535 = 262,140.  So the first thing to note is that binning with the QHY does NOT increase dynamic range.  For a CCD system, the system gain is often changed in binning mode, so that each binned pixel contains more electrons.  Unfortunately, with the CMOS systems, you cannot go to a gain factor less than 0, so however they have the lowest gain set up, you have to live with it.  QHY implies that 2x2 binning yields an 18-bit image, but at least MDL doesn't see it that way.  If you know the trick to get a full 18-bit image with MDL, let me know!

If you select 2x2 binning in the driver, then MDL still gives you 1x1 and 2x2 binning options, but now those binning factors are with respect to an already 2x2 binned image.  That is:

1x1, with image size  4788x3194

2x2, with image size  2394x1597

So the net effect is that you have available true 1x1, 2x2 and 4x4 binning options - just not all at the same time.  This will be useful for the New Zealand setup, as the f/14.8 telescope has such an expanded plate scale that we will want to use 4x4 binning most of the time, yielding 0.343arcsec pixels, not a great match but as good as we can do easily.  Note, however, that the "full well" for that 4x4 binned pixel is still only 50K electrons, when you'd like it to be 50K x 16 = 800,000 electrons.  At the same time, the readnoise has now increased to 3.5e- x sqrt(16) = 14e-.  Binning decreases the dynamic range in this case.

If you did not bin, but instead saved the native 1x1 binned image and then binned in software later, you could retain the full dynamic range.  We may do that some day - it would take an MDL binning plugin or an ancillary program.  We don't want to save 120MB images and then have to transfer them to HQ/VPHOT.
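Here is a small numpy sketch of that dynamic-range point: summing 2x2 blocks into a 16-bit output (clipped at 65535, as a 16-bit binned result must be) versus summing into 32-bit floating point in software.  The pixel values are made up, and this illustrates the bookkeeping only, not the driver's internal implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical 1x1 frame with some pixels near 16-bit saturation.
    img = rng.integers(400, 60000, size=(4, 4)).astype(np.uint16)

    def bin2x2_clipped(a):
        """Sum 2x2 blocks, clipping at 65535 - what a 16-bit binned output must do."""
        s = a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).sum(axis=(1, 3))
        return np.clip(s, 0, 65535).astype(np.uint16)

    def bin2x2_float32(a):
        """Sum 2x2 blocks into 32-bit float, preserving the full summed range."""
        return a.astype(np.float32).reshape(
            a.shape[0] // 2, 2, a.shape[1] // 2, 2).sum(axis=(1, 3))

    print(bin2x2_clipped(img))   # bright 2x2 blocks pile up at 65535
    print(bin2x2_float32(img))   # the same blocks keep their true summed values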

Enough on binning!  It is a logistical problem that most people won't run into, unless they have a 9m focal length.  What I'd like to start to talk about is tests with a light source.  These are used to evaluate several aspects of the camera, and are one of the truly important things you should do with your setup.  In particular:

- using two flats and two biases will yield the read noise and gain for your system, using the "Janesick method".

- examining a flat can show cosmetic defects - bad rows/columns, dead pixels, low sensitivity regions, dust on the sensor cover plate, etc.

- using a series of flats checks for sensor linearity and full well.

The first thing you need is a stable light source, preferably with a brightness control.  You can do a linearity curve with almost any light source that is moderately stable, like a desk lamp light bulb, and can determine the full well count level with such a setup.  However, it does take some extra stability-checking exposures.  I strongly prefer using a very stable light source in the first place, and one of the best is a pulsed LED.  A constantly lit LED tends to decrease in brightness with time due to self-heating.  If you pulse it, the LED doesn't get hot enough to drop in intensity.  Such a pulsed system is extremely easy to build today.  I use an Arduino UNO equivalent board, an LED and a current-limiting resistor.  Arduino shows you such a circuit in their examples.  You connect the LED/resistor across pins "gnd" and "13" on the Uno board.  The Arduino code (blinker) that I use in the basic system is available, but we're not allowed to attach files.  If that restriction changes, or I find a site to put the file on, I'll modify this post.  Blinker generates a 5ms pulse, followed by a 10msec cooldown period, and then repeats as many times as you want.  1000 such pulses takes 15 seconds.  Then what I do is set one exposure time for the camera in MDL so as to not complicate dark current or shutter issues, and run through a series of LED pulses, from barely detectable to complete sensor saturation.  Those pulses start after the exposure begins, and end before the exposure finishes.  That means the brightness is set by the number of pulses, not how long the camera exposes.  I set the camera exposure time long enough to handle a saturation-level of pulses plus human keypress times to start the exposure and then start the blinker.

LEDs without a camera lens will give a vignette-like pattern on the detector.  To make it more uniform, I tape a piece of opal plastic to the front of the camera.  I purchased a small white opaque acrylic sheet about 1/8" thick through Amazon, and cut off pieces as I need them.

While I'll describe measurements based on this pulser, you can use any other available light source.  The more constant, the better.  The light source does NOT have to be white, as you are testing the camera unfiltered and just looking for any detected light.

There are two basic tests to do at the beginning.  First, take exposures with varying numbers of pulses until your camera saturates.  You will need to set your camera exposure time to something larger than this.  Then calculate how many pulses it takes to get 32K counts per pixel, and use that value to obtain two flats with about half-well exposure.  Then take two bias (or zero) frames.  These four frames constitute the minimum necessary information to determine read noise and gain.  Jim Janesick, a wonderful pioneer in testing CCDs, developed a technique that is called in the professional world "Janesick's method".  As implemented in IRAF's findgain task, their description is:

    "The formulae used by the task are:

            flatdif = flat1 - flat2

        zerodif = zero1 - zero2

           gain = ((mean(flat1) + mean(flat2)) - (mean(zero1) + mean(zero2))) /

                  ((sigma(flatdif))**2 - (sigma(zerodif))**2 )

           readnoise = gain * sigma(zerodif) / sqrt(2)

        where  the  gain  is given in electrons per ADU and the readnoise in

    electrons.  Pairs of each type  of  comparison  frame  are  used  to

    reduce  the  effects  of  gain  variations from pixel to pixel.  The

    derivation follows from the definition of the gain (N(e)  =  gain  *

    N(ADU))  and  from  simple  error  propagation.   Also note that the

    measured variance (sigma**2) is related to the  exposure  level  and

    read-noise variance (sigma(readout)**2) as follows:

             variance(e) = N(e) + variance(readout)

        Where  N(e)  is  the number of electrons (above the zero level) in a

    given duration exposure."

Note that you don't have to bias subtract the flats (bias subtraction doesn't hurt, but you need high S/N master biases), and you can use a region of interest on the flats and biases instead of using the entire frame.  This is useful for when you have vignetting in the image.
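For those who prefer code to formulae, here is a minimal numpy transcription of the recipe above; it assumes the two flats and two biases (or matching sub-regions of them) are already loaded as 2-D arrays, e.g. with astropy.io.fits (the file names and region in the commented example are just placeholders):

    import numpy as np

    def janesick_gain_readnoise(flat1, flat2, zero1, zero2):
        """Gain (e-/ADU) and read noise (e-) from two flats and two biases."""
        flatdif = flat1.astype(np.float64) - flat2
        zerodif = zero1.astype(np.float64) - zero2
        gain = (((flat1.mean() + flat2.mean()) - (zero1.mean() + zero2.mean()))
                / (flatdif.std() ** 2 - zerodif.std() ** 2))
        readnoise = gain * zerodif.std() / np.sqrt(2.0)
        return gain, readnoise

    # Example usage (placeholder file names and region of interest):
    # from astropy.io import fits
    # flat1 = fits.getdata("flat1.fits")[2000:3000, 3000:4000]
    # flat2 = fits.getdata("flat2.fits")[2000:3000, 3000:4000]
    # zero1 = fits.getdata("bias1.fits")[2000:3000, 3000:4000]
    # zero2 = fits.getdata("bias2.fits")[2000:3000, 3000:4000]
    # print(janesick_gain_readnoise(flat1, flat2, zero1, zero2))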

What are the results?  For the QHY600, using the High Gain mode with gain= 0 and offset = 10, I get a gain of 0.79e-/ADU and a readnoise of 3.05e-.  That means 65535 counts corresponds to 52Ke-, about what QHY specs.  Now, on to the linearity curve!

Arne

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
4x4 binning option

Hi Arne,

I appreciate very much your effort in characterizing the QHY600 CMOS camera.

As you know, I have one as well. So far I have done some HADS investigations with it in May, taking 2x2 binned images (file size about 30MB). I have read your post, and you mention that you seem to have achieved 4x4 binning. Unfortunately my file size numbers don't match your post. How did you do that? I tried to use 2x2 binning in the QHY ASCOM driver and 2x2 binning in MAXIM, but the result was only 2x2 binning in the end.

Thank you for clarifying this. 

Josch

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
4x4 binning possible with new all-in-one pack for windows

Hi,

I have used my QHY600M early bird version now for more than one year. First in 2x2 binning mode on a C14 Edge HD at f/7, and for the last couple of months in 4x4 binning mode with the new all-in-one pack from QHY. By binning 4x4 I gained about a factor of 2 reduction in exposure length compared to the 2x2 binning version.

I can only recommend that those of you who have a QHY600 or QHY268 16-bit CMOS camera upgrade the drivers.

Although not everything is perfect with these CMOS cameras, I love mine and get about 1000 images a night now in 4x4 binning with much reduced file sizes.

In recent weeks I have found that the camera sometimes becomes inaccessible via MAXIM DL and needs to be power cycled, but it then runs most of the time during the night without problems.

Regards,

Josch

Affiliation
American Association of Variable Star Observers (AAVSO)
linearity test

First, Josch points out that he was not able to get 4x4 binning on his QHY600.  While doing the linearity test, I noticed that the 4x4 binning trick that I found doesn't seem to work the way I expect.  So for now, assume that you only have 1x1 and 2x2 binning with the QHY600, and select the 1x1 binning option in the driver.

Using the blinker program and gain setting 0, offset 10, I found a good distance from the LED to the camera, and ran exposures from 5 blinks to 950 blinks.  I picked a 1M pixel window near the center of the sensor, and calculated the mean and standard deviation for each image.  We can't attach files or graphs to forum posts any more, so I will give you the comma-separated dataset embedded at the end of this post, if you want to import it into your favorite spreadsheet and play with the numbers.  As part of the testing, I redid the gain/readnoise calculation, and got 0.79e-/ADU as before, but with a slightly better 3.01e- readnoise.  The bias level is 135 counts at 0C operating temperature, about what I desire.  The standard deviation of the bias is 4.7 counts, so the system should never go below zero counts.

I also examined the image for major defects, and saw none - the sensor seems nearly perfect, and with no amplifier glow.  I've been in this business far too long, with my first CCD image taken in 1984, and the quality of today's sensors still blows me away.

Basically, a linear fit to values below 50K counts and extended upwards shows near linearity up to at least 60K ADU.  My guess is that it is linear up to the maximum count, as what happens near maximum is that the uniformity of the illumination becomes a problem, with some pixels saturating before others and throwing off the curve.  There are several tricks that you can play to examine this dataset more, such as to subtract the linear fit and just look at the residuals, look at the count rate at each exposure rather than the total count, look at the standard deviation plot, etc.  I won't go into these right now since I can't attach plots, but will later when HQ finds a solution.

So - at full resolution, this device has low noise and is most likely linear to the maximum count rate.  I like it!  On the next post, I'll look at some of the dark current characteristics.

Arne

Exposure (number of LED pulses), mean, standard deviation
5,543.1,35.99
10,949.7,44.72
15,1350,48.33
20,1752,52.97
30,2558,62.26
40,3363,71.09
50,4168,78.39
60,4972,86.4
80,6581,100.1
100,8185,112.4
150,12189,142.1
200,16191,169.3
250,20184,195.2
300,24180,221.4
400,32150,273.4
450,36128,299.3
500,40110,325.8
550,44079,352.0
600,48051,379.1
650,52016,406.2
700,55975,430.4
750,59925,456
800,63248,472.2
850,65534,0
900,65534,0
950,65534,0
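If you want to reproduce the fit described above, here is a minimal numpy sketch, assuming the table has been saved to a text file (the name linearity.csv is just a placeholder) with the header line left in place:

    import numpy as np

    # Columns: number of LED pulses, mean counts (ADU), standard deviation (ADU).
    pulses, mean, std = np.loadtxt("linearity.csv", delimiter=",",
                                   skiprows=1, unpack=True)

    # Fit only the clearly unsaturated part of the curve.
    ok = mean < 50000
    slope, intercept = np.polyfit(pulses[ok], mean[ok], 1)
    residual = mean - (slope * pulses + intercept)

    for p, m, r in zip(pulses, mean, residual):
        print(f"{p:5.0f} pulses: {m:8.1f} ADU, residual {r:+8.1f} ADU")
    # The residuals should stay small up to roughly 60K ADU and then grow rapidly
    # as pixels run into the 65,535 digital ceiling.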

Affiliation
American Association of Variable Star Observers (AAVSO)
STD DEV

If the standard deviation goes to zero near 65500 or so, does that mean the counts from all the pixels are 65534, i.e., saturated? So you have found an approximate saturation point?

Affiliation
American Association of Variable Star Observers (AAVSO)
Digital saturation

Yes, that low standard deviation is telling you that most/all of the pixels have the same value, which in this case would be 65,535, or nearly so.  That has nothing to do with the full-well capacity of the chip - it is "digital saturation", i.e. maxing out the A/D converter.  The chip has a 16-bit analog-to-digital converter, and the largest unsigned integer that can be stored in 16 bits is 2^16 - 1 = 65,535.  So you'll never see a pixel value higher than that, no matter how many electrons are recorded. 
 

Eric

Affiliation
American Association of Variable Star Observers (AAVSO)
Bias subtraction?

Arne, have the counts in the data you posted already been bias-subtracted?  The reason I ask is that I get a y-intercept value of 150-160 (depending on exactly where I cut off the fit on the high end) when I do a linear fit to these data, excluding the points above ~30000-40000 counts.  That's not exactly your quoted bias value of 135, but it's close enough to make me wonder if that's part of what I'm seeing in the fit.

 

Thanks!

Eric

Affiliation
American Association of Variable Star Observers (AAVSO)
bias subtraction

Hi Eric,

The counts have not been bias subtracted.  In theory, the bias level should be the same throughout the series, and so you should just see a non-zero y intercept.  The reason for not subtracting is that the A/D converter sees both real signal and the bias level, and so the raw saturation level depends on both.  That way, when I see a star on the sensor in real-time, I don't have to bias-subtract in my head to know whether it is near saturation.  It is not so important when the bias level is 135, but in some commercial CCD cameras, I've seen biases of several thousand counts.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Arne: thank you

Arne: Your posts are stupendously helpful.

Several of us very busy CCD observers have been asking "if our CCD chips fail, do we just get out of the photometry business?" Your posts are causing more stir than you might realize, already getting us closer to much happier end-of-CCD prospects--and sooner than I at least expected. Your details are answering many concerns we've had about practical CMOS photometry. So yes, please do keep us informed of your wonderful progress.

Cheers, Eric

Affiliation
American Association of Variable Star Observers (AAVSO)
QHY600 tests - dark current

Getting good dark frames is a lot harder than you might think.  Remember, these are *really* sensitive cameras, so any light leak, averaged over many seconds, can be seen visually on the displayed image long before it is very obvious in the raw counts.  The QHY600 comes with a commercial DSLR camera-style lens cap with ears on two sides that click onto the front of the camera.  You'd think that would be sufficient, but a 300-second dark taken in a somewhat dark room shows an obvious light leak, probably from those clamping ears.  With the Bright Star Monitor E-180 telescopes and the ZWO ASI183 CMOS camera, we have a dark filter in the filter wheel.  Even at night, you can see an obvious vertical streak unless you also use the nicely fitting Takahashi tube cover.  This is why I highly, highly recommend that you only do dark frames at night, preferably on the telescope so that all electrical connections are the same as when you are imaging, and then with all kinds of covers.  Close the enclosure roof.  Put tube or mirror covers in place.  Turn off every light that is obvious and try to cover the remaining ones.  You want to measure what is internal to the sensor, not any external light.

For the QHY600, you can only cool about 30C below ambient.  If you are testing in the lab at 20C ambient, you can only go to -10C at near 100% TEC power.  For that reason, I've run my tests at -5C.  I tend to do a series of bias frames to get a decent master bias, and then do a set of dark frames with increasing exposure.  Then I subtract the master bias from each dark for plotting.  I typically do darks with logarithmically increasing exposure times, as the usual CCD structure yields increasing dark current that is a power law.

The whole frame results from the -5C tests:

exposure,mean,median,stddev,min,max
0,134.3,134.2,2.628,45.5,696
1,0.8194,0.7175,11.74,-160.2,64838
3,0.7794,0.6077,18.31,-152.0,64838
10,0.9068,0.6248,38.18,-173.2,64838
30,1.215,0.7795,68.92,-153.7,65394
100,1.978,1.716,120.7,-157.7,65403
300,4.598,-1.97,200.4,-153.2,65408

The first thing to note is that at -5C, the dark current is about 4.598 counts in 300 seconds.  With a gain of 0.79e-/ADU, this means a dark current of 0.012e-/pix/sec.  QHY gives 0.0046e-/pix/sec at -10C and 0.0022e-/pix/sec at -20C.  So using their values, I'd expect a dark current of ~0.007e-/pix/sec at -5C (they actually include a plot that shows this), and my results are "close enough".
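The conversion from the table above to a dark-current rate is just the measured gain times the mean bias-subtracted dark counts divided by the exposure time; a quick Python check of the -5C numbers, using the 0.79e-/ADU gain found earlier:

    # Convert mean bias-subtracted dark counts to e-/pixel/sec (sketch only).
    gain_e_per_adu = 0.79    # from the Janesick test above

    for exposure_s, mean_adu in [(30, 1.215), (100, 1.978), (300, 4.598)]:
        rate = gain_e_per_adu * mean_adu / exposure_s
        print(f"{exposure_s:4d} s: {rate:.4f} e-/pix/s")
    # The 300 s frame gives ~0.012 e-/pix/s at -5C, as quoted above; the shorter
    # exposures still carry a roughly constant ~1-count residual, so their
    # apparent rates come out higher than the true dark current.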

I can't attach an image, so trust my description:  there is no amplifier glow on this sensor.  It is flat across the entire field of view.  This is the first CMOS sensor that I've tested that is free of amplifier glow.  I don't know if this is due to special circuitry provided by QHY, or whether any IMX455 camera will exhibit this same lack of amplifier glow.  Perhaps someone with a ZWO ASI6200 can comment.

There are a number of hot pixels.  You can see this from the "max" column, where even at short exposures, the maximum count is close to saturation.  For other sensors, I've plotted each individual pixel, and you can see the majority of the pixels clustered near zero slope with time, and then families of pixels with increasing slope (the hot pixels).  Dark frames actually correct most of these, as long as they don't saturate.  When the AAVSO allows attachments, I'll generate one of these plots and post it.  In general, the colder you run a camera, the fewer the number of hot pixels.

The bias level changes as a weak function of operating temperature.  In 2x2 mode, in a lab with more-or-less constant temperature, the bias level is:

TEC_temp,bias_level
-10,536
-5,540
0,544
5,549
10,551

So in addition to the increasing number of hot pixels and their strength with temperature, the average bias level also shifts.  This is another reason to take new darks every time you change the operating temperature.  For many signal-chain electronics in the CCD world, the bias is also a function of the ambient temperature.  However, here CMOS technology "wins".  The signal chain and A/D converter are ON the sensor itself, and are cooled to the same TEC temperature.

So bottom line:  dark current looks excellent with this sensor.

Arne

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Negative median at 300s?

300,4.598,-1.97,200.4,-153.2,65408

Is that negative median dark count in the 300s exposure a typo?   If not, what do you think is going on there?   The mean looks reasonable given your other values. 

Given the slight temperature dependence of the bias, do you think using the overscan region for bias subtraction might be a good option?  Or is there enough spatial structure in the bias that this wouldn't work well? 

Thanks, 

Eric

Affiliation
American Association of Variable Star Observers (AAVSO)
overscan columns

Hi Eric,

Not sure what is going on with the median for the 300sec exposure.  I have similar dark sets at -10, 0, +10C and will be looking at them.

Overscan is usually used for CCD systems where the signal chain is off-chip and not usually cooled (ambient temperature).  It does help to compensate for any level shift.  The IMX455 has two sets of overscan pixels, and I'll be talking about them in a subsequent post.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
re: hot pixels & psf phot

Hi, Arne:

If the hot pixels are steady and can be dark-corrected well short of saturation, I understand that they aren't a huge problem.

But if hot pixels do saturate on long exposures (probably not necessary if one can easily stack), or if they aren't steady or otherwise are hard to dark-correct, should that drive us from aperture photometry toward PSF photometry (after mapping and masking the bad pixels)?

Thx

Affiliation
American Association of Variable Star Observers (AAVSO)
psf photometry

Point-spread-function fitting can avoid the bad pixels, since you are fitting something like a two-dimensional Gaussian to the profile, and you can either mask the bad pixels or fill them in by interpolating from nearest neighbors, as long as the fit is for the entire star profile.  So it is an alternative.  After I see just how many hot pixels saturate in 300-second exposures (about as long as I expose, since you will be sky-noise limited and can stack without penalty), we can consider photometry techniques.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
stacking time penalty?

Sounds right on PSF.

Regarding stacking: since the CMOS download times are so short, and no filter change, and an off-axis guider can just keep running while taking the stack, CMOS stacking seems to have no meaningful time penalty at all. So given some sophisticated outlier rejection in post-processing, a 600-second exposure taken as 20 x 30-seconds stacked should have essentially no saturated hot pixels and very little effect from cosmic rays or satellites, all costing just a few seconds' extra overhead while observing. In all, stacking everything longer than maybe 30-60 seconds' exposure looks like a bargain for CMOS. Am I thinking about this correctly?

Affiliation
Variable Stars South (VSS)
The problem for me is that I

The problem for me is that I do not know of any software that will stack a set of images on the fly, then take another set of images, stack them, and so on.  For time series photometry, that would be the requirement.

Roy

Affiliation
American Association of Variable Star Observers (AAVSO)
Let's tell them what we want

Oh, they will.

They'll have to provide realtime stacking, if my sense of the value of CMOS stacking is at all correct. Straight stacking (without registration) requires insignificant computation time, 60 megapixels or not.

It's early days. So we need to tell the software vendors and CMOS driver authors what we need, not just accept whatever's convenient to themselves.

Affiliation
American Association of Variable Star Observers (AAVSO)
Actually, the Pro version of

Actually, the Pro version of SharpCap (https://www.sharpcap.co.uk/ , ~$10/year) does live stacking (with or without registration), as well as dark subtraction and flat fielding on the fly when needed. Data can be saved as FITS files. It also supports a Python-based programming language (IronPython) to control all/most aspects of data acquisition.

The only "issue" is that it currently does some bit-shifting with stacked images. Fortunately, correcting for that is plain integer arithmetic (division by 2^N)...

The last time I tried TheSkyX live stacking (this spring), to my disappointment it turned out to be kind of a toy feature.

Best wishes,
Tõnis

Affiliation
American Association of Variable Star Observers (AAVSO)
sharpcap

One of the issues I had with some of these systems that were originally designed for lucky imaging is that the stack-on-the-fly aspect was usually open ended.  You started stacking, and then pressed a key to stop the process.  For photometry, you need the ability to specify the total exposure time and the sub exposure time so as to prevent saturation in the subs, but stop the process with a fixed total exposure.  Does Sharpcap provide this option?  Too bad ACP is tied so strongly with MaximDL!

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
So far I have used Sharpcap

So far I have used SharpCap Pro manually; there it is possible to set a time limit for automatic stacking, e.g. 1 minute when shooting 0.1-second exposures. Then after every 600 stacked frames the stack is saved and a new one is started. That control is also fully doable using Python code.

I had a discussion with Bob Denny, and it seems that in principle it could be possible to command SharpCap from ACP using scripts.

Best wishes,
Tõnis

Affiliation
American Association of Variable Star Observers (AAVSO)
Stacking on the fly

Arne and I have worked with FLI, Maxim, and DC3 Dreams to incorporate this feature.   Maxim 6.22 now incorporates the Stack on the Fly feature.  I have extensively bench tested it with the FLI Kepler KL400 sCMOS camera and it works very nicely.  Not sure if other camera drivers would work without being updated, but the ASCOM driver is now set up to Stack on the Fly.  

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
Linearity

Arne, your QHY600M (CMOS) is far more linear than my SBIG ST-9 (CCD).

As part of my personal "photometry improvement program," I put together a linearity curve for the ST-9, and then built a little Python "aperture photometry simulator" to give some insight into how nonlinearity in the low- and middle-end of the transfer curve affects photometric accuracy. With the data that you provided in your earlier post, I extracted the QHY600M linearity error curve and added that to my simulator. The linearity errors for the QHY600M are a full order of magnitude smaller than those of my ST-9.

That little simulator lets me translate linearity error into photometric error. When the comp star and the variable are equal in brightness, there is no nonlinearity error. But as the stars differ in brightness, nonlinearity increasingly affects photometry. With a 7-magnitude difference (e.g., variable star mag 5 and comp mag 12 -- certainly not ideal, but not unheard of with an Ic filter on a cool Mira), the QHY600M nonlinearity photometry error is about 0.005 mag, while my ST-9 nonlinearity for the same pair of stars is about 0.025 mag according to the simulator (which I haven't been able to figure out how to validate).
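For anyone curious, the heart of such a simulator is very simple. Here is a stripped-down Python sketch; the transfer curve in it is a made-up 1% quadratic droop, not my measured ST-9 curve or Arne's QHY600M data, so the printed number is only illustrative:

    import numpy as np

    def measured_counts(true_counts, nonlin_fraction=0.01, full_scale=65535.0):
        """Hypothetical transfer curve: a small droop that grows toward full scale."""
        return true_counts * (1.0 - nonlin_fraction * true_counts / full_scale)

    def nonlinearity_mag_error(true_target, true_comp):
        """Error in the (target - comp) magnitude caused by the curve above."""
        true_ratio = true_target / true_comp
        meas_ratio = measured_counts(true_target) / measured_counts(true_comp)
        return -2.5 * np.log10(meas_ratio / true_ratio)

    # Bright target near full scale and a comp star 7 magnitudes fainter:
    target = 50000.0
    comp = target / 10 ** (7.0 / 2.5)
    print(f"nonlinearity error: {nonlinearity_mag_error(target, comp):+.4f} mag")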

I've been unsuccessful in correcting linearity errors in order to improve photometry. (Well, corrections have worked okay when done the same night as the linearity curve was measured, but don't seem to work as well on other nights.) Literature I've found suggests that both CMOS and CCD non-linearity is caused by physics that has a temperature dependence. I have yet to take the time to measure how the transfer curve changes with temperature, but maybe that's the next step for me.

Do you have an accuracy error budget target for this camera? Do you know whether you will need nonlinearity corrections?

- Mark

Affiliation
American Association of Variable Star Observers (AAVSO)
QHY dark tests, continued - and first light!

Here are some more dark tests:

at 0C

exposure,mean,median,stddev,min,max
1,0.7623,0.653,12.56,-157.2,64599
3,0.7615,0.6363,20.94,-168.5,64599
10,0.9935,0.5188,44.01,-188,65391
30,1.532,1.241,78.34,-156.8,65400
300,6.753,-1.235,229.9,-180,65407

The median is still wonky.  I've found this before on other sensors - the median is not a good statistic for dark current.  It could be that hot pixels skew the distribution; it could be that some internal processing is done by Sony.

Comparing 300 second exposures at different temperatures:

temp,exposure,mean,median,stddev,min,max
-10,300,3.537,-1.792,170.3,-184.2,65411
-5,300,4.598,-1.97,200.4,-153.2,65408
0,300,6.753,-1.235,229.9,-180,65407
10,300,14.15,1.806,294.2,-167,65412

You can see from this that the mean pixel value increases from 3.5 counts to 14.15 counts over a 20 degree temperature change.  Note that 14.15 counts/pixel/300sec is still only 0.04e-/pix/sec, a very small value.  These sensors have very low dark current.  However, any time that you change operating temperature, you should take new darks.

I cannot easily test for any bias level change with ambient temperature, but I doubt there is much variation.  The signal chain electronics (amplifier, ADC) are on-chip, and so are cooled and regulated with the TEC just like the light-sensitive region.  With CCDs, the signal chain electronics are external to the sensor and not usually cooled to the same temperature.  I'll devise a test for this sometime soon.

For on-sky tests, I imaged a carbon star from this list:

https://skyandtelescope.org/wp-content/uploads/Carbon-Stars.pdf

which is a really handy list to keep around.  I chose T Lyr for these tests, since it passes nearly overhead and is bright.  I first checked for light leaks in the filters I was using, especially the U and B filters, which are the usual suspects.  Using a really red star like a carbon star provides lots of flux in the red, where a leak might occur, along with minimal flux in the blue.  I took exposures from Johnson U through "Sloan" Y and saw no leak; the star was appropriately faint at U and B.  If there were a light leak, the star would appear brighter than expected in the bluer filters.

The detector performs quite well, especially in the red.  I saw no electronic artifacts.  CMOS sensors are nice in that you never see a blocked column, because each pixel is addressed individually.  However, each row has its own signal chain and A/D converter, so in reality each row has its own gain and read noise.  Manufacturing tolerances are extremely good, so the row-to-row differences are small, but on most CMOS sensors you will see some horizontal structure in an image due to this effect.  It mostly dark-subtracts and flat-fields out.
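As a rough illustration of why that structure calibrates out: the fixed row-to-row pattern is present in the darks and flats as well, so standard calibration removes most of it.  A minimal sketch using astropy with assumed file names follows; this is not the AAVSOnet pipeline.

    # Minimal calibration sketch (assumed file names): dark subtraction and
    # flat-fielding remove most of the fixed row-to-row structure, since the
    # same pattern is present in the calibration frames.
    import numpy as np
    from astropy.io import fits

    light = fits.getdata("light.fits").astype(np.float32)
    dark = fits.getdata("master_dark.fits").astype(np.float32)   # same exposure/temperature
    flat = fits.getdata("master_flat.fits").astype(np.float32)   # dark-subtracted master flat

    calibrated = (light - dark) / (flat / np.median(flat))       # normalize flat by its median
    fits.writeto("light_cal.fits", calibrated, overwrite=True)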

What I did see with this red star was a slight residual image.  If you totally saturate the star at I-band, then move an arcminute and take another exposure, you see a faint dot at the original position.  The dot disappears on the next exposure, so it only hangs around briefly.  This is different from residual bulk image in CCDs, but it means I need to study the effect more closely to see both the conditions under which it happens and how to mitigate it.  Right now the residual image is at such a low level, and appears only when you saturate the detector, that I don't think it would affect photometry at all.  I'll attach some images later, when the AAVSO allows attachments.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
QHY600 - residual image

There are lots of individual tests you would need to do to fully characterize a defect like a residual/ghost image.  I just did a couple of them, and would have to do more careful analysis to fully understand the issue.  The setup is gain 0, offset 10, high-gain read mode, binned 2x2.  For each filter set I use the same exposure time and dither a few arcmin between exposures.
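For anyone trying to reproduce this setup, here is roughly how that configuration maps onto the standard ASCOM camera properties.  This is a sketch only: the ProgID and the exposure time below are assumptions for illustration, and this is not the actual control software on the telescope.

    # Hedged sketch using the standard ASCOM camera interface (Gain, Offset,
    # ReadoutMode, BinX/BinY, StartExposure, ImageReady, ImageArray).
    import time
    import win32com.client   # Windows/ASCOM platform assumed

    cam = win32com.client.Dispatch("ASCOM.QHYCCD.Camera")   # assumed ProgID
    cam.Connected = True
    cam.ReadoutMode = 1        # mode 1 = the high-gain mode discussed in this thread
    cam.Gain = 0
    cam.Offset = 10
    cam.BinX = cam.BinY = 2    # 2x2 binning

    cam.StartExposure(60.0, True)    # 60 s light frame (illustrative duration)
    while not cam.ImageReady:
        time.sleep(0.5)
    img = cam.ImageArray             # pixel data from the driver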

- using SI (essentially Ic) and saturating T Lyr, a very red carbon star, shows a residual image on the 2nd exposure.  On the 3rd exposure, that residual image disappears and is not measurable.

- using SI and NOT saturating T Lyr (peak counts = 20K) yields 271K counts in the measuring aperture.  On the 2nd exposure, a residual image is present (peak counts = 527, with 20K counts in the measuring aperture).  On the 3rd exposure, that residual image disappears and is not measurable.

- using SR and saturating, a residual image is seen on the 2nd exposure with peak 531 and aperture flux = 11K counts.  It is not present on the 3rd exposure.

- using V and saturating, no residual image is seen.

Based on these simple experiments, I'd say the residual image is stronger at redder wavelengths.  It only appears on the first image after the offending star, and then disappears.  There are lots more things to check, such as:

- I don't have a blank filter in the wheel, so can't check for residual image with no skylight on the image.

- I only used identical exposure times between frames with the same filter, dithering with the telescope.  I don't know whether the residual image appears if you take a short exposure after the saturated one.

- the field is pretty blank at SI, and so I can't tell whether this effect is only for very red stars.

- I don't know if this is an artifact of 2x2 binning, and need to perform the test at 1x1 full resolution.

- I don't know if this is an artifact of a particular readout mode.

So much more testing will be required.  For now, I'm going to assume that any SR or SI exposure is going to leave a residual image, and make sure that I either dither between images, use a filter ordering that places the red filters at the end of the set, or insert a throwaway image between fields (like a pointing exposure).
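To make that concrete, here is an illustrative sketch of the kind of sequencing I have in mind.  The slew/expose/dither functions are just placeholders standing in for whatever the scheduler actually calls, and the exposure times are made up.

    # Illustrative mitigation sketch: red filters last, dither between
    # exposures, throwaway pointing frame when arriving at a new field.
    FILTER_ORDER = ["B", "V", "SG", "SR", "SI"]   # bluest first, residual-prone reds last

    def slew(field):      print(f"slew to {field}")          # placeholder
    def dither(arcsec):   print(f"dither {arcsec} arcsec")   # placeholder
    def expose(filt, t):  print(f"expose {t:.0f} s in {filt}")

    def observe_field(field, exptimes):
        slew(field)
        expose("V", 5)                    # throwaway/pointing frame between fields
        for filt in FILTER_ORDER:         # red filters come last in the sequence
            expose(filt, exptimes[filt])
            dither(30)                    # so any residual lands off the star

    observe_field("T Lyr", {"B": 120, "V": 60, "SG": 60, "SR": 30, "SI": 30})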

I'll take the camera off the telescope, insert a blank filter into the wheel, and do some more tests.  I'd like to understand this phenomenon before shipping the camera to New Zealand.  I'll also experiment with the overscan and let you know what I think about using those blank pixels, and quantify the hot pixels a bit more.  All of these tests are easier to do here than remotely in New Zealand, so delivery will be delayed another week!

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Clarification on the residual image tests

Hi Arne,

One clarification on the residual image tests you've done.  You're dithering the telescope after each exposure, so I'll say that the star is at X1, then X2, then X3 in exposures 1, 2, and 3.  When you say that the residual image is not present on the third exposure, I assume that what you mean is that the residual at X1 is no longer there - correct?  But in exposure 3, you would still see a residual at X2, where the star was on the previous exposure? 

Thanks for clarifying,

Eric

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Arne, it has been very

Arne, it has been very interesting to follow your test results.  In the very first post you mentioned other things... so I'd like to ask whether you have noticed any signs of fringing in the NIR (e.g., an Ic or even Rc filter, or their SDSS counterparts) with that camera.  I'm considering that sensor as a very low-cost (*grin*) possibility for an echelle spectrograph, mainly because of its very low read-out noise and appealing sensor size.  While it is possible to deal with fringing in photometry, in spectroscopy it's just an awful nuisance.  A deep-depletion CCD would be the second, and not exactly cheap, option...

Best wishes,
Tõnis

Affiliation
American Association of Variable Star Observers (AAVSO)
fringing

Hi Tõnis,

I have not seen any fringing in r' or i' bandpasses.  My z' filter is still on backorder, and in the past with thinned CCDs, it was z' or clear that showed fringing the most.  I do have a Y filter, and on the few images that I've taken with it and the QHY600, I did not see fringing.  However, I did see some strange vertical/horizontal "diffraction" spikes on bright stars with the Y filter.  I have not traced down their origin, but I'd guess it has to do with microlenses.

Fringing on CCDs is due to interference from the reflection off of the gate structure after the photons have passed through the silicon pixel, since the gates are behind the sensitive area.  In a CMOS sensor, there are electronics covering part of the back of the pixel, but no wide areas like gates.  I would assume that any fringing will be minimal.  That also means the QE in the red might be a bit lower.

I may leave an open slot in the filter wheel to look for fringing - thanks for reminding me to check for this!

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
I was asking about fringing

I was asking about fringing because apparently that sensor is back-side illuminated (though I'm not sure whether the pixels also have microlenses).  Unless the substrate is fairly thick (30+ µm, as deep-depletion CCDs have), there may be some fringing.  Indeed, if there is fringing, unfiltered observations would show it as well.

Sodium street lamps are quite good nearly-monochromatic sources in the NIR (around 820 nm) for maximizing the fringing signal... ;-)

Best wishes,
Tõnis

Affiliation
American Association of Variable Star Observers (AAVSO)
QHY residual image issue

Before getting to the last couple of posts on this thread, I'd like to mention that the residual image issue I found with the QHY600 was solved by updating to their latest driver.  They had heard about this issue several months ago and fixed the problem.  In my case, the on-sky testing was done with an older driver, as I had an existing QHY600 system on the telescope and the driver dated from that prior installation.

With help from Mr. Ma, one of QHY's software engineers, I was able to install their latest camera driver (dated 20-06-28), and the residual image went away.  Yay!

I'll send out a new installment on this thread in a day or two.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Hi Arne

Hi Arne

I was hoping to work this all out on my own by following your threads, but the move from CCD to CMOS was already feeling like going from piloting a fixed-wing plane to a helicopter, and now it feels like piloting a helicopter with individual controls for each rotor blade...

I have a new QHY600 and am looking to see how it compares with my much, much more expensive Andor iKon-L 936 for photometry, specifically exoplanet transit work.

We are talking a 13.5 µm Andor pixel versus 3.76 µm on the QHY, so I was looking at 3x3 or 4x4 binning, drawn by the low read noise and the high effective full well depth that QHY advertises, and possibly using on-camera or in-driver stacking to maximise my SNR.

My scopes are 0.4-0.5 m RCOS f/8, and my exposures with the Andor are typically 20 to 180 seconds, with a sweet spot of 60 seconds on mag 12-13 host targets.

My initial trials with the QHY have been done with the 0/10 gain/offset that I had seen discussed as a starting point, but of course on top of that one needs to choose a suitable readout mode.

I note from an earlier post that you said you were using mode 1, gain 0, offset 10.

After your testing and use of the camera for photometry, is this still your recommendation?  And would that change, do you think, for what I am doing?

Your comment that binning via the driver gives no SNR improvement has me looking into software binning, as it seems the only way I stand a chance of comparing the 3x3 or 4x4 binned performance of the 3.76 µm pixels with the 13.5 µm, 150,000 e- full well of the Andor...

That has me looking at AstroImageJ and PixInsight as possible answers for that...
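Something like the numpy block-summing below is what I have in mind by software binning.  Just a sketch of the idea; I'm not claiming this is how AstroImageJ or PixInsight implement it.

    # Software binning sketch: sum n x n blocks of the 3.76 um pixels and keep
    # the result as 32-bit float so a 16-bit range can't overflow.
    import numpy as np

    def software_bin(img, n):
        h, w = img.shape
        h2, w2 = h - h % n, w - w % n            # trim edges to a multiple of n
        img = img[:h2, :w2].astype(np.float32)
        return img.reshape(h2 // n, n, w2 // n, n).sum(axis=(1, 3))

    # e.g. 4x4 binning gives ~15 um effective pixels from 3.76 um ones:
    # binned = software_bin(raw_frame, 4)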

Your updated thoughts would be most welcome, even if just to confirm the mode and gain...

Thanks,

Gavin