I typically shoot at native resolution with my ASI2600MC for any single target. Recently, I started collecting data for my first attempt at a mosaic: a two-panel spread over the Eastern Veil Nebula. With frames from this camera being about 6000×4000 pixels, and two panels’ worth of data to collect, I decided to bin the exposures. Binning should increase the signal-to-noise ratio (SNR) of each exposure, allowing me to spend less time per panel while still achieving acceptable results.
Binning is a method of combining adjacent pixels on a camera sensor. Just as averaging multiple exposures increases the signal-to-noise ratio of a stack, binning pixels within a single exposure increases the SNR of that exposure. The trade-off is a loss of spatial resolution, since multiple pixels are combined into a single pixel.
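As a quick sanity check of that claim, here is a minimal NumPy sketch (not part of my workflow, just an illustration with made-up numbers): averaging each 2×2 block of a noisy monochrome frame roughly doubles its SNR, since the noise averages down by the square root of the four pixels combined.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a flat monochrome frame: constant signal plus Gaussian noise
# (hypothetical values, chosen only to illustrate the SNR behavior).
signal, sigma = 100.0, 10.0
frame = signal + rng.normal(0.0, sigma, size=(1024, 1024))

# 2x2 binning by averaging: reshape so each 2x2 block gets its own axes.
binned = frame.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print(frame.mean() / frame.std())    # SNR around 10
print(binned.mean() / binned.std())  # SNR around 20: about a 2x improvement
```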
When this topic comes up on discussion boards, everyone always says “don’t bin in camera, do it in post.” While the ZWO camera drivers, at least, will properly bin color images, the suggestions others give for how to “bin” in post seem either counter-productive or just wrong.
A color camera sensor has a Bayer matrix in front of its pixels: each pixel is assigned a color by a tiny filter deposited in front of it. The filters are typically arranged in blocks of 2×2 pixels, with the three primary colors distributed across the four pixels, commonly one red, one blue, and two green (the RGGB pattern, for example).
There are two ways that you can bin a color camera. The most basic is to average all four pixels of the 2×2 Bayer block. This essentially removes the color data, as all of the colors are mixed together in the new pixel. It could be used to create a luminance image if the pixels were weighted properly.
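To make that concrete, here is a toy example (hypothetical pixel values, RGGB assumed): averaging a whole 2×2 Bayer block produces a single mono value with the colors mixed together.

```python
import numpy as np

# One RGGB Bayer block with made-up values:
#   R=0.9  G=0.5
#   G=0.5  B=0.2
block = np.array([[0.9, 0.5],
                  [0.5, 0.2]])

# A plain average mixes all three colors into one luminance-like value.
luma = block.mean()
print(luma)  # 0.525 -- the color information is gone
```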
The other method is to consider a 4×4 matrix of pixels and to average together the values at corresponding Bayer positions from each of its four 2×2 blocks. For example, the four red pixels of the 4×4 block are averaged into the single red pixel of a new 2×2 block.
This method is proper 2×2 color camera binning, as it averages four pixels together while retaining the color information. You get roughly a 2× increase in SNR (the square root of the four pixels averaged) while losing half the spatial resolution in each dimension.
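For anyone who wants to experiment outside PixInsight, here is a NumPy sketch of this CFA-preserving binning (the function name and test values are mine, not from any library):

```python
import numpy as np

def bin_cfa_2x2(cfa):
    """2x2-bin a Bayer mosaic while preserving the CFA pattern.

    Each output pixel is the average of the four same-color pixels that
    sit at the same Bayer position within a 4x4 block, so the result is
    still a valid Bayer mosaic at half the size. Assumes the dimensions
    are multiples of 4.
    """
    h, w = cfa.shape
    out = np.empty((h // 2, w // 2), dtype=float)
    for dy in (0, 1):        # Bayer row offset
        for dx in (0, 1):    # Bayer column offset
            plane = cfa[dy::2, dx::2]  # all pixels at one Bayer position
            ph, pw = plane.shape
            # average 2x2 neighborhoods within this single-color plane
            out[dy::2, dx::2] = plane.reshape(ph // 2, 2, pw // 2, 2).mean(axis=(1, 3))
    return out

# Tiny check on a 4x4 mosaic: each output pixel averages four same-color
# inputs, e.g. the new "red" pixel is mean(0, 2, 8, 10) = 5.
mosaic = np.arange(16.0).reshape(4, 4)
print(bin_cfa_2x2(mosaic))  # averages: [[5, 6], [9, 10]]
```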
As you can see, every pixel is missing information for the colors it is not assigned. During the process of Debayering, the monochromatic image is converted into a three-channel color image by interpolating the color values each pixel does not record. There are many methods for interpolating the missing colors from neighboring pixels of that color, but all of them are some kind of compromise, since they are guessing at data that was never captured.
The most common method of post-capture binning that I see suggested is to Debayer the exposures as normal, then simply resample them, converting each 2×2 block of pixels into a single pixel. Take the red channel: you are averaging one real data value with three synthetic red values generated by whatever interpolation was used. Except for green, you are never binning more than one real data value, so the result is an arbitrary new value heavily weighted by the interpolation algorithm.
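As a toy illustration of the problem (the numbers below are made up): for the red channel, downsampling a Debayered 2×2 block averages one measured value with three interpolated guesses.

```python
import numpy as np

# In an RGGB block, only one of the four red values after Debayering is
# real; the other three were invented by the interpolation (toy numbers).
measured_red = 0.80
interpolated_reds = np.array([0.60, 0.70, 0.65])  # hypothetical interpolator output

# Resampling the Debayered image averages all four together:
resampled = (measured_red + interpolated_reds.sum()) / 4.0
print(resampled)  # 0.6875 -- only 25% of this value comes from measured data
```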
Another Debayering method suggested for achieving binning is Superpixel. This converts each 2×2 block into a single pixel by taking the red value, the blue value, and the average of the two green values. This is at least sound, in that it uses only real data to create the color image, but you gain no SNR increase except from averaging the two green pixels.
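Superpixel Debayering is easy to sketch as well (RGGB assumed; this is my illustration, not PixInsight’s implementation):

```python
import numpy as np

def superpixel_debayer(cfa):
    """Convert an RGGB mosaic into an RGB image, one pixel per 2x2 block.

    R and B are copied straight through; only the two G pixels are
    averaged, so green is the only channel that gains any SNR.
    """
    r  = cfa[0::2, 0::2]
    g1 = cfa[0::2, 1::2]
    g2 = cfa[1::2, 0::2]
    b  = cfa[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# One RGGB block with toy values becomes a single RGB pixel:
block = np.array([[0.9, 0.5],
                  [0.4, 0.2]])
print(superpixel_debayer(block))  # one pixel: R=0.9, G=0.45, B=0.2
```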
The proper Bayer matrix binning in PixInsight can be done using the SplitCFA, IntegerResample, and MergeCFA processes. However, these cannot be performed as a batch using the ProcessContainer. I’ve written a PixelMath process that will perform proper 2×2 binning on calibrated, but not yet Debayered, color images.
RGB/K Expression:

    mx = x() % 2;
    my = y() % 2;
    x2 = 2 * x() - mx;
    y2 = 2 * y() - my;
    ( p( $T, x2, y2 ) + p( $T, x2 + 2, y2 ) + p( $T, x2, y2 + 2 ) + p( $T, x2 + 2, y2 + 2 ) ) / 4

Symbols: x2, y2, mx, my
This bins each 4×4 block of pixels into a 2×2 Bayer pattern, with the binned result landing in the top-left quarter of the image, so you will need DynamicCrop to crop the image to half its original width and height afterward. If you are using an ASI2600MC, the following Process Icon will work as-is, but the DynamicCrop process can easily be modified to fit your camera’s resolution.
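If you want to convince yourself of what the expression does without opening PixInsight, the index arithmetic can be checked in a few lines of Python (my sketch of the same mapping):

```python
# Mirror the PixelMath index math: output pixel (x, y) averages the four
# pixels at (x2, y2), (x2+2, y2), (x2, y2+2), (x2+2, y2+2).
def source_indices(x, y):
    mx, my = x % 2, y % 2
    x2, y2 = 2 * x - mx, 2 * y - my
    return [(x2 + dx, y2 + dy) for dy in (0, 2) for dx in (0, 2)]

# Every source pixel shares the output pixel's Bayer parity, so the
# top-left half of the result is a valid half-size Bayer mosaic:
print(source_indices(0, 0))  # [(0, 0), (2, 0), (0, 2), (2, 2)] -- four pixels of one color
print(source_indices(1, 0))  # [(1, 0), (3, 0), (1, 2), (3, 2)] -- four of the next color
```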
The following shows the same image natively Debayered, and binned with the above PixelMath before being Debayered. The STF auto stretch is used for display.
The binned image appears to have better SNR. Running the SNR script on each image shows about a 6 dB increase from binning:
run --execute-mode=auto "/opt/PixInsight/src/scripts/SNR.js"
Processing script file: /opt/PixInsight/src/scripts/SNR.js
native
Calculating SNR
* Channel #0 SNR = 1.492e+03, 31.74 db
* Channel #1 SNR = 3.171e+03, 35.01 db
* Channel #2 SNR = 7.792e+02, 28.92 db

run --execute-mode=auto "/opt/PixInsight/src/scripts/SNR.js"
Processing script file: /opt/PixInsight/src/scripts/SNR.js
binned
Calculating SNR
* Channel #0 SNR = 5.912e+03, 37.72 db
* Channel #1 SNR = 1.266e+04, 41.02 db
* Channel #2 SNR = 3.081e+03, 34.89 db
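Those numbers line up with theory: averaging four pixels cuts the noise variance by four, and since the SNR script reports dB as 10·log10 of the ratio, that is a gain of 10·log10(4) ≈ 6.02 dB. A quick check of the per-channel ratios from the log above:

```python
import math

# SNR values reported by the two SNR script runs above (native vs. binned).
native = [1.492e3, 3.171e3, 7.792e2]
binned = [5.912e3, 1.266e4, 3.081e3]

for n, b in zip(native, binned):
    print(f"ratio {b / n:.2f}, gain {10 * math.log10(b / n):.2f} dB")
# each channel comes out close to the theoretical 6.02 dB
```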