Deep Learning, Embedded Systems, Signal Processing

Digital color cameras are built around a monochrome sensor with a color filter on top. Each pixel measures light in only a single color: red, green, or blue. The raw result is a grayscale image, as displayed above, where each pixel represents either red, green, or blue light. Color cameras have twice as many green pixels as red or blue, because our eyes are most sensitive to green light, so that channel needs the most detail.
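As a minimal sketch of this sampling, the NumPy snippet below simulates a Bayer sensor by keeping one color sample per pixel; the RGGB layout is an assumption, since the article does not state the exact pattern.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a Bayer sensor: keep one color sample per pixel (assumed RGGB layout)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red   at even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd rows,  even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue  at odd rows,  odd columns
    return mosaic
```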

These rastered grayscale pixels have to be mixed in order to create a proper RGB (red, green, blue) image. Common artifacts of such a mixture are colored edges and so-called zipper effects, which are clearly visible in the right part of the image above. To overcome this issue we downscale the image by 50%. After all, a (general) RGB pixel contains 3 x 8 bits of data, while the sensor has only measured 1 x 8 bits for each pixel. When the image is downscaled by 50%, each output pixel has 4 x 8 bits of data available (1 x 8 bits red + 2 x 8 bits green + 1 x 8 bits blue).
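A rough sketch of that idea, again assuming an RGGB layout: each 2x2 block of the mosaic (1 red, 2 green, 1 blue sample) is collapsed into a single RGB pixel, with the two green samples averaged.

```python
import numpy as np

def debayer_downscale(mosaic):
    """Collapse each 2x2 RGGB block (1 R + 2 G + 1 B samples) into one RGB pixel."""
    m = mosaic.astype(np.float32)
    r = m[0::2, 0::2]
    g = (m[0::2, 1::2] + m[1::2, 0::2]) / 2.0  # average the two green samples
    b = m[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)        # half-resolution RGB image
```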

In the images below you see: (1) The original Bayer image; the same as the one in the image above, but now color coded. (2) A default debayer result; the same as the RGB image above, but downscaled by 50%. (3) The method previously used by Kubiko, in which demosaicing and downscaling are combined and executed on the GPU; this reduces the amount of color / zipper artifacts and is an order of magnitude faster than on a CPU (a sketch of the idea follows below). (4) The result of the exact same method as no. 3, but with debayer filters that have been trained using deep learning.
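The combined demosaic-plus-downscale of method (3) can be expressed as a single strided convolution, which is what maps so well onto a GPU. The sketch below uses simple placeholder 2x2 weights and an assumed RGGB layout; these are not Kubiko's actual filters.

```python
import torch
import torch.nn.functional as F

# 3 output channels (R, G, B), 1 input channel (raw Bayer), 2x2 kernels
# matching an assumed RGGB block layout: [[R, G], [G, B]].
kernels = torch.tensor([
    [[1.0, 0.0], [0.0, 0.0]],   # red:   take the single R sample
    [[0.0, 0.5], [0.5, 0.0]],   # green: average the two G samples
    [[0.0, 0.0], [0.0, 1.0]],   # blue:  take the single B sample
]).unsqueeze(1)                 # -> shape (3, 1, 2, 2)

def demosaic_downscale(bayer):
    """bayer: (N, 1, H, W) float tensor -> (N, 3, H/2, W/2) RGB image."""
    return F.conv2d(bayer, kernels, stride=2)
```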

The image below shows the resulting image filters after training on many Bayer images that were synthesized from a large set of photographs. There is one convolutional filter per color channel, applied with a step size (stride) of 2x2. The darker the pixel, the more its original Bayer value contributes to the resulting RGB pixel.
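A possible training setup for such filters, sketched in PyTorch: one strided convolution maps the raw Bayer mosaic directly to the half-resolution RGB target. Only the one-filter-per-channel layout and the 2x2 stride come from the text; the kernel size, loss, and optimizer below are assumptions.

```python
import torch
import torch.nn as nn

# One strided 2D convolution: 1 Bayer input channel -> 3 RGB output channels,
# i.e. one learned filter per color channel, applied with a 2x2 stride so the
# output is the 50%-downscaled RGB image. kernel_size=5 is a guess.
debayer = nn.Conv2d(1, 3, kernel_size=5, stride=2, padding=2, bias=False)
optimizer = torch.optim.Adam(debayer.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # assumed loss; any pixel-wise loss fits this sketch

def train_step(bayer, rgb_half):
    """bayer: (N, 1, H, W) synthesized mosaic; rgb_half: (N, 3, H/2, W/2) target."""
    optimizer.zero_grad()
    loss = loss_fn(debayer(bayer), rgb_half)
    loss.backward()
    optimizer.step()
    return loss.item()
```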

The green filter is pretty straightforward, but the red and blue filters have converged to an interesting pattern. The center pixel (which is actually a green pixel in the Bayer image) is used quite heavily for the resulting RGB pixel. This increases the amount of detail in the resulting image as it removes aliasing, but it also introduces color bleed; it would make green regions less green. That is, if an area is purely green, the red and blue channels would also receive some value. To overcome this issue, some green has to be subtracted from the red and blue channels again. And indeed, you can see that the green pixels to the left and right of the two red pixels (and above and below the two blue pixels) have a negative value, and thus are subtracted to balance out the colors.

Trained debayer filters, left to right: red, green, blue.
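A toy numeric example of that balancing act, with made-up weights: the positive center green weight adds detail to the red channel, while the negative weights on the neighboring greens cancel it out wherever the region is uniformly green.

```python
# Hypothetical red-channel weights for the green samples in the neighborhood.
center_green_weight = 0.2
side_green_weights = (-0.1, -0.1)

green_value = 200.0  # a purely green region
red_from_green = green_value * (center_green_weight + sum(side_green_weights))
print(red_from_green)  # 0.0 -> the green contributions cancel, no color bleed
```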