How does a camera sensor work?
What is the Bayer Pattern?
What is meant by Demosaicing?
What is the function of micro lenses in a camera sensor?
The process of making a photograph has become extremely easy in the digital era. One can press a button on a digital camera or smartphone and instantly view the result. This was an unthinkable dream for the artist who once stood inside a camera obscura, painting over the image created by light piercing through a pinhole, just to preserve it. That artist's role has been taken over by the camera sensor: a square, greenish device of remarkable manufacturing precision that generates electronic signals when exposed to light.
How does a camera sensor work?
The camera sensor is covered by an array of millions of small photosites (pixels). When you press the shutter release button, the sensor is exposed to light for a short amount of time (the shutter speed). Photosites in different areas of the sensor receive different numbers of light photons during the exposure, and accordingly they generate electronic signals in proportion to the number of photons each photosite has received.
Up to this point, each photosite only provides information about the brightness of the light at its pixel. The resulting image would be black and white, with each pixel showing a different shade of grey.
Figure 1. A simplified example of how photosites, upon exposure, send electronic signals to the camera's computer, where they are analysed to determine the brightness of each pixel in the final image.
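To make this idea concrete, here is a minimal sketch in Python (with made-up photon counts and an assumed sensor capacity, purely for illustration) of how the signal from a small patch of photosites could be scaled into the grey values of a black-and-white image.

```python
import numpy as np

# Hypothetical photon counts collected by a tiny 4x4 patch of photosites
# during one exposure: more photons means a brighter part of the scene.
photon_counts = np.array([
    [ 1200,  3400,  9000, 15000],
    [  800,  4100, 11000, 14500],
    [  600,  3900, 10500, 15200],
    [  500,  3600,  9800, 14900],
])

# Assumed maximum number of photons a photosite can register before it clips.
full_well_capacity = 16000

# Scale each count to an 8-bit grey value (0 = black, 255 = white).
grey = np.clip(photon_counts / full_well_capacity, 0.0, 1.0) * 255
grey = grey.astype(np.uint8)

print(grey)  # brighter photosites end up as lighter pixels
```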
How does a camera sensor detect colours?
As discussed above, the photosite array on the camera sensor is not able to detect colour. But what if we break the light hitting the photosites down into its primary colours? All colours in our images are combinations of the three primary colours: red, green and blue, the RGB colours. By placing a green, red or blue filter on top of each photosite, we allow only that one colour to pass while the other two are absorbed or reflected.
In reality, 50% of all photosites have green filters, 25% blue and 25% red. The reason for the higher percentage of green filters is that our eyes are far more sensitive to green and can see much more detail in it than in the other colours, so devoting more photosites to green produces final images that appear more detailed. This colour filter array on top of the photosites is called a Bayer pattern (or Bayer filter), and it is very common in camera sensor manufacturing.
Note: The Bayer pattern is the most common colour filter system in camera sensors, but there are other systems as well, such as Foveon, which detects all three colours at each pixel.
Figure 2. For the sake of clarity, the light in this example contains equal amounts of red, green and blue, which together make white. In reality, each part of the sensor may be hit by a different combination of these colours.
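As a rough illustration of the layout described above (an assumed RGGB arrangement, not any particular manufacturer's design), the short Python sketch below labels each photosite of a small sensor with the colour of its filter and confirms the 50/25/25 split between green, red and blue.

```python
import numpy as np

def bayer_mask(height, width):
    """Label each photosite with its filter colour, using the common RGGB layout."""
    mask = np.empty((height, width), dtype="<U1")
    mask[0::2, 0::2] = "R"   # red filters: even rows, even columns
    mask[0::2, 1::2] = "G"   # green filters appear on every row...
    mask[1::2, 0::2] = "G"   # ...which is why they make up half of all photosites
    mask[1::2, 1::2] = "B"   # blue filters: odd rows, odd columns
    return mask

mask = bayer_mask(4, 4)
print(mask)
for colour in "RGB":
    print(colour, f"{(mask == colour).mean():.0%}")  # R 25%, G 50%, B 25%
```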
Now, when the sensor is exposed to light, each photosite generates a signal initiated by photons of a single primary colour: green, red or blue. But there are still problems with this sensor: 1) a (major) part of the incoming light is lost before reaching the photosite, and 2) the other two primary colours are lost entirely. The signal from a photosite therefore reflects neither the true brightness nor the true colour of the light.
This would be true if the camera's computer analysed the signal of each photosite in isolation, but it does not. Each photosite has neighbouring photosites with different filters, so the camera's computer can estimate how much of the other two primary colours was present and, using sophisticated algorithms, accurately calculate the brightness and true colour of each pixel in the final image. This analytical process is called "demosaicing" or "colour reconstruction".
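Real cameras use far more sophisticated, edge-aware algorithms, but the simple neighbour-averaging (bilinear) sketch below shows the basic idea: each missing colour at a pixel is estimated from the surrounding photosites that did measure it. The RGGB layout, function name and kernels here are my own illustration, not code from any camera.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Rebuild full RGB values from a single-channel RGGB Bayer mosaic."""
    h, w = raw.shape
    rows, cols = np.indices((h, w))

    # Which photosites carry which filter (RGGB layout assumed).
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Split the mosaic into three sparse colour planes.
    rgb = np.zeros((h, w, 3))
    rgb[..., 0] = raw * r_mask
    rgb[..., 1] = raw * g_mask
    rgb[..., 2] = raw * b_mask

    # Averaging kernels: missing red/blue values are filled in from the nearest
    # red/blue photosites, missing green values from the four direct green
    # neighbours; measured values pass through unchanged.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    k_g  = np.array([[0.0,  0.25, 0.0 ],
                     [0.25, 1.0,  0.25],
                     [0.0,  0.25, 0.0 ]])

    rgb[..., 0] = convolve(rgb[..., 0], k_rb, mode="mirror")
    rgb[..., 1] = convolve(rgb[..., 1], k_g,  mode="mirror")
    rgb[..., 2] = convolve(rgb[..., 2], k_rb, mode="mirror")
    return rgb

# Tiny synthetic mosaic: every pixel gets a full RGB triple after demosaicing.
raw = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_demosaic(raw).shape)  # (4, 4, 3)
```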
The image format defines how much of the original data is saved
Your choice of image format determines how much of the original data is saved. If you shoot in Raw, the final file contains all of the digital data created during the exposure. If you shoot in JPEG, the camera discards some of that data during colour reconstruction (among other processing steps) to make the file smaller.
To increase the number of photons reaching each photosite, micro lenses are placed above the photosites to gather and concentrate the incoming light. This increases the signal, reduces noise and therefore results in better image quality.
I hope this was useful. If you have any comments or questions, please do not hesitate to send me an email.
Triangle Photo Academy