It may come as a surprise to some of you, but nothing is perfect. Take your camera’s light sensitivity, for example. In theory, every pixel should give you the same output value for a given amount of light, right? Wrong. In fact, no matter how much you spent on your camera, it will still have some degree of unevenness in response from one pixel to the next. So what is a poor Machine Vision person to do? One solution is to give it all up and run away with the circus. But that would be silly. A better solution is to determine each pixel’s response to a given light level and apply some kind of correction, in order to achieve a common ideal pixel value. Kind of like evening out a farmer’s field by flattening the bumps and filling in the holes. Except, in our case, the bumps and holes are pixels that are either too bright or too dark.
We start by determining what each pixel’s response is to a known stimulus (that’s fancy talk for capturing a boring, unchanging picture of nothing – you know, gray). It is important to make sure that your picture of nothing is not too bright, though, otherwise your image will be saturated, i.e. every pixel will read the same maximum value and any unevenness will be hidden. Ideally, you should arrange your lighting and object so that you end up with an average pixel value midway between your minimum and maximum pixel values (e.g. if min = 0 and max = 255, use a target pixel value of 128).
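A quick sanity check on that reference image can be sketched like this, assuming a NumPy array called `flat` standing in for the frame your camera would actually deliver (the values here are made up for illustration):

```python
import numpy as np

# Hypothetical flat-field frame: in practice this comes from your camera,
# captured against a uniform gray target.
flat = np.full((4, 6), 120, dtype=np.uint8)

# The mean should sit near the middle of the pixel range (here 0..255,
# so a target around 128), and no pixel should be saturated at 255.
mean_level = flat.mean()
saturated = bool(np.any(flat == 255))
print(round(mean_level), saturated)
```

If `saturated` comes back `True`, or the mean is far from mid-range, adjust the lighting or exposure and grab the reference image again.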
So, now you have an almost even image of nothing. The next step is to calculate a correction factor for each pixel, such that the result will be equal to your target value. In other words, if you have a pixel value of 115 and you want to make it 128, what do you have to multiply 115 by to get 128?
Simple: divide the target by the measured value, giving a correction factor of 128 / 115 ≈ 1.113. Wow! Algebra really works! Now, if we apply our newfound correction factor to the original pixel value, we should end up with our target value: 115 × 1.113 ≈ 128.
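In code, that per-pixel division is a one-liner. Here is a minimal sketch using a small made-up flat-field frame (the `flat` values, including the 115 from the example above, are hypothetical):

```python
import numpy as np

target = 128.0

# Flat-field frame with a deliberately uneven response (hypothetical values).
flat = np.array([[115.0, 128.0],
                 [140.0, 122.0]])

# Per-pixel correction factor: what each pixel must be multiplied by
# so that its flat-field response lands on the target value.
gain = target / flat

# Multiplying the flat frame by its own gains should flatten it to 128.
corrected = flat * gain
print(gain[0, 0], corrected[0, 0])
```

Note that element-wise division gives every pixel its own factor in one operation, so there is no need to loop over the image.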
There you have it. Each pixel gets its own correction factor, and every time you grab a new image, each pixel is adjusted by its own individual factor. That way, your camera’s uneven light response is corrected and you end up with a perfect image every time. I guess some things are perfect after all.
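Putting it all together, the per-frame correction might look like the sketch below. The `gain` array and the raw `frame` values are hypothetical stand-ins for your calibration result and a live capture; the clip-and-round step keeps corrected values inside the sensor’s range:

```python
import numpy as np

def flat_field_correct(frame, gain, max_value=255):
    """Apply per-pixel gain correction to a raw frame and clip to range."""
    corrected = frame.astype(np.float64) * gain
    # Round before converting back, so values like 127.9999 become 128.
    return np.clip(np.rint(corrected), 0, max_value).astype(frame.dtype)

# Hypothetical gains from a flat-field calibration with a target of 128.
gain = np.array([[128 / 115, 128 / 130],
                 [128 / 128, 128 / 140]])

# A new raw frame straight off the sensor.
frame = np.array([[115, 130],
                  [128, 140]], dtype=np.uint8)

print(flat_field_correct(frame, gain))
```

The clipping matters in practice: a pixel that happens to catch more light than it did during calibration can be pushed past the maximum by its gain, and without the clip it would wrap around when converted back to an 8-bit value.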