Notes on Image Quality, Sensor- and Pixel Sizes

   
 

Figure 1: Image by Pentax

 
 

There is more to camera sensors than merely the number of effective pixels.

 
Camera Specifications: More than Pixel Numbers

 

When I compare specifications for my three older compact cameras and my wife's somewhat newer one, I find, among other things, the following information:

 
 

| | Olympus C-50 Zoom (5 megapixel camera) | Pentax Optio 550 (5 megapixel camera) | Minolta Dimage G-400 (4 megapixel camera) | Olympus mju 760 / Stylus 760 (7 megapixel camera) |
|---|---|---|---|---|
| Sensor type | 1/1.8 inch CCD solid-state image pickup, 5.36 million pixels | 1/1.8 inch interline transfer CCD with a primary color filter, 5.25 megapixels (total pixels) | 1/2.5-inch CCD, primary color filter, total approx. 4.23 megapixels | 1/2.33" CCD (primary color filter), 7,380,000 pixels (gross) |
| Image size (in best quality) | 2560 x 1920 pixels | 2592 x 1944 pixels | 2408 x 1758 pixels | 3072 x 2304 pixels |
| Lens focal length (wide) | 7.8 mm | 7.8 mm | 5.6 mm | 6.5 mm |
| Lens f/ratio (wide) | f/2.8 | f/2.8 | f/2.8 | f/3.4 |
| Lens focal length (tele) | 23.4 mm | 39 mm | 16.8 mm | 19.5 mm |
| Lens f/ratio (tele) | f/4.8 | f/4.6 | f/4.9 | f/5.7 |
| Depth (mm) / weight (g) | 41.5 / 194 | 39.5 / 205 | 23 / 145 | 24.4 / 120 |

Table 1: Selected specifications for our household's "current compact camera park"

 

Of these specs, the image size numbers are readily understood. They are the width and height of our pictures in pixels and, multiplied together, they give us the pixel counts that you will always see advertised (here, in round numbers: 5, 5, 4 and 7 megapixels).

 

Why I mention lens specifications here, when this is supposed to be a note on sensor and pixel sizes, will become obvious soon; suffice it to say for now that optical and sensor specifications are intimately linked in a well-designed digital imaging system.

 

But what about the sensor types; what is the meaning of 1/1.8-inch, 1/2.5-inch and 1/2.33-inch? The "inch" indicates that it has something to do with the physical dimensions of the CCDs, and that is exactly so: these designations are a kind of technical slang left over from the days of TV cathode ray tubes. There is no direct mathematical relation between type designation and absolute size, but each designation is unique and can be looked up in a table; below are some of the sensor types most frequently used in compact digital cameras.

 
 

| Type | Diagonal (mm) | Width (mm) | Height (mm) |
|---|---|---|---|
| 1/3.6" | 5.000 | 4.000 | 3.000 |
| 1/3.2" | 5.680 | 4.536 | 3.416 |
| 1/3" | 6.000 | 4.800 | 3.600 |
| 1/2.7" | 6.721 | 5.371 | 4.035 |
| 1/2.5" | 7.182 | 5.760 | 4.290 |
| 1/2.3" | 7.70 | 6.16 | 4.62 |
| 1/2" | 8.000 | 6.400 | 4.800 |
| 1/1.8" | 8.933 | 7.176 | 5.319 |
| 1/1.7" | 9.500 | 7.600 | 5.700 |
| 2/3" | 11.000 | 8.800 | 6.600 |

-and, for comparison, also some sizes not applied in standard compact cameras:

| Type | Diagonal (mm) | Width (mm) | Height (mm) |
|---|---|---|---|
| 1" | 16.000 | 12.800 | 9.600 |
| 4/3" | 22.500 | 18.000 | 13.500 |
| 1.8" | 28.400 | 23.700 | 15.700 |
| 35 mm film | 43.300 | 36.000 | 24.000 |

Table 2: CCD sensor types and their dimensions

 
Now it begins to get interesting, because we have both the absolute sizes of our CCDs and the number of pixels packed into that (tiny) space. We may first calculate the relative sizes of the individual pixels used in our cameras, and then ask whether we have any knowledge that will tell us something about the image quality we may expect from them. There has been many a heated debate about this, but I shall try to elucidate some of the more important aspects for special applications such as astrophotography.
 
 
Basic Optical Principles

But before that, we had better refresh some trivial but important facts about lens sizes and focal lengths. Firstly, remember that a lens not only "bends" the light to produce an image; it also serves as a "light bucket", and the larger it is in absolute size, the more photons / the more light it will collect for our camera and sensor:

 

Figure 2: Light capture from a remote, point-like source as function of absolute lens aperture

 
Indeed, if we double the absolute aperture size, we collect four times more light from a given subject within a given exposure time. In other words, for a point-like, remote source such as a star, we may expect to get a properly exposed image with the larger lens in a quarter of the time needed with the smaller lens. (A star will never truly be imaged as a point but, due to the laws of optics, rather as a tiny disk. However, these disks are small and the argument still holds very well.)
 
Secondly, let us consider two lenses of the same absolute aperture but with different focal lengths:
 

Figure 3: Image size as function of lens focal length

 
With a larger focal length we get a larger image, but the light will then be spread over a larger area and the image formed will be dimmer. In other words: if we double the focal length, we shall need four times the exposure time for extended objects.
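The two scalings above can be put into one small sketch (my own illustration, not from any camera manual; the 7.8 mm / f2.8 starting point is simply the wide setting from Table 1): light gathered per unit time grows with the lens area, ~D², while for extended objects the image light is spread over an area ~f², so the required exposure time goes as (f/D)².

```python
# Sketch: required exposure time for extended objects scales as (f/D)^2,
# where f is the focal length and D the absolute aperture diameter.

def relative_exposure_time(f_mm, d_mm):
    """Relative exposure time for extended objects, ~ (f/D)^2."""
    return (f_mm / d_mm) ** 2

base = relative_exposure_time(7.8, 2.8)        # reference lens
double_ap = relative_exposure_time(7.8, 5.6)   # aperture doubled (Figure 2)
double_fl = relative_exposure_time(15.6, 2.8)  # focal length doubled (Figure 3)

print(double_ap / base)  # 0.25 -> a quarter of the exposure time
print(double_fl / base)  # 4.0  -> four times the exposure time
```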
 
With these simple factors in mind, let us revert to the question about the possible importance of sensor- and pixel sizes.
 
Apart from the ever increasing pixel numbers, it is obvious that compact pocket digital cameras have become slimmer and slimmer over the last 5-6 years. At the same time, and as a natural consequence of the smaller camera body dimensions, focal lengths have in general also decreased for such cameras. Unnoticed by most, sensor sizes have also decreased somewhat; this again is a logical consequence of the slimmer camera bodies, as lenses with smaller focal lengths require smaller sensors to cover the same field of view (which is mostly around 50 degrees at the wide-field position for these cameras):
 

Figure 4: Sensor/image size as function of lens focal length for a given field of view

 
 
Sizes: Sensors, Images, Lenses and Prints

Let us take a quick "historical overview": I got my first digital camera, the Olympus C-900 Zoom, in 1999. It had a 1/2.7-inch CCD and provided an image size of 1280 x 960 pixels in uncompressed TIFF or best JPEG mode. Focal lengths and ratios were between 5.4 mm, f/2.8 and 16.2 mm, f/4.4. Table 2 then lets me calculate that this CCD had some 240 pixels per millimetre (mm).

 
In 2003 I bought my second camera, the Olympus C-50 Zoom, with specs as per Table 1 above. This one has about 360 pixels per mm on a 1/1.8-inch CCD.
 
My wife's Olympus mju 760 bought in early 2007 has 500 pixels per mm on a 1/2.3-inch CCD.
 
And the most recent Canon Ixus release (2009), the 990 IS, sports a whopping 12 megapixels on a 1/2.3-inch CCD, which gives 650 pixels per mm. This is a 5 x zoom camera with focal lengths between 6.6 mm, f/3.2 and 33.0 mm, f/5.7. (Please take note of these figures - we shall use them later for argument's sake.)
 
For all the cameras mentioned above and/or listed in table 1, let's do the same exercise. This results in the following overview:
 

| Camera | Sensor type | Sensor dimensions (mm) | Image dimensions (pixels) and sizes (MB) | Pixels per mm (rounded) | Max. aperture (mm) | Magn. (linear) |
|---|---|---|---|---|---|---|
| Olympus C-50 Z (5 MP - 3 x zoom) | 1/1.8-inch | 7.176 x 5.319 | 2560 x 1920 (14.1 MB uncompressed) | 360 | 2.78 (W) / 4.88 (T) | 5.0 |
| Pentax Optio 550 (5 MP - 5 x zoom) | 1/1.8-inch | 7.176 x 5.319 | 2592 x 1944 (14.4 MB uncompressed) | 360 | 2.78 (W) / 8.48 (T) | 5.0 |
| Minolta Dimage G400 (4 MP - 3 x zoom) | 1/2.5-inch | 5.760 x 4.290 | 2408 x 1758 (12.1 MB uncompressed) | 415 | 2.00 (W) / 3.43 (T) | 6.3 |
| Olympus mju 760 (7 MP - 3 x zoom) | 1/2.3-inch | 6.16 x 4.62 | 3072 x 2304 (20.3 MB uncompressed) | 500 | 1.91 (W) / 3.42 (T) | 5.8 |
| Canon IXUS 990 IS (12 MP - 5 x zoom) | 1/2.3-inch | 6.16 x 4.62 | 4000 x 3000 (34.3 MB uncompressed) | 650 | 2.06 (W) / 5.79 (T) | 5.8 |
| Pentax *ist DL (6 MP) | n/a (about 1.8-inch) | 23.5 x 15.7 | 3008 x 2008 (17.3 MB uncompressed) | 128 | Interchangeable optics | 1.5 |
| Hypothetical full frame DSLR (12 MP) | n/a | 36 x 24 | 4242 x 2828 (34.3 MB uncompressed) | 118 | Interchangeable optics | 1 |

Table 3: Sensor dimensions, relative pixel sizes and image sizes (in pixels and megabytes) for selected cameras

 
In Table 3 above, I have included my own Pentax *ist DL (6.1 million pixels) DSLR (digital single lens reflex camera) and a hypothetical 12 MP full-frame DSLR for comparison, as we shall need such data for the following discussions. The "Magn. (linear)" column to the right gives the linear magnification required to get the same image width as that of conventional 35 mm film - or of the modern full-frame (DSLR) sensors. Also inserted is a column showing the maximum (absolute, effective) apertures in mm at the widest field (W) and at full zoom (T).

The number of pixels per mm in Table 3 gives a good indication of the relative pixel sizes. Each pixel occupies 1/(number of pixels per mm) of a millimetre on each side, and it is obvious that pixel sizes must have decreased substantially (for the compact pocket cameras) as more and more pixels have been packed into a slightly decreasing total CCD size. It is also clear from Table 3 that individual pixel sizes for the high-end DSLR cameras must be significantly larger.
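The "pixels per mm" column (and the pixel side length it implies) can be recomputed directly from the sensor widths in Table 2 and the image widths in Tables 1 and 3; a quick sketch, where the small deviations from the table come only from rounding:

```python
# Recompute "pixels per mm" and the implied pixel pitch (side length, in
# microns) for the cameras of Table 3.
cameras = {
    "Olympus C-50 Z":      (7.176, 2560),  # (sensor width mm, image width px)
    "Pentax Optio 550":    (7.176, 2592),
    "Minolta Dimage G400": (5.760, 2408),
    "Olympus mju 760":     (6.16, 3072),
    "Canon IXUS 990 IS":   (6.16, 4000),
    "Pentax *ist DL":      (23.5, 3008),
}
for name, (sensor_width_mm, image_width_px) in cameras.items():
    pixels_per_mm = image_width_px / sensor_width_mm
    pitch_um = 1000 / pixels_per_mm
    print(f"{name}: {pixels_per_mm:.0f} pixels/mm, pitch {pitch_um:.1f} um")
```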

What we don't know for sure is the actual size of the active elements (light-sensitive photo diodes), which may differ in shape and packing density as shown in Figure 5.

 

 

Figure 5: CCD Chip layout

(Images by Fujifilm)

 
Sensor size and enlargements

To begin with, let us look at the difference between the Olympus C-50 Z and the mju 760: we got 40% more pixels. Great - then we also got 40% better resolution? No, not quite, because the 1/2.3-inch sensor is only 86% of the 1/1.8-inch CCD in linear dimensions. This means that while we have to magnify the image from the C-50 Z by a factor of 5 to reach a print size comparable to standard 35 mm film, we have to magnify that of the mju 760 by a factor of 5.8, c.f. Table 3 above.

This is the linear magnification required. By area, the smaller sensor is only 74% the size of the larger. In other words, one might say that we must spend some 36% more pixels merely to compensate for the extra magnification required to reach a comparable print size.
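The percentages above follow directly from the two sensor widths in Table 2:

```python
# The arithmetic behind the C-50 Z vs mju 760 comparison: 1/1.8" sensor
# width vs 1/2.3" sensor width (both from Table 2, in mm).
w_118, w_123 = 7.176, 6.16
linear_ratio = w_123 / w_118          # ~0.86 -> 86% in linear dimensions
area_ratio = linear_ratio ** 2        # ~0.74 -> 74% by area
extra_pixels = 1 / area_ratio - 1     # ~0.36 -> ~36% more pixels needed
print(f"{linear_ratio:.0%} linear, {area_ratio:.0%} by area, "
      f"{extra_pixels:.0%} extra pixels to compensate")
```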

 

Figure 6: It takes some enlargement for compact camera CCDs to give the same print size as DSLRs
1: 1/3-inch (4.8 x 3.6 mm) - low-cost applications (toys, surveillance and web cams)
2: 1/2.5-inch (5.76 x 4.29 mm) - compact cameras, older and newer
3: 1/2.3-inch (6.16 x 4.62 mm) - modern, higher-end compact cameras
4: 1/1.8-inch (7.176 x 5.319 mm) - older (prosumer-like) compact cameras and modern prosumer cameras
5: 4/3-inch (18.0 x 13.5 mm) - DSLR cameras of Four Thirds type
6: 1.8-inch (23.7 x 15.7 mm) - common approximate format for many DSLRs, both new and older
7: Full frame (36 x 24 mm) - modern, high-end DSLRs; standard 35 mm film format
 
 
Pixel size and focal length (sampling)

The above issue may in a certain sense be considered related to the issue of proper sampling. By sampling we mean how many pixels it takes to record a (small) object properly. If we are to capture a small, bright circular disk on a sensor whose pixels are large compared to the image of that disk, chances are that photons will be captured on only a few pixels, and the disk will hence look odd and angular. In that case, we need smaller pixels to record the image properly. This is illustrated in the example below:

 

Figure 7: Photos of star Castor in Gemini (The Twins)

Left: 8 sec. exposure at 7.8 mm focal length - Right: 8 sec. exposure at 23.5 mm focal length

Both pictures captured with Olympus C-50 Z at ISO 320

 
Believe it or not, these are real photographs of a star, made on the same night a few seconds apart with my Olympus C-50 Z mounted on a fixed tripod. The pictures have been cropped and blown up to an enormous print size that one would never use for an "aesthetic constellation picture", but they illustrate the issue of sampling very well: the image taken at the shorter focal length needs to be blown up three times as much as the other to give the same size, and the coarseness in the tiny stellar disk is obvious, although not aggravating. (One would speak of moderate undersampling.) The other image, taken at maximum zoom for the C-50 Z, is much smoother. The elongation of the stellar disk is not a result of poor optics but of trailing, i.e. the star moved a noticeable amount (at this enormous enlargement) during the 8-second exposure. Otherwise, one would say that this is a well sampled - almost oversampled - image of the star Castor.

For further comparison, I have inserted a similar picture of another star (Arcturus) captured with an entirely different system: a Pentax *ist DL equipped with a 350 mm f/5.6 mirror tele lens, but enlarged to about the same stellar disk size:

 

Figure 8: Photo of star Arcturus in Bootes

Image captured with a Pentax *ist DL camera and a Tamron SP 350 mm f/5.6 catadioptric lens

2 sec exposure at ISO 400

 
NOW we are talking sampling! Note that, as seen in Table 3, this DSLR camera's pixels are about 3 times larger in linear dimensions - and thus nearly an order of magnitude larger in area - compared to the compact camera used to take the pictures in Figure 7. Yet we get much better sampling, and at just one quarter of the exposure time of the compact camera at comparable ISO settings. (Again: in two seconds a star does trail a visible bit in the field of a 350 mm lens - especially with great additional enlargement thereafter.) We may start to wonder whether big CCDs (even an old 6.1 MP one from 2005) have something to offer that just cannot be delivered by smaller CCDs.
 
Anyway, one conclusion is clear: for a given focal length, there is an optimum (range of) pixel size(s). The overall sensor size is determined by the desired field width (c.f. Figure 4 above), and thus, since there are constraints on useful pixel sizes due to the optics of the design, pixel numbers are not determined solely by the manufacturers' desire to benefit the consumer with more and more pixels for better and better image quality.
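One way to see why the optics constrain useful pixel sizes is a standard rule of thumb from optics (my own addition, not something stated above): even a perfect lens images a star as a diffraction ("Airy") disk of diameter roughly 2.44 x wavelength x f-number. Comparing that spot size with the pixel pitches from Table 3 hints at when stars are under- or oversampled:

```python
# Rough diffraction-spot size vs pixel pitch (illustrative rule of thumb).
def airy_disk_um(f_number, wavelength_um=0.55):  # 0.55 um ~ green light
    """Approximate Airy disk diameter in microns: 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

pitch_compact_um = 1000 / 650  # ~1.5 um: 12 MP 1/2.3" compact (Table 3)
pitch_dslr_um = 1000 / 128     # ~7.8 um: Pentax *ist DL (Table 3)

# At f/2.8 the spot spans a couple of compact-camera pixels; at f/5.6 it
# is comparable to a single DSLR pixel.
print(airy_disk_um(2.8), pitch_compact_um)
print(airy_disk_um(5.6), pitch_dslr_um)
```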
 
 
Pixel size and sensitivity

"Well", you might say, "I can understand that pixels must not be too large for a given set of optics to achieve proper sampling but, surely, I can make my pixels as small and get as many as possible - for my particular overall sensor size - as technological advances allows"? Again, yes and no. We are discussing a double-sided truth because there are also other things such as sensitivity and noise to consider..........

 
The sensitivity aspect is readily explained: suppose you have two rain gauges of different sizes. One has a diameter of 40 mm and the other of 80 mm. After a good night's rain you go out and find that both gauges tell you there was a rainfall of 19 mm that night. Now, the one with the smaller diameter has captured some 478 raindrops (because a raindrop is typically 1/20 millilitre), while the larger one has captured some 1912 raindrops during the same time of exposure to the same rain. That is because the large one has 4 times as large an aperture area as the smaller one.
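The raindrop numbers are easy to verify from the gauge geometry:

```python
# Checking the rain gauge example: 19 mm of rain falling on 40 mm and
# 80 mm diameter gauges, with one raindrop taken as 1/20 millilitre.
import math

def raindrops(diameter_mm, rainfall_mm, drop_ml=0.05):
    """Number of raindrops collected: gauge area x rainfall / drop volume."""
    volume_ml = math.pi * (diameter_mm / 2) ** 2 * rainfall_mm / 1000
    return volume_ml / drop_ml

print(round(raindrops(40, 19)))   # ~478 drops
print(round(raindrops(80, 19)))   # four times as many (4x the area)
```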

It is the same with pixels: they build up charge in proportion to the number of photons they capture, and for any given light intensity that number is proportional to the (active) area of the pixel (the photo diodes seen in Figure 5 above). Thus, suppose you have a pixel that has caught enough photons to generate a charge of 10,000 electrons. That charge will be read out and used in the generation of the image accordingly.

Next, replace that pixel with four pixels of half the side length and expose them to the same light for the same time. Their combined surface area is the same, so obviously they will together also generate 10,000 electrons. But that means that each pixel will only generate a charge of 2,500 electrons. Therefore, in order to get the same readout as before, we need to expose the smaller pixels four times longer.

Figure 9: Rain gauges

 
"But that's only a problem for point-like sources like in Figure 2", you might say, "and who cares about point-like sources? If I look at Figure 3, I see that smaller focal lengths will give me brighter images and shorter exposure times as long as I keep the same aperture". Well, for one person, I DO care about stars and other point-like sources. But, moreover, the argument about smaller and brighter images only holds to the point where you have near-perfect sampling - i.e.: where the pixel size very well matches the focal length. If you add more, smaller pixels to your sensor than that, you WILL get over-sampling and dimmer images. You may gain something in resolution - up to a point - but you will definitely loose in sensitivity.

Furthermore, you should be aware that the reasoning about four times the exposure time for the smaller pixels was based upon the assumption that the total number of photons was the same in the two cases. But that is just not the case in practice. Take a look at the absolute apertures calculated in Table 3 (they are easily computed as the focal length divided by the f-number). The newer cameras have apertures near 2 mm at the wide-field setting of the zoom lens, while the older ones have apertures near 2.8 mm. Since the number of photons available is directly proportional to the open lens area, we see that only (2/2.8)² = 51% of the photons available to the older cameras are available to the newer ones. This is not the "CCD's fault"; it is the overall design goals for the cameras that dictate this development. Again we see that CCD and optical characteristics cannot be dealt with independently.
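The photon budget comparison is a one-liner, using the apertures from Table 3:

```python
# Photon budget of the newer compacts vs the older ones: the number of
# available photons scales with the open lens area, i.e. with D^2.
d_new, d_old = 2.0, 2.8     # absolute apertures in mm (wide setting, Table 3)
fraction = (d_new / d_old) ** 2
print(f"{fraction:.0%}")    # ~51% of the photons remain
```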

You might of course then ask why the manufacturers do not increase lens sizes to get a higher effective aperture. This is a question of cost and manufacturing techniques. It is very difficult - or at least costly - to manufacture a highly curved lens - i.e. one with a short focal length - with a large surface. DSLR owners, who can change their optics "at will", may confirm that glass can indeed be a very costly commodity: a high-quality, large-aperture wide-field lens for a DSLR may easily cost more than a kit of camera body plus run-of-the-mill zoom lens!

We may now begin to understand why contemporary cameras of the very compact type, with many, many megapixels, are acclaimed for excellent pictures in well-lit outdoor scenes, while indoor shots in ambient light may sometimes be characterized as somewhat disappointing...

...and then there is also the issue of noise.

 
 
Pixel size and noise

The problem with noise in digital photography is, as I understand it (and I am only a layman), a three-headed monster:

Firstly, you have "hot", "stuck" or "dead" pixels. This may not really be "noise" in the strict, physical sense of the word, but for us as consumers it may be just as annoying in our pictures as "real" noise: due to almost inevitable tiny flaws in the manufacturing process, a few pixels will not respond to incoming light the way they should. "Hot" or "stuck" pixels give excessively high readouts upon exposure to light, while "dead" pixels do not respond to light at all. "Dead" pixels will usually not be very prominent in your image, and there is not much you can do about them. "Hot" or "stuck" pixels will show up in your picture as highly localized small dots of light (often coloured) covering some 3-5 adjacent pixels. The effect usually becomes pronounced after a couple of seconds of exposure - that is, at long exposures in low-light situations.

As said, this is a manufacturing quality issue and is not dependent on sensor or pixel sizes or design as such. The effect may change over time as the CCD ages, and it may also depend on exposure time and temperature. For critical applications, you may eliminate this effect by taking a "dark frame" at the same exposure time, ISO setting and temperature and subtracting that dark frame from your image. Since the effect may vary as the CCD ages, the dark frame should preferably be made immediately after the exposure of your image. Some cameras have this procedure built in as an automated noise-reduction feature that kicks in at longer exposure times (typically from 2 seconds and up).
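The dark-frame idea is simple enough to sketch with synthetic data (my own toy numbers, not from any real camera): the hot pixels repeat in both frames and therefore cancel out in the subtraction.

```python
# Minimal dark-frame subtraction sketch with NumPy on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.poisson(5, size=(8, 8)).astype(float)  # faint image signal
dark = np.zeros((8, 8))
dark[2, 3] = dark[5, 6] = 200.0                    # two "hot" pixels

light_frame = scene + dark   # exposure of the subject (hot pixels included)
dark_frame = dark            # same exposure with the lens capped
corrected = light_frame - dark_frame               # hot pixels cancel

print(corrected.max() <= scene.max())  # True: hot pixels removed
```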

 

Figure 10: Hot pixels  usually appear as point-like light sources

Olympus C-50 Z image of belt stars in Orion

for details, see Figure 11 below

 
Another way of dealing with "hot"/"stuck" pixels is to build automatic noise-reduction filtering algorithms into the camera firmware. This is much like, say, radar mapping in topography: if you suddenly get a spike of 20,000 metres altitude in a flat landscape, your mapping software will automatically discard that data point. However, that is not always the preferable way to go in digital photography. The technique may work fine for "candle-light shots", but in wide-field astrophotography those tiny dots representing faint stars may mistakenly be interpreted as "hot" pixels and be filtered out of the in-camera processed image (as my Pentax Optio 550, for example, does at low ISO settings). A feature where you may turn off automatic noise reduction (as I can on my Pentax *ist DL) would be most welcome for such applications!
 

Figure 11: Automatic (aggressive) noise-reduction or not?

Belt stars and sword in Orion shot with:

a.: Olympus C-50 Z at 7.8 mm FL f/2.8; 8 sec. exposure at ISO 160

b. Pentax Optio 550 at 7.8 mm FL f/2.8; 2 sec. exposure at ISO 400

for further explanations see text below

 
Figure 11 presents a comparison between two different shots of the same subject - as comparable as can be: the Optio's longest exposure time is 4 seconds, and furthermore, at ISO 200 all stars but the very brightest are removed due to aggressive post-processing. The C-50 applies a more gentle noise removal. After merging and cropping the two shots, the composite image has been enhanced in PhotoImpact using SmartCurves. As a result: a. shows some hot pixels but good star images; b. shows no hot pixels but also fewer faint stars. Had I taken the shot in b. at ISO 200 or 100, only the brighter stars would show in this scene.
 
Secondly, there is the "noise floor" or "(electrical) background noise" or "self-noise" to deal with. All electrical appliances produce noise when powered on, and a CCD is an electrical appliance. A well-designed electrical apparatus will have reduced that noise towards the theoretical minimum, but the laws of physics tell us that some self-noise will inevitably remain. This noise is erratic in that it varies over time from pixel to pixel, but it fluctuates around an average level that increases with temperature. You may compare it with the "white noise" hiss from speakers, radios, TV sets etc. that are not tuned to any specific station. The output of a pixel - and thus the number of photons captured - has to be larger than the level of self-noise in order to contribute to the digital image.

And as we have learned above, smaller pixels have a lower sensitivity than larger ones. Therefore, one requires more light, one way or the other, before the smaller pixels can build up a decent picture. Again we see that smaller cameras may have trouble with low-light situations where DSLRs, with their generously large sensors, pixels and optics, perform to the photographer's creative desires. So complaints about low-light performance in the smaller cameras may not be attributable to poor craftsmanship on the designers' or manufacturers' side, but rather to the laws of nature.

 
Thirdly, there is the statistically determined "image noise". The capture of photons in the sensitive element of a CCD pixel is a statistical process: two photons of the same colour (same colour = same wavelength = same energy) may hit the same pixel; yet one will be absorbed, generate charge and contribute to the image, while the other will not. Thus, if you point your camera towards an object of uniform brightness and colour, such as the clear blue sky, all pixels will on average receive photons in equal numbers and of equal energy, but their resulting charges will not be exactly the same - there will be a certain image noise.

This noise increases (the deviations from the proper signal value grow) as the number of captured photons grows. But it only grows proportionally with the square root of the number of photons, while the signal itself grows proportionally with the number of captured photons. Hence the signal-to-noise ratio (the quality of the signal) grows proportionally with the square root of the number of captured photons: signal-to-noise ratio = S/N = P / sqrt(P) = sqrt(P), where P is the number of photons captured during the exposure. It is a bit the same as with the 1912 versus 478 raindrops above: you expect higher accuracy from your rain gauge the larger its sampling area (aperture) is and thus the more raindrops you collect. In mathematical terms, we talk about the requirement of "a statistically significant sample" (of all the raindrops that fell to the ground) to work with.
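The S/N = sqrt(P) relation can be checked numerically: photon capture follows Poisson statistics, for which the standard deviation is the square root of the mean, so simulating many identical pixels reproduces it directly.

```python
# Shot-noise sketch: simulate photon capture as a Poisson process and
# verify that S/N grows as the square root of the photon count P.
import numpy as np

rng = np.random.default_rng(1)
for photons in (100, 10_000):
    samples = rng.poisson(photons, size=100_000)  # many identical pixels
    snr = samples.mean() / samples.std()
    print(f"P={photons}: S/N ~ {snr:.1f} (sqrt(P) = {photons ** 0.5:.1f})")
```

A hundredfold increase in captured photons thus buys only a tenfold improvement in signal quality.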

 
The example in Figure 12 below illustrates the occurrence of image noise as seen when you have a very bright, uniform surface as your light source:
 

Figure 12: Pictures of clear, blue sky and (statistical) image noise

a. Olympus C-50 Z photo; b: Pentax Optio 550 photo

Upper part: Crops of un-processed, un-enlarged original images

Middle part: Crops of the above, greatly enlarged but otherwise unprocessed

Lower part: Same crops enhanced using AutoContrast in PhotoImpact

 
The sky appears uniform in both upper pictures, as it would in an ordinary photograph. When one looks closer at the seemingly uniform sky by enlarging the pictures excessively, we see that, very locally, there are indeed differences in the readouts from the individual pixels. This is the image noise. Undoubtedly, there has been some automatic noise reduction inside the cameras before the pictures were delivered, but the result of the contrast enhancement in the lowest part of the figure shows that there is plenty of both signal and signal variation left to respond to post-processing of the original images.
 
Quite a different lesson may be learned from long exposures at low light, as the night photos of Orion in Figure 13 illustrate:
 

Figure 13: Photos of the night sky (Orion)

a: Olympus C-50 Z shot; 8 sec. exposure at ISO 160 b: Pentax Optio 550 shot; 4 sec. exposure at ISO 100

Upper parts are crops of original images merged and enhanced using Level in PhotoImpact.

Lower parts are crops, highly enlarged from the above.


 

The two pictures in Figure 13 are not fully comparable (exposure x ISO being much higher in a than in b), but that is not the main point either. Here is the story I want to tell: normally, unprocessed images straight out of the camera take about the same amount of disk space, around 2-2.5 MB. And indeed, so do the original images of the otherwise "blank" blue sky used for Figure 12 above because, as we saw, there are subtle differences in the colour values from pixel to pixel in these images. However, in these shots the original Olympus image occupies 2 megabytes, while the Pentax image occupies a little less than 200 kilobytes - i.e. 10 times less! Clearly, something has happened here. The images have been enhanced quite a bit, as one usually has to for astrophotos on small CCDs in order to show more than just the brightest stars. But the Olympus image responds quite differently to this post-processing than the Pentax: in the Olympus picture the image noise becomes obvious, whereas the Pentax image has become a flat, basically 2-bit, black-and-white image.

So what is the explanation? Clearly, from all the other comparisons we have made and the knowledge we have compiled so far, the CCDs cannot be that different (in fact, for these two camera models the CCD manufacturer was one and the same). The answer lies in the different ways manufacturers deal with image noise in small pixels - particularly at the lower ISO settings.

 
Here the sensitivity of large versus small pixels comes into play again. Because large pixels capture more photons than small ones, the S/N will always be better for the larger pixels under all exposure circumstances. In Table 3 above we see that an older 6.1 MP and a newer 12 MP full-frame DSLR both have pixel sizes around 8 µm (1 µm = "1 micron" = one millionth of a metre, or a thousandth of a millimetre), while a new 12 MP compact camera has pixels as small as 1.5 µm. The DSLR pixels are thus almost 30 times larger by area (using those rounded figures) than modern pocket camera pixels, and it now makes sense that some DSLR manufacturers increase the CCD size to full frame in their high-end DSLRs at the same time as they increase the number of pixels.
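The pixel sizes quoted here follow from the "pixels per mm" column of Table 3; with the unrounded figures the DSLR-to-compact area ratio comes out nearer 26x (it is about 28x if one uses the rounded 8 µm and 1.5 µm values):

```python
# Pixel pitch (side length) from Table 3's "pixels per mm" column, and
# the resulting DSLR-to-compact pixel area ratio.
pitch_dslr_um = 1000 / 128      # Pentax *ist DL: ~7.8 um
pitch_compact_um = 1000 / 650   # 12 MP 1/2.3" compact: ~1.5 um
area_ratio = (pitch_dslr_um / pitch_compact_um) ** 2
print(f"{pitch_dslr_um:.1f} um vs {pitch_compact_um:.1f} um, "
      f"area ratio ~{area_ratio:.0f}x")
```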
 
We may summarize the issue of noise via the below, crude schematics:
 

Figure 14: Schematic presentation of noise sources in a digital image

 
In Figure 14 we follow a row of pixels across the CCD after a certain exposure time, where the signal (yellow part) has had time to build up. Hot pixels may not appear until after some time of exposure. Self-noise fluctuations (grey part) will be distributed over all pixels. As discussed, hot and dead pixels are a manufacturing/quality issue and common to both large and small CCDs. Self-noise should not be, and usually is not, a result of poor design or poor manufacturing quality, but only a result of the electrical circuits needed, and it cannot be removed due to the physical laws involved. But in the resulting picture, image noise and self-noise will be much more pronounced the smaller the pixels are.

Once again, the manufacturers may build in algorithms that allow for more advanced noise removal in the processing within the camera before the final image is delivered. But to me this is a dubious approach, as you will inevitably also level out subtle differences - such as texture - that are real, c.f. Figure 13.b above. There, the noise reduction has been so efficient that I am left with a velvet-black night sky (which it never is at my place) at the expense of extinguishing all but the brightest stars. Another example, from the realm of macro photography, is shown in Figure 15.

I certainly prefer to have as much differentiation as possible to work with at first and then do the noise reduction at a later stage. You may pick up a beautiful stone at the beach and polish it vigorously to get a nice, silky smooth surface. But that smooth stone will not tell you what pebbles and stones at the seaside are truly like.

 

Figure 15: A closer look at my handkerchief

a. Cropped and enlarged part of an image straight out of my camera (Pentax Optio 550 in macro mode - 1/1000 sec f/7.7 at ISO 200)

b. Same part after very aggressive (manual and deliberate) noise reduction in PhotoImpact

 
Finally, it should be mentioned that the combination of reduced sensitivity and degrading signal-to-noise ratio sets a practical limit to how far you may stop down your lens (reduce the absolute, effective aperture). If you study the product literature, you will discover that most compact digital cameras cannot be stopped down, whether automatically or manually, beyond an f-ratio of around f/8. Thus, not only are we constrained in respect of maximum aperture due to the interrelationship between sensor size and lens specifications; we are also restricted in respect of minimum aperture due to pixel size. This limits all compact cameras in their ability to cope with high dynamic ranges as well as in their overall capability to work with extended depth of field.
 
It may sound unfair, but "Small isn't exactly beautiful" when it comes to pixel size and noise.
 
 
Numbers of pixels and information contents in a digital image

Irrespective of all that is said above about pixel sizes, the higher the number of pixels we have, the more information our digital image will contain - simply because of the increased number of colour values we have to define the image. Right?

Of course that is right - to a certain extent!

If we only had one large pixel covering the whole of our CCD area, we would at best get information about the average grey-scale level (i.e. between black and white) of the light available for our exposure. If we had four - one red, one blue and two green - we might at best get the average light temperature (colour) too, for whatever sense that would make. So, of course we need many, many more pixels to define a proper image, and the more we add, the better off we are.

But remember what we have learned above: that really only holds if we are allowed to increase our sensor size and optics without limit, and only provided that we carefully match sensor/pixel size and optics. In practice, however, we do impose constraints in respect of camera dimensions and sensor sizes. Further, we have seen that the smaller the pixels get, the poorer our signal-to-noise ratio will be.

This leads me to one, last note on pixel sizes and image quality: In normal consumer use, output images are of the JPEG file format. File sizes are typically around 1.5 MegaBytes (MB) for a 4 Megapixel (MP) camera; 2-2.5 MB for a 5 MP camera; 3.5 for a 7 MP camera and 6 MB for a 12 MP camera.

However, the maximum limit in information content for a picture of a given pixel size is 3*8 bits (256 levels per colour channel), or 3 bytes (1 byte = 8 bits), times the number of effective pixels used to produce the image. If you consult Table 3 above, you will find the maximum image sizes in MegaBytes for all the cameras listed there. Thus, a 5 MP camera will have around 14 MB as its maximum image file size, while a 12 MP camera will have around 34 MB. So what happens with all this theoretically available information when going from a 34 MB maximum to a practical output of 6 MB for a 12 MP camera? The answer is that this information is lost during reduction and compression in the camera's internal routines for production of the final JPEG image. This is because JPEG is a so-called lossy file format: producing a JPEG picture involves compression of the original image data by grouping pixels of slightly varying colours and assigning each such group a single average colour value. The principle is illustrated in the following schematics:
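The arithmetic above can be sketched in a few lines (the pixel counts are illustrative round numbers, and the typical JPEG sizes are the rough figures quoted earlier, not exact specifications):

```python
# Maximum uncompressed image size: 3 bytes per pixel (one byte each for
# red, green and blue at 8 bits per channel) times the pixel count.
def max_image_size_mb(megapixels):
    bytes_total = megapixels * 1_000_000 * 3   # 3 bytes (24 bits) per pixel
    return bytes_total / (1024 * 1024)         # convert bytes to MB

for mp, typical_jpeg_mb in [(4, 1.5), (5, 2.25), (7, 3.5), (12, 6)]:
    print(f"{mp:2d} MP: max ≈ {max_image_size_mb(mp):5.1f} MB, "
          f"typical JPEG ≈ {typical_jpeg_mb} MB")
```

For 5 MP this gives about 14.3 MB and for 12 MP about 34.3 MB, in line with the "around 14 MB" and "around 34 MB" figures above.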

 

Figure 16: Principle for lossy image (JPEG) compression

 
In Figure 16 we have, to the left, four pixels constituting part of an image, each with its own shade of pure green. This represents the raw image in the camera. After processing, the three "almost identical" pixels will be grouped and represented by just one shade of green (I am exaggerating for the sake of illustration), and we end up with only two green shades in the final image where we started out with four. And once compressed this way, there is no way back to re-construct the original image. (Which is why you should never save the various working stages of important pictures in JPEG format several times over.)
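The grouping idea can be sketched as a toy function - a deliberate simplification for illustration only, not the actual JPEG algorithm (which works on frequency coefficients of 8x8 pixel blocks), with the tolerance value chosen arbitrarily:

```python
def group_similar(values, tolerance):
    """Replace runs of near-identical values with their common average.
    Once averaged, the original individual values cannot be recovered."""
    groups, current = [], [values[0]]
    for v in values[1:]:
        if abs(v - current[-1]) <= tolerance:
            current.append(v)        # close enough: same group
        else:
            groups.append(current)   # too different: start a new group
            current = [v]
    groups.append(current)
    # every member of a group is replaced by the group's average shade
    return [round(sum(g) / len(g)) for g in groups for _ in g]

# four green shades, of which three are "almost identical"
original = [200, 202, 201, 120]
print(group_similar(original, tolerance=5))  # -> [201, 201, 201, 120]
```

As in Figure 16, four shades go in and only two distinct shades come out, and nothing in the output tells us that the first three values were ever different.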

Now, my older Olympus and Pentax compact cameras provide me with the option to have my image output as uncompressed 8-bit TIFF files, i.e. they represent the original image, no information is lost, and they do produce files that are about 14 MB large! Modern DSLR cameras, such as the two types mentioned in Table 3, do not offer uncompressed TIFF images as an option alongside JPEG, but rather RAW files, which are lossless file types. The RAW format(s) represent an advanced compression technique where data is sorted and packed in such a manner that file sizes are smaller than the maximum file sizes (though not as small as JPEG compression yields); but when they are unpacked in the imaging software provided with the cameras, the original images may be regenerated without loss of information. One may thereafter save them as uncompressed TIFF or as compressed JPEG at one's own discretion.
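The lossless principle is easy to demonstrate. Here Python's general-purpose zlib compressor stands in for the cameras' proprietary RAW packing (the actual RAW schemes differ per manufacturer; this is only an analogy): repetitive data packs into fewer bytes, yet the pack/unpack round trip reproduces the original bytes exactly.

```python
import zlib

# some "raw image data" with repetitive structure, as real images have
raw = bytes([200, 202, 201, 120] * 1000)

packed = zlib.compress(raw)         # considerably smaller than the original...
restored = zlib.decompress(packed)  # ...but fully recoverable

print(len(raw), len(packed))
assert restored == raw              # lossless: not a single byte lost
```

With lossy JPEG, by contrast, no such round trip exists; the averaging step described above discards the original values for good.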

But the newer compact cameras in the 7 - 12 MP range, then? They provide neither uncompressed TIFF nor lossless compressed RAW - only lossy JPEG! In other words, in these newer cameras information is irretrievably thrown away.

It is often argued that 34 MB file sizes are uncomfortably large to work with and store. But DSLR owners are expected to work with uncompressed TIFF files of the same size (or even twice as large, in 16-bit TIFF formats). And why not give consumers the choice of output type, so that they may work with the files first and compress the images afterwards? Why 12 MP in the first place??

Part of the explanation may be that all those extra MB of information are in fact used to provide a better definition of the resulting JPEG image "before they are thrown away". However, I also suspect - but this is purely guesswork from my side - that providing uncompressed data, as with my older compacts, is no longer feasible. Perhaps uncompressed images from a modern compact camera would look strangely noisy, considering what we now know about small pixels and noise? I don't know. The camera manufacturers should be able to tell us.

Anyway, the information content available in images from modern compact cameras is decidedly smaller than the sheer number of pixels indicates at first sight.

All in all, there is definitely a limit beyond which it becomes senseless to implant more and more, smaller and smaller pixels into a constant, and in practice very confined, CCD area. Has this limit been reached? I am not competent to say for sure; the camera designers would know. But to me, the reported problems with sensitivity and noise in low-light situations are a hint that we may be close to the limit with the current compact camera designs.

 
 
CCD or CMOS?

So far, I have just referred to camera sensors as "CCDs" (Charge Coupled Devices) and not mentioned the alternative sensor type CMOS (Complementary Metal Oxide Semiconductor) at all. This is deliberate, as quality compact cameras have almost exclusively used CCDs until today (mid-2009). In later years, CMOS has entered into DSLR designs, and recently also into some current "prosumer" camera designs - i.e. larger cameras with more advanced specifications, somewhere between compact and DSLR cameras.

All that has been said above applies to both CCD and CMOS sensors - simply because both types use photodiodes (PD) in their individual sensor elements (pixels).

The main difference in design is that each PD in a CMOS has its own receiver-amplifier circuit embedded in the sensor chip itself, while the PDs in CCDs need more complicated external circuitry before the analogue signals (charges) can be converted to the digital signals forming the raw image in the camera.

Thus, until around 2005 the "common consensus" was that CMOS sensors were cheaper to manufacture, less power hungry and less complex to build into camera applications, while CCDs had an advantage in sensitivity, noise characteristics and signal uniformity (the latter due to the fact that the many tiny amplifier circuits in a CMOS would differ slightly from each other).

Development already ongoing at that time, and achievements since then, have resulted in a current-day status where the two technologies are comparable in most respects as far as photographic applications are concerned. Complexity and manufacturing costs for quality CMOS sensors have gone up, while their noise characteristics and the issues regarding non-uniformities have been significantly improved; some of these improvements have meant increased complexity in the overall system architecture of CMOS-based cameras. On the other hand, relative manufacturing costs have been reduced for CCDs, as has their power consumption.

For the ordinary consumer, it is unlikely that there will be any visible differences today in what matters most, namely image quality and convenience of use. At least, I believe, this holds true for the larger (DSLR) cameras.

Figure 17: CMOS Pixel Layout - schematic and real example

- c.f. CCD chip layout in Figure 5 above

(illustrations by BroadcastEngineering and ???)

 
 

Conclusions

My first digital compact camera (a 1.3 megapixel Olympus C-900 Zoom) cost around EURO 800 in 1999. Today, you may get some three quality cameras for that money.

My current higher-end cameras from around 2003 were about two thirds that price. They are 5 megapixel cameras with a weight around 200 grams and a depth around 40 mm.

Contemporary (mid-2009) consumers may easily find a camera in the 10 megapixel- and 150 to 200 EURO- ranges, weighing and filling only about half that of my 2003-models.

So, in a sense today’s consumers not only get the camera they are paying for – they get more than that! And they get it in an easily portable go-everywhere package. This is surely one of the up-sides of the development in the market, as the ownership and use of a camera has become very much “democratized”.

But does all this mean that consumers of today get a camera twice as good as my higher-end cameras from 2003?

I do not think so.

The frantic megapixel-race, the fierce competition on prices and the general trend towards smaller camera dimensions have by necessity involved some sacrifices and compromises along the road. If you have followed me through the discussions above, you will understand that there are limits to the picture quality that you may expect for both theoretical/basic physical reasons (such as pixel sensitivity and noise) and practical reasons (such as manufacturing issues with small short-focal-length optics).

Figure 18: Two 1/1.8" and one 1/2.5" cameras
   

A lot of in-camera post-processing in respect of noise reduction and contrast/sharpness enhancement after the picture has been taken is not a real substitute for a good quality “raw” picture that you may use and print “as is” or use as the basis for further creative work in your digital dark-room. In addition, the options for using manual and remote controls have as a general rule been removed, or at least greatly reduced, in today’s compact consumer cameras. Thus, and in addition to the limitations in regard to noise and dynamic range, the photographer’s freedom to do creative work with his or her compact camera has been reduced too. Instead, one may have to opt for current days’ “prosumer” cameras - somewhere in the middle between compacts and DSLRs. But then, one will once again have to go with a more bulky and less easily go-everywhere package.

Personally, I have grown very fond of the high-end compact cameras from the early 2000s with their many manual controls and other advanced features. Although more bulky than those of today, I can – and do – carry them along in a shirt pocket. Also, I don’t need more than 5 megapixels for even very great enlargements, not to speak of ordinary 10 x 15 cm (4 x 6”) prints or monitor displays. I shall stick to these cameras as long as I can, and if (when) they fail some day (and repair should no longer be an option), I think I shall find myself looking for an old, used “only 5 megapixels” substitute…

 
2009-06-19 / Steen G. Bruun

PhotoHome

Copyright © 2009 - Steen G. Bruun