The aspect ratio of an image or photograph is the ratio of the length of one side to the length of the other. For instance, 35mm film, 4×6″ prints, and full-frame digital sensors all have proportions of 3:2. Most APS-C sensors, used in cheaper dSLRs, are also around 3:2. Images from my Rebel XS are 3888 × 2592 pixels, which is a 3:2 aspect ratio.
Standard definition televisions and many point-and-shoot digital cameras use an aspect ratio that is closer to square: 4:3. The same ratio is used by Four Thirds system cameras and 645 medium format cameras. For instance, my old Canon A570 IS produces images that are 3072 × 2304 pixels, which is a 4:3 aspect ratio.
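You can check an aspect ratio yourself by dividing both pixel dimensions by their greatest common divisor. A minimal Python sketch, using the camera dimensions quoted above:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce pixel dimensions to their simplest whole-number ratio."""
    d = gcd(width, height)
    return width // d, height // d

print(aspect_ratio(3888, 2592))  # (3, 2) -- Rebel XS
print(aspect_ratio(3072, 2304))  # (4, 3) -- A570 IS
```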
What vexes and perplexes me is the fondness digital picture frame manufacturers have for making wide-screen devices. They have ratios like 16:9 and 15:9, which means that images from virtually any commonly used film or digital camera will appear with relatively thick bands of black screen space on either side. This is akin to watching a VHS tape or a standard television broadcast on a wide-screen high-definition television. Given how much digiframe manufacturers charge for screen space (a good 10″ frame costs around $300, whereas 19″ LCD monitors can be had for around $150), it seems foolish for them to throw away so much of it. Why spend $300 on Sony’s DPF-V1000 frame knowing that a good fraction of the screen space will be wasted with every photo you ever display?
A frame with a 3:2 aspect ratio would show images from film and higher grade digicams perfectly, and images from cheaper digicams with only minor bars. It bewilders me that this is not the standard for digital photo frames. It might have something to do with being able to brand them ‘high definition.’ Of course, you can have a 3:2 aspect ratio frame at any level of definition you want: it could be three billion by two billion pixels!
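To put numbers on the waste: when an image is fitted whole onto a frame of a different shape, the black bars consume one minus the ratio of the narrower aspect ratio to the wider one. A rough Python sketch, assuming the image is simply letterboxed or pillarboxed to fit:

```python
def wasted_fraction(frame_ratio, image_ratio):
    """Fraction of the frame left black when an image is fitted whole
    (letterboxed or pillarboxed). Both ratios are width/height."""
    narrow, wide = sorted((frame_ratio, image_ratio))
    return 1 - narrow / wide

print(f"3:2 photo on a 16:9 frame: {wasted_fraction(16/9, 3/2):.1%}")  # 15.6%
print(f"4:3 photo on a 16:9 frame: {wasted_fraction(16/9, 4/3):.1%}")  # 25.0%
print(f"4:3 photo on a 3:2 frame:  {wasted_fraction(3/2, 4/3):.1%}")   # 11.1%
```

So a wide-screen frame throws away roughly a sixth to a quarter of its screen on every photo, while a 3:2 frame loses only about a ninth on 4:3 images and nothing at all on 3:2 ones.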
Aside on ‘megapixels’
It is also worth noting how the number of pixels along the long edge of an image gives a better idea of comparative resolution than the megapixel count. After all, the linear size of each pixel shrinks by half every time you cram twice as many of them along an edge, yet doubling the pixel count along both edges quadruples the megapixel count.
Looking at the pixels along the long edge, it is easy to see that the A570 has 79% of the linear resolution of the Rebel XS. By contrast, reading that the Rebel has a 10.1 megapixel sensor and the A570 has a 7.1 megapixel sensor might lead a customer to misjudge how much more image quality they are getting. The difference gets even more significant with higher end cameras. A consumer might naively think that a 21.1 megapixel 5D Mark II has three times the resolution of my cheap A570 IS. In fact, it produces photos that are 5616 × 3744 pixels; the A570 puts out 55% as many pixels along the long edge.
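The same comparison in a few lines of Python, using the image dimensions quoted above:

```python
cameras = {
    "Canon A570 IS": (3072, 2304),  # 7.1 MP
    "Rebel XS":      (3888, 2592),  # 10.1 MP
    "5D Mark II":    (5616, 3744),  # 21.1 MP
}

a570_edge = cameras["Canon A570 IS"][0]
for name, (w, h) in cameras.items():
    print(f"{name}: {w * h / 1e6:.1f} MP, "
          f"A570 has {a570_edge / w:.0%} of its linear resolution")
```

By megapixels the 5D Mark II looks three times better than the A570 (21.1 / 7.1 ≈ 3.0), but along the long edge the gap is less than a factor of two (5616 / 3072 ≈ 1.8).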
Admittedly, there are many properties of a sensor that are at least as important as resolution, such as noise level at high ISO settings. That is why I argue that – above 6 megapixels or so – resolution ceases to be an important issue in comparing cameras. Factors like noise and dynamic range are much more important.
Megapixels aren’t so unfair.
Measuring a sensor by megapixels is like measuring a flat in square metres – perfectly standard and defensible. The number of pixels rendering any particular object in an image increases with the megapixel count, not with the number of pixels on the long edge.
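Put another way, an object covering a fixed fraction of the frame is rendered by a number of pixels proportional to the total pixel count. A quick sketch, with a hypothetical subject filling 1% of the frame:

```python
def pixels_on_object(width, height, frame_fraction):
    """Pixels rendering an object that covers the given fraction of the frame."""
    return width * height * frame_fraction

# Hypothetical example: a face covering 1% of the frame.
for name, (w, h) in [("A570 IS", (3072, 2304)), ("5D Mark II", (5616, 3744))]:
    print(f"{name}: {pixels_on_object(w, h, 0.01):,.0f} pixels")
# A570 IS: 70,779 -- 5D Mark II: 210,263, close to the 3x megapixel ratio
```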