Understanding "Resolution" 101

Related: Retina display

The word "resolution" has been so misused that the original definition, if I remember it correctly, actually bears little connection to how it is used nowadays.

Let's start at the very beginning, a very good place to start.

The first step in the communication process (from the screen to the person) is "visual acquisition".  Your eye must be able to acquire the image.  The image must be big and bright enough to trigger the photosensitive cells in your retina.  Ignoring brightness, and just considering a single dot to represent the image, there is a minimum size for the dot below which the human eye cannot pick it out.  But this depends on how far away you are from the dot, so a better parameter to quantify this is the angle subtended by the dot at your eye.

Figure 1 - Images of different sizes can subtend the same visual angle, depending on distance

Obviously the minimum subtended angle that can make a dot visible varies from person to person.  And an eagle probably beats every human.
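To make the idea concrete, here is a small sketch (in Python, with dot sizes and viewing distances of my own choosing) that computes the angle subtended at the eye:

    import math

    def visual_angle_arcmin(size, distance):
        # Angle subtended at the eye by an object of the given size seen from
        # the given distance (both in the same unit), returned in arcminutes.
        return math.degrees(2 * math.atan(size / (2 * distance))) * 60

    # A 0.25 mm dot viewed from 500 mm and a 0.5 mm dot viewed from 1000 mm
    # subtend the same angle, about 1.7 arcminutes -- the point of Figure 1.
    print(visual_angle_arcmin(0.25, 500))
    print(visual_angle_arcmin(0.5, 1000))

For reference, normal (20/20) visual acuity is commonly quoted as the ability to resolve detail about one arcminute across, though, as said, the exact figure varies from person to person.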

The second step is image recognition.  To simplify, let's forget about pictures and just restrict the discussion to the letters of the English alphabet.  The minimum number of dots needed to represent each letter of the alphabet, while still keeping every one distinguishable and recognizable (as letters of the English alphabet), is about 5 dots horizontally by 7 dots vertically.  If you have examined an old-fashioned CRT terminal, an old-fashioned moving stock display, or an old-fashioned matrix printer, you would know what I mean.

Figure 2 - Anything less than a matrix of 5x7 would make it difficult to recognize all the letters of the English alphabet as we know them.
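As a quick illustration (the dot pattern is my own, not taken from any particular terminal or printer), here is one way the letter A fits into a 5x7 matrix:

    # One possible letter "A" in a 5x7 dot matrix, printed with '#' for an
    # "on" dot and a space for an "off" dot.
    A_5x7 = [
        "01110",
        "10001",
        "10001",
        "11111",
        "10001",
        "10001",
        "10001",
    ]

    for row in A_5x7:
        print("".join("#" if dot == "1" else " " for dot in row))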

The minimum is 5x7.  But that doesn't mean you can't have more.  Having more doesn't make one letter any more distinguishable from the next.  Technically, anything more than 5x7 is pure waste.  But artistically, using a bigger matrix for each letter allows you to have nice smooth curves, with minute turns, and so on, to create a more visually appealing character.  Now you know why a typeface like Times Roman looks coarse and ugly on "low resolution" displays.  Serifs are just lots of curves, and curves require lots of pixels to achieve a smooth graduating path.

Figure 3 - Letter A drawn in a 50x50 dot matrix and in shades of color (dithering).

The above letter A, when seen from a distance where the subtended angle of each dot becomes barely discernible, will look like this:

Now that you know the reason, you should not say "Times Roman looks ugly on low resolution displays".  The right thing to say is probably: "Times Roman cannot be rendered adequately if each letter is represented by a small matrix of dots"!

"Resolution" is defined as the number of dots per unit distance.  A typical laser printer output is 300 or 600 dots per inch.  That is high resolution.  The LCD screen that I am using now has about 120 dots per inch.  Times Roman can still be rendered beautifully at a resolution of 10 dots per inch, provided you have a gigantic display.

Next, the displayable quantity (and size).  The monitor I am using now has 1,680 horizontal dots and 1,080 vertical dots, and it is 23 inches diagonal.  I can see only so much of a spreadsheet if each character in the spreadsheet is represented in a matrix of 10x14 dots.  Go ahead and do the simple division arithmetic to find out exactly how many rows and columns that is.  A person with a display of 1,024 x 768 would be able to see less of the same spreadsheet than I can.  Even if the person has a 500-inch jumbo display at 1,024 x 768, he will still see less of the spreadsheet than I do on my 23-inch display.  For a display capable of showing 1,024x768, the person sees the same amount of information whether the display is physically 14 inches or 500 inches.  The only difference is that with a larger display, he can see the spreadsheet from further away.  Remember the very first point above about the angle subtended by each dot?  (Hint: if you fit the same number of dots onto a bigger screen, naturally each dot will be bigger.)
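If you don't feel like doing the division in your head, here is the arithmetic spelled out, using the figures above (the 10x14 character cell is the one assumed in the text):

    def cells_that_fit(width_dots, height_dots, cell_width=10, cell_height=14):
        # Number of whole character cells that fit across and down the screen.
        return width_dots // cell_width, height_dots // cell_height

    print(cells_that_fit(1680, 1080))   # (168, 77): 168 columns by 77 rows on my monitor
    print(cells_that_fit(1024, 768))    # (102, 54): fewer, whatever the physical size of the display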

By quantity, or the amount of information, I mean just that.  To explain what I mean: ten letters on a line carry twice the amount of information of five letters on a line.  Ten letters on a line on a 14-inch screen carry the same amount of information as ten letters filling up a line on a 500-inch screen.  I hope you can grasp this in totality; otherwise, please re-read.

I will digress a bit to talk about addressability.  All the previous discussion assumes you have a computer capable of generating that 1024x768 (or whatever "resolution") video signal.  When you plug the cable into the display, the display's input hardware must be able to synchronize with that signal.  That is, it must be able to work out that you are streaming 1024x768 dots per frame.  If a display cannot synchronize with the input signal, the picture (on an analog display) would jump or appear as noise, or the display could even be damaged.  On a digital display, you will usually see a blank screen or an informational message telling you what's wrong.  If the video signal is received properly, then the hardware will present that signal on the display screen.

It is entirely possible for the display to synchronize with a much higher frequency video signal, say 1920x1280, even if the display can show only, say, 1024x768.  The electronics in the display would "greek" the signal, averaging a few dots of the input signal into one dot for the screen.  So a beautiful image would appear as a compressed, smudgy image on this "low resolution" screen.

Most projectors are like this.  The actual projection optics are expensive, and the most common ones today are capable of only 1024x768 (up from 800x600 a few years ago).  However, most projectors today can accept input signals of any "resolution" so as not to inconvenience users.  How the signal is then projected varies.  Some projectors greek it.  Others show a viewport of 1024x768 and allow you to pan to see the bigger picture.
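Here is a minimal sketch of that "averaging a few dots into one" idea.  It uses a plain box average, which is only one of several ways real display hardware scales an image, so treat it as an illustration rather than a description of any particular chip:

    def greek(image, factor):
        # Shrink a grayscale image (a list of rows of 0-255 values) by averaging
        # each factor x factor block of input dots into a single output dot.
        out = []
        for y in range(0, len(image) - factor + 1, factor):
            row = []
            for x in range(0, len(image[0]) - factor + 1, factor):
                block = [image[y + dy][x + dx] for dy in range(factor) for dx in range(factor)]
                row.append(sum(block) // len(block))
            out.append(row)
        return out

    # A 4x4 checkerboard of pure black and white dots collapses into a single
    # mid-grey dot -- the fine detail is smudged away, which is exactly what
    # happens when a 1920-dot-wide signal is squeezed onto a 1024-dot-wide panel.
    print(greek([[0, 255, 0, 255],
                 [255, 0, 255, 0],
                 [0, 255, 0, 255],
                 [255, 0, 255, 0]], 4))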

Now we come to the true use of the word "resolution".  Laser printers have very good resolution; it has been 300dpi since 1990.  The common standard now is either 600dpi or 1200dpi.  Most people cannot tell the difference (subtended angle again) when the resolution is higher than 300dpi.  Traditionally, displays have had the lowest resolution; 100dpi is quite common.  But even 100dpi does not affect reading efficiency if something like ClearType, which uses graduating brightness to substitute for graduating dots, is used.
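That 300dpi figure is consistent with the subtended-angle argument at the start.  Here is a rough back-of-the-envelope check, assuming the commonly quoted one-arcminute figure for normal visual acuity and a reading distance of my own choosing, so the exact numbers are only illustrative:

    import math

    def dpi_limit(viewing_distance_inches, acuity_arcmin=1.0):
        # The dot density at which a single dot subtends exactly the acuity angle;
        # packed any denser, individual dots can no longer be picked out.
        dot_size_inches = viewing_distance_inches * math.tan(math.radians(acuity_arcmin / 60))
        return 1 / dot_size_inches

    print(dpi_limit(12))   # about 286 dpi at a 12-inch reading distance
    print(dpi_limit(24))   # about 143 dpi at arm's length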

Lately, with cell phones, displays have been increasing in resolution without increasing in size.  The iPhone 4 has 326dpi - 960 x 640 in a 3.5" diagonal.  This is wasteful.  Lots of memory and electricity are used to render dots that cannot be consumed, as they are not individually visible to most people.
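The 326dpi figure can be checked from the numbers quoted (the small discrepancy comes from the diagonal being rounded to 3.5 inches):

    import math

    def dots_per_inch(width_dots, height_dots, diagonal_inches):
        # Dots along the diagonal divided by the diagonal's physical length.
        return math.hypot(width_dots, height_dots) / diagonal_inches

    print(dots_per_inch(960, 640, 3.5))    # about 330, close to the quoted 326
    print(dots_per_inch(1024, 768, 14))    # about 91 for a 14-inch 1024x768 display
    print(dots_per_inch(1024, 768, 500))   # under 3 for the 500-inch jumbo display

Note that the last two displays show exactly the same amount of information; only the dot size, and hence the comfortable viewing distance, differs.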

In conclusion, for a given number of displayable dots, if you want to transmit the maximum amount of information, you use the simplest typeface - a cell of 5x7 for each letter.  If you have plenty of dots to spare, then you can have the luxury of showing each letter more artistically with hundreds of dots.  Remember that the size of the display plays no part in the quantity of information you can show if the number of dots is the same.  A bigger display simply means you can see it from farther away.


Comments

shahnawaz khan said…
thank you for such an informative post. i thought i knew what resolution was until i read your post.
