January 6th 07, 08:04 PM, posted to rec.photo.digital
Stephen M. Dunn
Subject: Resolution limit of image sensor

"Marc Wossner" writes:
$Can someone please explain to me if there is a connection between the
$Nyquist sampling theorem and the resolution limit of a digital image
$sensor?

Yes, there's a connection, but it's not the only factor.

My Canon EOS 20D's sensor has a resolution of 3504x2336 (that's
effective pixels; as with most sensors, there are some additional pixels
that don't actually get used as part of the image data). A line pair
needs at least two samples, so Nyquist tells us the sensor can resolve
at most half that many line pairs in each dimension: 1752x1168.
That sets an upper limit on resolution for any given sensor.
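
To put numbers on it, here's a quick back-of-the-envelope sketch in
Python (the 22.5 mm sensor width is approximate):

    # Nyquist limit for the 20D's sensor -- illustrative figures only
    pixels_wide = 3504
    sensor_width_mm = 22.5                              # approximate

    pixel_pitch_mm = sensor_width_mm / pixels_wide      # ~0.0064 mm
    nyquist_lp_mm = 1 / (2 * pixel_pitch_mm)            # 2 px per line pair
    print(round(pixel_pitch_mm * 1000, 1), "um pitch")  # 6.4 um
    print(round(nyquist_lp_mm), "lp/mm Nyquist limit")  # 78 lp/mm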

But there are other factors that come into play. Lenses are
not perfect; they all result in some level of loss of contrast
and/or sharpness. So if you were to quadruple the number of pixels
by doubling the number in each dimension, that wouldn't necessarily
result in images with twice the sharpness or twice the detail, if
for no other reason than that the lens may not be up to the task.
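
One common rule of thumb (an approximation, not an exact model)
combines lens and sensor limits as 1/R_sys = 1/R_lens + 1/R_sensor.
The 60 lp/mm lens figure below is invented for illustration, but it
shows the diminishing returns:

    # Rule-of-thumb combination of lens and sensor resolution limits;
    # the lens limit here is a hypothetical value, not a measured one.
    def system_res(r_lens, r_sensor):
        return 1 / (1 / r_lens + 1 / r_sensor)

    print(round(system_res(60, 78)))    # ~34 lp/mm at today's pitch
    print(round(system_res(60, 156)))   # ~43 lp/mm after doubling pixels

Doubling the pixel count in each dimension buys only about a quarter
more system resolution with this (hypothetical) lens.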

You could also take the same number of pixels and make them
larger. The 20D's sensor is about 22x15mm, so it has approximately
160 pixels per millimeter. The 1D Mark IIN has the same number
of effective pixels but in a ~29x19mm sensor, yielding about
120 pixels per millimeter. So despite the same number of pixels,
a lens that delivers sharper results at 60 lp/mm than at 80 lp/mm
will give you sharper images on the 1D IIN than on the 20D.
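
The same arithmetic, run for both sensors (sizes approximate):

    # Pixel density and Nyquist limit, 20D vs. 1D Mark IIN;
    # sensor widths in mm are approximate.
    for name, px, mm in [("20D", 3504, 22.5), ("1D IIN", 3504, 28.7)]:
        print(name, round(px / mm), "px/mm,",
              round(px / (2 * mm)), "lp/mm Nyquist")
    # 20D 156 px/mm, 78 lp/mm Nyquist
    # 1D IIN 122 px/mm, 61 lp/mm Nyquist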

$ I mean, does it imply something like a lowest mark as far as
$pixel spacing is concerned?

You can certainly increase the maximum resolution that the sensor
can capture by making the pixels smaller. But then you run into
other problems. Noise is a major one. Every sensor has a certain
level of background noise (read noise, dark current, and so on). A
larger pixel captures more photons, so the ratio of signal (photons)
to noise can be pretty good; a smaller pixel captures fewer photons,
yielding a lower signal-to-noise ratio. There's also photon shot
noise: photon arrivals are random and follow a Poisson distribution,
so even if you take a picture of a perfectly uniform subject, some
pixels will receive a few more photons than others. Again, in a large
pixel this random variation is small relative to the total number of
photons, while in a small pixel it can be significant.
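
Here's a sketch of the shot-noise math, with made-up photon counts;
the point is just that SNR grows as the square root of the number of
photons collected:

    import math

    # Poisson shot noise: SNR ~ sqrt(N) for N photons per pixel.
    # The photon counts below are invented for illustration.
    for n in (40000, 10000):  # big pixel vs. one with 1/4 the area
        print(n, "photons -> SNR ~", round(math.sqrt(n)))
    # 40000 photons -> SNR ~ 200
    # 10000 photons -> SNR ~ 100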

If you've ever compared a typical shot at ISO 400 from a compact
digital P&S (which has a relatively small sensor and therefore
tiny pixels) to a typical shot at ISO 400 from a DSLR (which has a
relatively large sensor and therefore large pixels), you'll understand
this in practical terms: the P&S picture is significantly noisier
than the DSLR picture.

There are also data-volume issues as the number of
pixels rises. An 8 MP camera typically produces JPEGs that are
in the 3-4 MB ballpark. A 16 MP camera would produce files that are
about twice that size. How big a file do you need? How big a file
do you want to have to store? How much flash memory do you want to
have to take on holiday with you in order to store all your pictures?
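
Rough storage arithmetic, assuming JPEG size scales more or less
linearly with pixel count (a simplification, since compression
varies with the subject):

    # Back-of-the-envelope card capacity -- assumed sizes, not specs
    jpeg_mb_8mp = 3.5                    # midpoint of the 3-4 MB range
    jpeg_mb_16mp = jpeg_mb_8mp * 16 / 8  # ~7 MB
    card_gb = 2
    print(round(card_gb * 1024 / jpeg_mb_16mp), "shots on a 2 GB card")
    # ~293 shots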

And speed ... the 1D IIN can take about 8.5 frames per second,
with a resolution of 8.2 MP. The 1Ds II has a 16.7 MP sensor and can
only shoot at about 4 frames per second. It's not because the mechanical
bits can't keep up (both cameras are very similar mechanically, and are
based on a film camera, the 1V, which can get up to 10 fps) or because
Canon's engineers got lazy when designing the 1Ds II; there's
simply too much data to be moved around. There are digital backs for
medium-format cameras which yield tens of megapixels, and they
typically can't even do two frames per second, for the same reason.
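
To see why, compare the raw data rates. Assuming 12 bits per
photosite before compression (typical of raw capture in that era),
the two cameras move almost exactly the same amount of data per
second:

    # Approximate sustained data rate, assuming 12-bit raw capture
    def mb_per_sec(megapixels, fps, bits=12):
        return megapixels * 1e6 * bits / 8 / 1e6 * fps

    print(round(mb_per_sec(8.2, 8.5)), "MB/s for the 1D IIN")   # ~105
    print(round(mb_per_sec(16.7, 4.0)), "MB/s for the 1Ds II")  # ~100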
--
Stephen M. Dunn
---------------- http://www.stevedunn.ca/ ----------------

------------------------------------------------------------------
Say hi to my cat -- http://www.stevedunn.ca/photos/toby/