A Photography forum. PhotoBanter.com

Resolution limits of cameras



 
 
#1: Paul Ciszek, May 29th 13, 04:33 AM, posted to rec.photo.digital

There was a recent thread about how many megapixels of resolution various
lenses could support. Without getting into the specifics of any brand of
camera or lenses, I would like to share some relevant math, and look at
this problem from a different angle.

Dawes' Limit for the resolution of any optical system is:

R = 116/D

Where the resolution is in arcseconds and D is the diameter of the
aperture in mm. Converting to radians for convenience, I get:

R = 5.62E-4mm/D

Where 5.62E-4mm is 0.000562mm, or about the size of a wavelength of
yellow light. Remember, this is the ideal limit for a lens limited only
by diffraction. Now, angular sizes in radians can also be converted to
sizes on the camera's sensor by multiplying by the focal length, which I
will call F for now:

s = R*F or R = s/F

If we set s equal to the spacing between pixels of the camera's sensor,
the situation where the finest detail you can resolve with a perfect
diffraction limited lens is the same size as the resolution of the
camera's sensor is:

s/F = 5.62E-4mm/D

Doing a little algebra, this becomes:

F/D = s/5.62E-4mm

Now F/D, the focal length of the lens divided by the diameter, is what
we are used to calling the f number, so I will substitute lower case f:

f = s/5.62E-4mm

In other words, the pixel spacing on the camera's sensor translates into
a maximum f number beyond which the camera will be limited by
diffraction. For less-than-perfect real world lenses, the limit will be
even more restrictive. Likewise, red light with its longer wavelength
will diffract worse than yellow light. Plugging in the numbers for the
sensor in an Olympus OM-D (3456 pixels in a sensor 13mm high), I get
f/6.7; this camera is not diffraction limited for low f numbers, but it
really can't make use of any more pixels than it currently has; I hope
Olympus gives up the megapixel arms race and just concentrates on ISO,
lens quality, and autofocus speed. Looking up the specs for a Canon
700D, I get a little better, f/7.7, but it looks like they should stop
trying for more megapixels as well. Using the math the other way
around, it looks like a full frame sensor and a perfect f/8 lens could
make use of up to 42 megapixels, in theory. Since real world lenses and
images containing red light can't even approach this limit, it looks to
me like we are getting close to the end of the useful megapixels even
for full frame. We may already be there.
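The arithmetic can be sketched in a few lines of Python. The OM-D figures are the ones quoted above; the 14.9 mm sensor height assumed for the 700D and the 36x24 mm full frame are taken from the usual published specs, not from this thread:

```python
# Diffraction-limited f-number from pixel pitch, using f = s / 5.62E-4mm.
WAVELENGTH_MM = 5.62e-4  # ~562 nm (yellow-green light), expressed in mm

def max_f_number(sensor_height_mm, pixels_high):
    """f-number beyond which even a perfect lens out-blurs the pixel pitch."""
    pitch_mm = sensor_height_mm / pixels_high
    return pitch_mm / WAVELENGTH_MM

def max_megapixels(width_mm, height_mm, f_number):
    """Pixel count at which the pitch matches the diffraction limit at f_number."""
    pitch_mm = f_number * WAVELENGTH_MM
    return (width_mm / pitch_mm) * (height_mm / pitch_mm) / 1e6

print(round(max_f_number(13.0, 3456), 1))      # Olympus OM-D: 6.7
print(round(max_f_number(14.9, 3456), 1))      # Canon 700D: 7.7
print(round(max_megapixels(36.0, 24.0, 8.0)))  # full frame at f/8: 43
```

As noted above, real lenses and longer (red) wavelengths only tighten these numbers.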

If anyone can find a flaw in my math, please point it out.

--
Please reply to: |"We establish no religion in this country, we command
pciszek at panix dot com | no worship, we mandate no belief, nor will we ever.
Autoreply is disabled | Church and state are, and must remain, separate."
| --Ronald Reagan, October 26, 1984
#2: David Taylor, May 29th 13, 09:04 AM, posted to rec.photo.digital

On 29/05/2013 04:33, Paul Ciszek wrote:
> [...]
>
> If anyone can find a flaw in my math, please point it out.


I've not checked the maths, but would ask what should be considered as a
"pixel". Is it the size on the sensor, the size of an RGBG quad (which
makes up one full-colour pixel), or something in between? How does the
AA filter affect the effective size of the pixel? Just two points to
consider.

My own feeling on this is that there is no problem in having the sensor
out-resolve the lens by a moderate amount, as long as the overall noise
in the full image doesn't suffer. As has been discussed previously,
resolution is not a hard limit, but the point at which the MTF reaches
some arbitrary value. There will be some information beyond a
resolution limit defined in that way.
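That MTF view can be made concrete: for an aberration-free lens the diffraction MTF at spatial frequency v is (2/pi)*(acos(v/vc) - (v/vc)*sqrt(1 - (v/vc)^2)), with cutoff vc = 1/(lambda*N). A sketch, carrying over the 5.62E-4mm wavelength from the post above (the sample frequencies are arbitrary choices):

```python
import math

def diffraction_mtf(freq_lpmm, f_number, wavelength_mm=5.62e-4):
    """MTF of an ideal (aberration-free) lens at a given spatial frequency."""
    cutoff = 1.0 / (wavelength_mm * f_number)  # lp/mm where contrast reaches zero
    v = freq_lpmm / cutoff
    if v >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(v) - v * math.sqrt(1.0 - v * v))

# At f/8 with ~562 nm light the cutoff is about 222 lp/mm, but contrast
# falls smoothly long before that; "resolution" depends on which MTF
# threshold you pick, which is exactly the point being made above.
for freq in (50, 100, 150, 200):
    print(freq, round(diffraction_mtf(freq, 8.0), 2))
```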
--
Cheers,
David
Web: http://www.satsignal.eu
#3: nospam, May 29th 13, 11:38 AM, posted to rec.photo.digital

In article , David Taylor
wrote:

> I've not checked the maths, but would ask what should be considered as a
> "pixel". Is it the size on the sensor,

it's the size of one pixel on the sensor (more accurately called a
sensel).

> the size of an RGBG quad (which
> make up one full-colour pixel), or something in between?

an rgbg quad does *not* make up a full colour pixel. bayer doesn't work
that way.

> How does the
> AA filter affect the effective size of the pixel?

it doesn't.

it band limits the maximum resolution though.
#4: Floyd L. Davidson, May 29th 13, 12:12 PM, posted to rec.photo.digital

nospam wrote:
[...]
>> How does the
>> AA filter affect the effective size of the pixel?
>
> it doesn't.
>
> it band limits the maximum resolution though.

That is to say, it affects "the effective size of the
pixel" for calculating maximum resolution.

The AA filter makes a difference. And the Bayer filter
does too, though the "size of an RGBG quad" isn't a
direct proportion due to the unequal spatial sampling
for red and blue compared to green.

Regardless, the problem with the calculations was that
they showed when the image should become diffraction
limited. But we don't use lenses only at f/8, nor is
diffraction the only limiting factor. They also did not
account for potential processing to reduce the effects of
diffraction, and they assumed resolution would be the
only reason to increase the pixel density.

--
Floyd L. Davidson http://www.apaflo.com/
Ukpeagvik (Barrow, Alaska)
#5: Wolfgang Weisselberg, May 29th 13, 04:40 PM, posted to rec.photo.digital

Paul Ciszek wrote:

> Where 5.62E-4mm is 0.000562mm, or about the size of a wavelength of
> yellow light.

Green light is more important.

> If we set s equal to the spacing between pixels of the camera's sensor,
> the situation where the finest detail you can resolve with a perfect
> diffraction limited lens is the same size as the resolution of the
> camera's sensor is:
>
> s/F = 5.62E-4mm/D

[...]

> In other words, the pixel spacing on the camera's sensor translates into
> a maximum f number beyond which the camera will be limited by
> diffraction.

[...]

> If anyone can find a flaw in my math, please point it out.

How about the flaws in your assumptions? (The fact that
cameras don't have full RGB-pixels has already been named.)

Do you think that oversampling gives no advantages?

-Wolfgang
#6: Paul Ciszek, May 29th 13, 05:46 PM, posted to rec.photo.digital


In article ,
David Taylor wrote:

> I've not checked the maths, but would ask what should be considered as a
> "pixel". Is it the size on the sensor, the size of an RGBG quad (which
> makes up one full-colour pixel), or something in between? How does the
> AA filter affect the effective size of the pixel? Just two points to
> consider.


Well, what I meant was pixel *spacing*--there is no point in "sampling"
the focal plane at points closer together than the optical resolution.
Obviously it is nice to have the pixels take up as much of that space as
possible, to get the best signal to noise, but the size of the pixels
did not figure into my math.

But you bring up a good point about the color quads. You could say: "I
don't have a bunch of pixels spaced 3.75um apart; rather, I have an
array of red sensing pixels spaced 7.5um apart, another array of blue
sensing pixels spaced 7.5um apart, and another array of green sensing
pixels spaced 5.3um apart. The fact that these pixels are all 3.75um
away from each other is totally irrelevant."
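Those per-channel spacings follow directly from the pitch. A quick check, assuming a standard Bayer layout with the 3.75 um pitch quoted above:

```python
import math

pitch_um = 3.75  # sensel pitch, taken from the figures quoted above

# In a Bayer mosaic, same-colour red (or blue) sensels repeat every two
# pixels along rows and columns, while green sensels form a diagonal
# lattice whose nearest same-colour neighbour is pitch * sqrt(2) away.
red_blue_spacing = 2 * pitch_um
green_spacing = pitch_um * math.sqrt(2)

print(round(red_blue_spacing, 1))  # 7.5
print(round(green_spacing, 1))     # 5.3
```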

#8: Eric Stevens, May 30th 13, 10:54 AM, posted to rec.photo.digital

On Thu, 30 May 2013 01:20:33 -0700 (PDT), RichA
wrote:

> On May 28, 11:33 pm, Paul Ciszek wrote:
>
>> [...]
>>
>> If anyone can find a flaw in my math, please point it out.
>
> Blue light?

http://www.picturescolourlibrary.co....go/2552962.jpg
--

Regards,

Eric Stevens
#9: David Taylor, May 31st 13, 07:21 AM, posted to rec.photo.digital

On 31/05/2013 04:07, RichA wrote:
[]
> The problem with absolute calculations on resolution is that they
> don't work, except for high-contrast point sources, hence the "Dawes
> Limit." Under different circumstances, the Dawes Limit (separation of
> two points) has been exceeded (I've seen it), and under others, it
> never reaches (low contrast scenes) what it prescribes. Also, ultra
> high contrast can cause things to be visible up to 50 times smaller
> than resolution dictates they should be, because of scattering, either
> in the human eye or on a sensor.
> Please see: J.B. Sidgwick, "The Amateur Astronomer's Handbook."

Maybe thinking in MTF terms, rather than a simple fixed number for
resolution, would help understanding?
--
Cheers,
David
Web: http://www.satsignal.eu
#10: Martin Brown, May 31st 13, 09:14 AM, posted to rec.photo.digital

On 31/05/2013 07:21, David Taylor wrote:
> On 31/05/2013 04:07, RichA wrote:
> [...]

You can rely on the RichA troll to post misleading, apparently
authoritative junk. The Dawes/Rayleigh limit does what it says on the
tin: it is the separation at which two equally bright point sources
can be separated with a just visible minimum in between them. You can
tell that there are two point sources from the non-round shape.

The Dawes limit is a pretty good criterion for resolution, which is why
it gets used. Dawes accepted a 5% dip minimum whereas Rayleigh chose
26%. You can obviously resolve slightly tighter with a smaller dip in
the middle.

http://en.wikipedia.org/wiki/Rayleigh_limit#Explanation
(isn't perfect but it is a lot less misleading than RichA)
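The ~26% figure can be reproduced numerically: place two equal Airy patterns at the Rayleigh separation and measure the dip at the midpoint. A sketch in pure Python (J1 computed from its integral form; separations measured in units where the first Airy zero sits at x = 3.8317):

```python
import math

def bessel_j1(x, steps=2000):
    """J1(x) via the integral form J1(x) = (1/pi) * int_0^pi cos(t - x sin t) dt."""
    h = math.pi / steps
    total = sum(math.cos((i + 0.5) * h - x * math.sin((i + 0.5) * h))
                for i in range(steps))
    return total * h / math.pi

def airy(x):
    """Normalised Airy pattern intensity; airy(0) == 1."""
    if abs(x) < 1e-9:
        return 1.0
    return (2.0 * bessel_j1(x) / x) ** 2

def midpoint_dip(separation):
    """Fractional dip between two equal incoherent point sources."""
    peak = airy(0.0) + airy(separation)  # intensity at one source's centre
    mid = 2.0 * airy(separation / 2.0)   # intensity halfway between them
    return 1.0 - mid / peak

rayleigh = 3.8317  # first zero of the Airy pattern
print(midpoint_dip(rayleigh))  # ~0.265, i.e. the ~26% dip Rayleigh adopted
```

Closer separations give shallower dips, which is why the Dawes criterion (a roughly 5% dip) sits slightly inside the Rayleigh one.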

If you have enormously high signal to noise and a very well
characterised point spread function, then you can in practice do about
3x better than the classical limit on point sources, using non-linear
positivity-constrained deconvolution techniques such as maximum entropy.

It is also worth pointing out that on Earth the Dawes limit is usually
hammered by poor atmospheric seeing for larger apertures, and it is only
with the advent of cheap webcams and lucky-imaging techniques that the
true performance of even 8" and 10" amateur scopes has been realised.

In a few special theoretical cases in Fourier optics you can in
principle get an infinite resolution improvement, but they are
irrelevant to real observations with measurement noise. A simple
example: knowing that the sky is everywhere positive, and having
measured

DC = 1
cos(wt) = 1

there is only one solution that can match the required positivity
constraint: a delta function at the origin.

You can see or detect much finer very-high-contrast linear structures
than the Dawes limit would imply, because they are not dots! Basically,
if there is enough signal to noise, you get a one-pixel-wide faint
feature with a slightly different intensity to its neighbours.

Rilles (canyons) on the moon are the classic example. The issue is
further complicated by the fact that the eye is rather good at detecting
linear detail and in some cases inventing it as happened with the
"canals on Mars" debacle. The human observer is not objective.

> Maybe thinking in MTF terms, rather than a simple fixed number for
> resolution, would help understanding?

The problem for camera lenses is that, with a handful of exceptions,
they are not diffraction limited at apertures wider than about f/5.6.

--
Regards,
Martin Brown
 



