A Photography forum. PhotoBanter.com

[LONG] Theoretical estimates for film-equivalent digital sensors



 
 
#21 | March 14th 05, 09:54 PM | Ilya Zakharevich

[A complimentary Cc of this posting was sent to paul], who wrote:
As I see the pictures, the eye's sensitivity to chrominance noise is
not much more than 10% of its sensitivity to luminance noise. [But my
eyes are kinda special, so I would appreciate it if somebody else, with
normal vision, confirmed this.]


So that's equal noise on left & right? No doubt the left looks 90% more
noisy.


You mean "LESS noisy"?

No, it is not "equal noise". I'm afraid you need to read the
explanation at the beginning. Besides "equal noise" making little
sense, as you can easily see, the noise on the right is *different*
at the top and at the bottom.

But the numerical noise on the left "is close" to the numerical noise
at the bottom of the right. The visual noise is an order of magnitude
smaller...

Yours,
Ilya
#22 | March 14th 05, 10:06 PM | Ilya Zakharevich

[A complimentary Cc of this posting was sent to HvdV], who wrote:
Hm, I'm not so sure you were very wrong. I don't know much about lens design,
but I do know that errors like spherical aberration scale up in a non-linear
fashion if you increase the aperture.


Sure, but I was not talking about increasing the aperture. I was talking
about using the same lens design *geometrically rescaled* for a larger
sensor size.

BTW, if you keep aperture constant the diffraction spot stays the same. It
scales with the wavelength, the sine of the half-aperture angle, and for
completeness, also the refractive index of the medium.


Yes, this is what I wrote (not in so many words, though ;-).

b) One corollary is that when you scale the sensor size AND LENS up n
times, it makes sense to scale up the size of the pixel sqrt(n)
times. In other words, you should increase the sensitivity of the
sensor and the number of pixels by the same amount - n times.
Interesting...


Sizing up the lens and sensor gets you more information about the object,
with the square of the scale. You can average that information with bigger
pixels to get a better SNR, but you could do that also in postprocessing.


It does not always make sense to do it in postprocessing; in the absence
of readout noise more pixels can be losslessly recalculated into fewer
pixels, but I'm not sure that it is easy to reduce readout noise...

Even without readout noise, assuming that it does not make sense to
rasterize at a resolution (e.g.) 3 times higher than the resolution of
the lens, when you rescale your lens+sensor (keeping the lens
design), you had better rescale the pixel count and sensitivity by the
same amount.

[Additional assumption: the sweet spot is not better than the maximal
aperture of the lens. E.g., the current prosumer 8MP lenses have the
sweet spot at maximal aperture; so if you rescale this design *down*,
the law above does not hold. BTW, rescaling them up from a 2/3'' sensor
to a 36x24mm sensor (a 3.93x rescale) will give, e.g., a 28--200 F2.8 zoom
with corner-to-corner high resolution and a sweet spot at 5.6.]
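The 3.93x example can be checked with trivial arithmetic (a sketch: the
sqrt(n) sweet-spot law is the one stated in this post; the variable names
are mine):

```python
import math

scale = 3.93                  # linear rescale factor: 2/3'' -> 36x24mm
sweet_spot_small = 2.8        # prosumer zoom: sweet spot at maximal aperture

# Under the law above, the sweet spot's f-number grows as sqrt(scale):
sweet_spot_large = sweet_spot_small * math.sqrt(scale)
print(round(sweet_spot_large, 1))   # ~5.6
```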

Yours,
Ilya
#23 | March 15th 05, 10:03 PM | HvdV

Ilya Zakharevich wrote:

Sure, but I was not talking about increase of aperture. I was talking
about using the same lense design *geometrically rescaled* for larger
sensor size.

Sorry for not making myself clear; what I meant was that if you scale up,
keeping the aperture angle constant, aberrations will act up. Suppose for a
small lens you have a manageable deviation from a spherical wave of Pi/4 phase
error; then for a twice larger system parts of the wave will arrive out of
phase at the focus, seriously affecting your resolution. Likely the error is
due to production flaws and imperfect design. To force back the phase error
both the production techniques and the design must be improved. That suggests
that your earlier assumption of steeply rising production costs is true.
If you compare 35mm to MF lenses you see that for the same view angle lenses
tend to have higher f-numbers and are much more expensive, 4x?


It does not always make sense to do it in postprocessing; in the absence
of readout noise more pixels can be losslessly recalculated into fewer
pixels, but I'm not sure that it is easy to reduce readout noise...

CCDs for low-light applications are usually capable of binning pixels to get
around this. I don't know whether this technique is used in any camera.
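For illustration only (plain Python, my own toy numbers): software binning
sums 2x2 blocks of sensel values into one super-pixel. On-chip binning does
the same summation in the charge domain, so readout noise is added once per
bin instead of once per original pixel, which is exactly the advantage that
postprocessing cannot reproduce.

```python
# Toy 4x6 frame of photon counts (arbitrary values for the demo).
frame = [[100 + (r * 7 + c * 3) % 11 for c in range(6)] for r in range(4)]

def bin2x2(img):
    """Sum each 2x2 block into one super-pixel (software binning)."""
    return [[img[2 * r][2 * c] + img[2 * r][2 * c + 1]
             + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]
             for c in range(len(img[0]) // 2)]
            for r in range(len(img) // 2)]

binned = bin2x2(frame)
print(len(binned), len(binned[0]))   # 2 3: a quarter of the pixel count
# The total signal is preserved; shot-noise SNR per super-pixel doubles.
```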

Even without readout noise, assuming that it does not make sense to
rasterize at a resolution (e.g.) 3 times higher than the resolution of
the lens, when you rescale your lens+sensor (keeping the lens
design), you had better rescale the pixel count and sensitivity by the
same amount.

When readout noise is not a key factor it is IMO better to match the pixel
size to the optical bandwidth, making anti-aliasing filters superfluous. With
all the image information in your computer it's then up to the post-processing
to figure out what the image was. Just my hobby horse...

[Additional assumption: the sweet spot is not better than the maximal
aperture of the lens. E.g., the current prosumer 8MP lenses have the
sweet spot at maximal aperture; so if you rescale this design *down*,
the law above does not hold. BTW, rescaling them up from a 2/3'' sensor

Nice point!
But instead of becoming cheaper, scaling down makes more outrageous designs
possible for the same price, in particular larger zoom ranges with similar
apertures. This leads to the feature battle where manufacturers advertise
the MP number and the zoom range.
to a 36x24mm sensor (a 3.93x rescale) will give, e.g., a 28--200 F2.8 zoom
with corner-to-corner high resolution and a sweet spot at 5.6.]

If you scale up the 28--200 F2.0--F2.8 on the Sony 828 to 35mm you indeed
get something unaffordable.


-- Hans
#24 | March 15th 05, 10:43 PM | Ilya Zakharevich

[A complimentary Cc of this posting was sent to HvdV], who wrote:
Sure, but I was not talking about increasing the aperture. I was talking
about using the same lens design *geometrically rescaled* for a larger
sensor size.


Sorry for not making myself clear; what I meant was that if you
scale up, keeping the aperture angle constant, aberrations will act
up. Suppose for a small lens you have a manageable deviation from a
spherical wave of Pi/4 phase error; then for a twice larger system
parts of the wave will arrive out of phase at the focus, seriously
affecting your resolution.


I think we are discussing the same issue in two different languages:
you discuss wave optics, I geometric optics. You mention the pi/4 phase;
I discuss "the spot" where rays going through different places on the
lens arrive.

Assume that "wave optics" = "geometric optics" + "diffraction". Under
this assumption (which I used) your "vague" description is
*quantified* by using the geometric-optics language: the "diffraction
circle" does not change when you scale, while the "geometric optics" spot
grows linearly with the size. This also quantifies the dependence of
the "sweet spot" and maximal resolution (both changing with
sqrt(size)).

So if the assumption holds, my approach is more convenient. ;-) And,
IIRC, it holds in most situations. [I will try to remember the math
behind this.]
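A toy quantification (my own simplification, not taken from the thread):
model the total blur at f-number N as blur(N) = a*N (diffraction,
scale-independent) + b*s/N (geometric spot, growing linearly with the design
scale s). Setting the derivative to zero gives N* = sqrt(b*s/a), i.e. the
sweet spot's f-number grows as sqrt(size), as claimed above.

```python
import math

def sweet_spot(s, a=1.0, b=1.0):
    """f-number minimizing the toy blur model a*N + b*s/N."""
    return math.sqrt(b * s / a)

# Scaling the design up 4x moves the sweet spot by sqrt(4) = 2x:
print(sweet_spot(4.0) / sweet_spot(1.0))   # 2.0
```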

Likely the error is due to production flaws and imperfect design. To
force back the phase error both the production techniques and the
design must be improved. That suggests that your earlier assumption
of steeply rising production cost are true.


Let us keep these two issues separate (as a customer in a restaurant
said: may I have the soup separate and the cockroaches separate?). Rescaling
of a *design* leads to a sqrt(size) increase in the sweet spot; rescaling of
*defects w.r.t. the design* leads to a steep cost-vs-size curve...

Even without readout noise, assuming that it does not make sense to
rasterize at a resolution (e.g.) 3 times higher than the resolution of
the lens, when you rescale your lens+sensor (keeping the lens
design), you had better rescale the pixel count and sensitivity by the
same amount.


When readout noise is not a key factor it is IMO better to match the
pixel size to the optical bandwidth, making anti aliasing filters
superfluous.


I assume that "matching" is as above: having sensor resolution "K
times the lens resolution", for some number K? IIRC, military air
reconnaissance photos were (in the Vietnam era?) scanned several times above
the optical resolution, and it mattered. [Likewise for this 700 MP IR
telescope?] Of course, increasing K you hit a flat return-on-investment
part pretty soon; this is why I chose the low example value
"3" above...

[Additional assumption: the sweet spot is not better than the maximal
aperture of the lens. E.g., the current prosumer 8MP lenses have the
sweet spot at maximal aperture; so if you rescale this design *down*,
the law above does not hold. BTW, rescaling them up from a 2/3'' sensor


Nice point!
But instead of becoming cheaper, scaling down makes more outrageous designs
possible for the same price, in particular larger zoom ranges with similar
apertures. This leads to the feature battle where manufacturers advertise
the MP number and the zoom range.


AFAIU, the current marketing gimmick is dSLRs. [If my analysis is
correct] in a year or two one could have a 1'' sensor with the same
performance as the Mark II (since sensors with QE=0.8 are in production
today, all you need is to scale the design to 12MP and use a "good"
filter matrix). This would mean the 35mm world switching to lenses
which are 3 times smaller, 25 times lighter, and 100 times cheaper (or,
correspondingly, MUCH MUCH better optics).

My conjecture is that today's marketing is based on this "100 times
cheaper" dread. The manufacturers are trying to lure the public into
buying as many *current design* lenses as possible; they expect that
these lenses are going to be useless in a few years, so people will
need to change their optics again.

[While for professionals, who have tens of K$ invested in lenses, dSLRs
are very convenient, for Joe-the-public the EVFs of today are much
more practical; the producers probably use the first fact to confuse the
Joes into buying dSLRs too. Note that EVF development stopped during
the last half year, once EVFs reached the point where they start to
compete with dSLRs, e.g., the KM A200 vs A2 down-grading.]

This is similar to DVDs today: during the last several months, with
Blu-ray in sight, the studios started to digitize films as if there
were no tomorrow...

Thanks for a very interesting discussion,
Ilya
#25 | March 22nd 05, 09:09 PM | Ilya Zakharevich

This discussion got a little bit too long. Here is a short summary.

A lot of people confuse counting photons with counting electrons.
This leads to statements like Roger Clark's

"these high-end cameras are reaching the limits of what
is possible from a theoretically perfect sensor."

Actually, even with sensor technology available now (for
mass-production), the sensitivity of the sensor he
considers can be improved 4.8 times; or the size can be
decreased 2.2 times without affecting MP count, sensitivity, and
noise.

Perfect Bayer-filter sensors have an equivalent film sensitivity of
12000 ISO (taking the noise and resolution of Velvia 50 film as a
reference point). In other words, with such a sensor you get the
equivalent resolution and noise of Velvia 50 film with a 240 times
smaller exposure. For example, for the 36x24mm format one gets a
12Mpixel sensor with 8.5 µm square sensels and a sensitivity of
12000 ISO (calculated to achieve a noise level better than that
of Velvia 50).
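The arithmetic behind this paragraph (just a check of the figures stated
above):

```python
# A 240x smaller exposure at equal noise is a 240x film-speed gain over ISO 50:
iso_equiv = 50 * 240
print(iso_equiv)                       # 12000

# 8.5 µm square sensels tiling a 36x24 mm frame:
mp = (36_000 / 8.5) * (24_000 / 8.5) / 1e6
print(round(mp, 1))                    # ~12.0 megapixels
```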

One illustration: since shooting with an aperture smaller than F/16 does
not fully use the resolution of 8.5 µm sensels, for best results one
should use a daylight exposure similar to F/16, 1/12000 sec. (Of
course, one can also lower the sensitivity of the sensor by
controlling the ADC; this would decrease the noise. E.g.,
decreasing the sensitivity 8 times, one can achieve the noise
of a best-resolution 8x10in shot. However, I have not seen any
indication that noise well below that given by Velvia 50 on 35mm
film results in any improvement of the image...)

Another illustration: the 8Mpixel sensor of the EOS 1D Mark II (recall
that it achieves the noise level of Velvia 50 at sensitivities at or
above 1200 ISO) has a "total" Quantum Efficiency of about 14.1%.

The "total" efficiency of "the sensor assembly" is the product of
the average "quantum efficiency" (in other words, transparency) of the
cells of the Bayer filter and the quantum efficiency of the actual
sensor. To distinguish colors, some photons MUST be captured by the
Bayer filter; however, it is easy to design a filter with an average
transparency of 85% or above. On the other hand, there are currently
mass-produced sensors with QE=0.8; combining such a sensor with such
a filter, one can get a "total" efficiency of 68%. Thus the
sensitivity of the sensor of the EOS 1D Mark II can be improved 4.8 times
without using any new technology...
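In numbers (a check of the figures in this paragraph):

```python
import math

filter_transparency = 0.85        # average Bayer-filter transparency
sensor_qe = 0.80                  # QE of mass-produced sensors
total_qe = filter_transparency * sensor_qe
print(round(total_qe, 2))         # 0.68

gain = total_qe / 0.141           # vs. the Mark II's ~14.1% "total" QE
print(round(gain, 1))             # ~4.8x sensitivity headroom

# Or keep the sensitivity and shrink the frame linearly by sqrt(gain):
print(round(math.sqrt(gain), 1))  # ~2.2x
```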

The last illustration: using the same value QE=0.8, a 12MP sensor
of size 8.8x6.6mm (this is a 2/3'' sensor) has a sensitivity of
655 ISO. (Again, this is with better resolution and noise than
35mm Velvia 50!)

Recall that the 2/3'' format is especially nice, since an affordable
lens in this format should provide the same resolution as a very
expensive lens in the 35mm format. For example, compare a 35mm zoom
having the sweet spot at f/11 with a 2/3'' zoom having the sweet
spot at f/2.8; they have the same resolution at their sweet spots.
Recall also that the 2/3'' lenses have the same depth of field,
a much larger zoom range, allow a 4x shorter exposure at the same
resolution, and due to their 4x smaller size are much easier to
image-stabilize at the sensor level.
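The f/11 vs f/2.8 equivalence follows from the linear scale factor between
the two formats (my arithmetic, using the sensor dimensions given above):

```python
import math

# Linear ("crop") factor between 36x24 mm and a 2/3'' (8.8x6.6 mm) sensor,
# computed from the frame diagonals:
crop = math.hypot(36, 24) / math.hypot(8.8, 6.6)
print(round(crop, 2))           # ~3.93

# Equal sweet-spot resolution: the 2/3'' lens needs a crop-factor
# smaller f-number than the 35mm zoom at f/11:
print(round(11 / crop, 1))      # ~2.8
```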

P.S. One of the conclusions I draw is that nowadays it does not make
sense to ask for equivalents of digicams in terms of film cameras.
35mm film is not good enough to use the full potential of decent
35mm lenses; very soon it will be possible to produce affordable
sensors which will be able to exhaust the potential of these lenses.

It makes more sense to ask what kind of sensor "best suits"
a particular lens...

Ilya

#27 | March 22nd 05, 09:26 PM | Alfred Molon

Ilya Zakharevich says...

The "total" efficiency of "the sensor assembly" is the product of
the average "quantum efficiency" (in other words, transparency) of the
cells of the Bayer filter and the quantum efficiency of the actual
sensor. To distinguish colors, some photons MUST be captured by the
Bayer filter; however, it is easy to design a filter with an average
transparency of 85% or above. On the other hand, there are currently
mass-produced sensors with QE=0.8; combining such a sensor with such
a filter, one can get a "total" efficiency of 68%. Thus the
sensitivity of the sensor of the EOS 1D Mark II can be improved 4.8 times
without using any new technology...


Are you talking about front-illuminated or back-illuminated CCDs here?
--

Alfred Molon
------------------------------
Olympus 4040, 5050, 5060, 7070, 8080, E300 forum at
http://groups.yahoo.com/group/MyOlympus/
Olympus 8080 resource - http://myolympus.org/8080/
#28 | March 22nd 05, 10:05 PM | Ilya Zakharevich

[A complimentary Cc of this posting was sent to Alfred Molon], who wrote:
Ilya Zakharevich says...

The "total" efficiency of "the sensor assembly" is the product of
the average "quantum efficiency" (in other words, transparency) of the
cells of the Bayer filter and the quantum efficiency of the actual
sensor. To distinguish colors, some photons MUST be captured by the
Bayer filter; however, it is easy to design a filter with an average
transparency of 85% or above. On the other hand, there are currently
mass-produced sensors with QE=0.8; combining such a sensor with such
a filter, one can get a "total" efficiency of 68%. Thus the
sensitivity of the sensor of the EOS 1D Mark II can be improved 4.8 times
without using any new technology...


Are you talking about front-illuminated or back-illuminated CCDs here?


Actually, what I saw was that both CCDs and CMOSes can "now" (this was
in papers of 2003 or 2004) achieve a QE of 80%. I do not remember whether
it was front- or back-illuminated for CCDs; probably back-. However, my first
impression was that front-illumination with microlenses can give the same
performance as back-illumination, can it not?

Yours,
Ilya



#30 | March 23rd 05, 09:17 AM | Alfred Molon

Ilya Zakharevich says...

Are you talking about front-illuminated or back-illuminated CCDs here?


Actually, what I saw was that both CCDs and CMOSes can "now" (this was
in papers of 2003 or 2004) achieve a QE of 80%. I do not remember whether
it was front- or back-illuminated for CCDs; probably back-. However, my first
impression was that front-illumination with microlenses can give the same
performance as back-illumination, can it not?


Usually front-illuminated CCDs have QEs in the range of 20-30%, while
back-illuminated ones have QEs of up to 100%.
--

Alfred Molon
------------------------------
Olympus 4040, 5050, 5060, 7070, 8080, E300 forum at
http://groups.yahoo.com/group/MyOlympus/
Olympus 8080 resource - http://myolympus.org/8080/
 




Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 PhotoBanter.com.
The comments are property of their posters.