A Photography forum. PhotoBanter.com

[LONG] Theoretical estimates for film-equivalent digital sensors



 
 
  #31  
Old March 23rd 05, 09:17 AM
Alfred Molon
external usenet poster
 
Posts: n/a
Default

In article , Ilya Zakharevich says...

Are you talking about front-illuminated or back-illuminated CCDs here?


Actually, what I saw was that both CCDs and CMOSes can "now" (it was
in papers of 2003 or 2004) achieve a QE of 80%. I do not remember whether
it was front- or back-illuminated for the CCDs; probably back-. However,
my first impression was that front-illuminated with microlenses can give
the same performance as back-illuminated - can't it?


Usually front-illuminated CCDs have QEs in the range 20-30%, while back-
illuminated ones have QEs up to 100%.
--

Alfred Molon
------------------------------
Olympus 4040, 5050, 5060, 7070, 8080, E300 forum at
http://groups.yahoo.com/group/MyOlympus/
Olympus 8080 resource - http://myolympus.org/8080/
  #32  
Old March 23rd 05, 10:18 PM
Ilya Zakharevich
external usenet poster
 
Posts: n/a
Default

[A complimentary Cc of this posting was sent to Alfred Molon], who wrote in article:
Actually, what I saw was that both CCDs and CMOSes can "now" (it was
in papers of 2003 or 2004) achieve a QE of 80%. I do not remember whether
it was front- or back-illuminated for the CCDs; probably back-. However,
my first impression was that front-illuminated with microlenses can give
the same performance as back-illuminated - can't it?


Usually front-illuminated CCDs have QEs in the range 20-30%, while back-
illuminated ones have QEs up to 100%.


Thanks; probably I was not paying enough attention when reading these
papers. Anyway, I also saw this 100% number quoted in many places,
but the actual graphs of QE vs. wavelength presented in the papers were
much closer to 80%...

Anyway, I would suppose that of this factor of 4.84 which is the current
inefficiency (compared to a QE=0.8 sensor with a good Bayer matrix), at
least about 2..3 comes from using an RGB Bayer filter (and I do not have
the slightest idea why they use RGB). This gives the QE of the "actual"
sensor closer to 30..40%. This is a kind of strange number - too good
for front-illuminated, too bad for back-illuminated. [Of course, the
actual sensor is CMOS ;-]
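
A minimal Python sketch of this arithmetic; the only inputs are the 4.84
inefficiency factor, the QE=0.8 reference, and the assumed Bayer share of
2..3 from the paragraph above:

QE_IDEAL = 0.8             # QE of the reference "good" sensor
TOTAL_INEFFICIENCY = 4.84  # overall light loss relative to that reference

for bayer_share in (2.0, 2.5, 3.0):  # portion of the loss attributed to the RGB Bayer filter
    qe_actual = QE_IDEAL * bayer_share / TOTAL_INEFFICIENCY
    print(f"Bayer share {bayer_share}: implied sensor QE ~ {qe_actual:.0%}")

# Prints ~33%, ~41%, ~50% - i.e. roughly the 30..40% range mentioned above:
# too good for front-illuminated, too bad for back-illuminated.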

Are there any back-illuminated sensors actually used in mass-production
digicams?

Thanks,
Ilya
  #33  
Old March 24th 05, 11:51 AM
Alfred Molon
external usenet poster
 
Posts: n/a
Default

In article , Ilya Zakharevich says...

Are there any back-illuminated sensors actually used in mass-production
digicams?


To my knowledge, no - they are all used for astronomy. The production
process involves thinning the CCD to around 10 micrometres (or something
similarly thin). Then the back side of the CCD, which does not have the
circuitry layers that would obstruct light, is used as the active side.
But either the additional production step is expensive or the resulting
CCDs are too fragile for mass production. Try doing a Google search for
"back-illuminated CCDs".
--

Alfred Molon
------------------------------
Olympus 4040, 5050, 5060, 7070, 8080, E300 forum at
http://groups.yahoo.com/group/MyOlympus/
Olympus 8080 resource - http://myolympus.org/8080/
  #35  
Old March 31st 05, 09:31 PM
HvdV
external usenet poster
 
Posts: n/a
Default

Hi Ilya,

(took me a while to come back to this topic)

I think we speak about the same issue using two different languages:
you discuss wave optics, I geometric optics. You mention a pi/4 phase,
I discuss "the spot" to which rays going through different places on the
lens converge.

Assume that "wave optics" = "geometric optics" + "diffraction". Under
this assumption (which I used) your "vague" description is
*quantified* by using the geometric-optics language: the "diffraction
circle" does not change when you scale, while the "geometric optics" spot
grows linearly with the size. This also quantifies the dependence of
the "sweet spot" and maximal resolution (both changing with
sqrt(size)).

You can use geometrical optics to compute optical path lengths from an object
to any location behind the lens, but to find out what intensity you get there
you need to sum all light contributing to that point and take its phase into
account.
The point I tried to make earlier is that the geometry scales, but the
wavelength doesn't, so scaling up means scaling up phase errors. Take for
example a phase error caused by spherical aberration (SA) between rays
through the center of the lens and those from the rim, causing the rim rays
to be focused in front of the focal plane. Doubling the phase error will at
least double that distance, depending on the aperture angle. To understand
the wild pattern created by all the interfering phase-shifted rays you need to
do the summation mentioned above. All in all this causes quite non-linear
effects on the 2D spot size as you scale the lens, but it also seriously
affects the out-of-focus 3D shape, which is related to the bokeh.
If at the sweet spot (measured in f/d number) the size of the diffraction
spot balances against geometrical errors like chromatic aberration, scaling
of the lens means, as you say, scaling of the geometric spot. For the
unaberrated diffraction spot to match that you need to scale down the
sin(aperture_angle), roughly d/f, linearly. However, camera lenses have many
aberrations which are very sensitive to a change in lens diameter. For
example, SA depends on the 4th power of the distance to the optical axis.
In short, I don't understand how you derive a sqrt(f/d) rule for this.

It might be possible that you can find such a rule empirically by comparing
existing lenses, but then you can't exclude design or manufacturing changes.
For the purpose of this thread that is good enough though.
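
A rough Python sketch of the balance discussed here: the diffraction (Airy)
spot does not change with lens scale, while the geometric blur grows with it.
The geometric-blur model used (blur ~ scale / N^2) is purely an assumed
placeholder; as noted above, real lenses mix several aberrations, so the
exponent of any sweet-spot scaling rule depends on which aberration dominates.

LAMBDA_UM = 0.55  # wavelength in micrometres (green light)

def airy_diameter_um(n):
    """Diffraction (Airy) spot diameter at f-number n; independent of lens scale."""
    return 2.44 * LAMBDA_UM * n

def geometric_blur_um(scale, n, c_um=200.0):
    """Assumed aberration blur: grows linearly with lens scale, shrinks
    as 1/N**2 when stopping down (illustrative model only)."""
    return c_um * scale / n**2

def sweet_spot(scale):
    """f-number at which the two spot sizes balance (coarse search in 0.1 steps of N)."""
    return min((k * 0.1 for k in range(10, 320)),
               key=lambda n: abs(airy_diameter_um(n) - geometric_blur_um(scale, n)))

for scale in (0.5, 1.0, 2.0, 4.0):
    n = sweet_spot(scale)
    print(f"lens scale {scale}: sweet spot ~ f/{n:.1f}, spot ~ {airy_diameter_um(n):.1f} um")

# Under this particular model the sweet-spot N grows like scale**(1/3), not
# sqrt(scale); a different dominant aberration gives a different exponent.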

So if the assumption holds, my approach is more convenient. ;-) And,
IIRC, it holds in most situations. [I will try to remember the math
behind this.]

Please do!

Even without readout noise, assuming that it does not make sense to
rasterize at a resolution (e.g.) 3 times higher than the resolution of
the lens, when you rescale your lens+sensor (keeping the lens design),
you had better rescale the pixel count and sensitivity by the same
amount.

BTW, there are also devices like Electron Multiplying CCDs which tackle
that. No reason why these will not eventually appear in consumer electronics.


When readout noise is not a key factor it is IMO better to match the
pixel size to the optical bandwidth, making anti-aliasing filters
superfluous.



I assume that "matching" is as above: having sensor resolution "K
times the lens resolution", for some number K? IIRC, military
air-reconnaissance photos were (Vietnam era?) scanned at several times
the optical resolution, and it mattered. [Likewise for this 700 MP IR
telescope?] Of course, as you increase K you hit diminishing returns
pretty soon, which is why I chose the low example value "3" above...

'Resolution' is a rather vague term; usually it is taken as the half-intensity
width of the point spread function, or defined using the Rayleigh criterion.
Neither is the same as the highest spatial frequency passed by the lens; for
camera-type optics the 'resolution' is a bit (say 50%) larger than that
highest spatial frequency. In principle it is enough to sample at twice that
frequency, so with the 50% included your 3x is reproduced!
BTW, even a bad lens with a bloated PSF produces something up to the
bandwidth, so in that case the K factor will be even higher.
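
A small Python sketch of that sampling argument for an ideal
diffraction-limited lens (an assumption; aberrations and the Bayer mosaic
only push the required K higher):

LAMBDA_UM = 0.55  # green light, micrometres

def rayleigh_resolution_um(n):
    """Rayleigh-criterion distance for a diffraction-limited lens at f-number n."""
    return 1.22 * LAMBDA_UM * n

def nyquist_pitch_um(n):
    """Pixel pitch that samples the lens cutoff frequency 1/(lambda*N) at twice that frequency."""
    cutoff_cycles_per_um = 1.0 / (LAMBDA_UM * n)
    return 1.0 / (2.0 * cutoff_cycles_per_um)

for n in (2.8, 5.6, 11.0):
    r = rayleigh_resolution_um(n)
    p = nyquist_pitch_um(n)
    print(f"f/{n}: Rayleigh ~ {r:.2f} um, Nyquist pitch ~ {p:.2f} um, ratio ~ {r / p:.1f}")

# The ratio comes out ~2.4 pixels per resolution element, i.e. roughly the
# "K = 3" discussed above once some margin is included.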




AFAIU, the current manufacturing gimmick is dSLRs.

Yes, a sort of horse-drawn carriage with a motor instead of the horse...

[If my analysis is correct] in a year or two one can have a 1'' sensor
with the same performance as the Mark II (since sensors with QE=0.8 are
in production today, all you need is to scale the design to 12MP and use
a "good" filter matrix). This would mean the 35mm world switching to
lenses which are 3 times smaller, 25 times lighter, and 100 times cheaper
(or, correspondingly, MUCH MUCH better optics).

To keep sensitivity when scaling down the sensor (keeping the pixel count,
and with no gain in per-pixel sensitivity available), you need to keep the
aperture diameter as it is, resulting in a lower f/d number, which costs extra.
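
A quick Python sketch of these two scaling arguments side by side. The
frame diagonals are the usual published figures; treating lens weight as the
cube of the linear scale, and the 50mm f/2.8 example, are assumptions for
illustration:

FF_DIAG_MM = 43.3        # 36 x 24 mm frame
ONE_INCH_DIAG_MM = 15.9  # 13.2 x 8.8 mm "1-inch" sensor

scale = FF_DIAG_MM / ONE_INCH_DIAG_MM  # ~2.7, i.e. roughly "3 times smaller"
weight_ratio = scale ** 3              # ~20, i.e. roughly "25 times lighter" if fully scaled

focal_ff, fnum_ff = 50.0, 2.8          # example 35mm-format lens
aperture_diam = focal_ff / fnum_ff     # ~17.9 mm entrance pupil
focal_small = focal_ff / scale         # same field of view on the small sensor

# Hans's point: to collect the same total light you must keep the aperture
# diameter, which forces a much "faster" f-number on the scaled-down lens.
fnum_same_light = focal_small / aperture_diam  # ~1.0

print(f"linear scale ~ {scale:.1f}x, weight ratio ~ {weight_ratio:.0f}x")
print(f"50mm f/2.8 equivalent: {focal_small:.1f}mm at f/{fnum_same_light:.2f} to keep the aperture diameter")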

My conjecture is that today the marketing is based on this "100 times
cheaper" dread. The manufacturers are trying to lure the public to
buy as many *current design* lenses as possible; they expect that
these lenses are going to be useless in a few years, so people will
need to change their optics again.

As 'Joe' I bought a recommended-brand P&S, assuming modern lenses for tiny
CCDs would be fine. It isn't fine - it's abysmal. IMO such cameras and most
dSLRs are not intended to last very long. After all, look at what is
happening to manufacturers that make durable quality cameras (Leica,
Contax); that strategy is not working anymore.

[While for professionals, who have tens of K$ invested in lenses, dSLRs
are very convenient, for Joe-the-public the EVFs of today are much
more practical; probably producers use the first fact to confuse the
Joes into buying dSLRs too; note that EVF development stopped during
the last half year, once EVFs reached the point where they start to
compete with dSLRs, e.g., the KM A200 vs A2 down-grading.]

Hm, yes, I also noticed that the Sony F828 is pretty old...

This is similar to DVDs today: during the last several months, with
Blu-ray in sight, studios have started digitizing films as if there
were no tomorrow...

Thanks for a very interesting discussion,

Likewise, cheers, Hans

  #37  
Old March 31st 05, 11:07 PM
Ilya Zakharevich
external usenet poster
 
Posts: n/a
Default

[A complimentary Cc of this posting was sent to HvdV], who wrote in article:
Even without readout noise, assuming that it does not make sense to
rasterize at a resolution (e.g.) 3 times higher than the resolution of
the lens, when you rescale your lens+sensor (keeping the lens design),
you had better rescale the pixel count and sensitivity by the same
amount.


BTW, there are also devices like Electron Multiplying CCDs which tackle
that. No reason why these will not eventually appear in consumer
electronics.


I think that electron multiplying may be useful only when readout
noise is comparable with the Poisson noise. When you multiply electrons,
the initial Poisson noise is not changed, but the multiplication
constant can vary (e.g., be sometimes 5, sometimes 6 - unpredictably),
so an additional Poisson-like noise is added to your signal.
On the other hand, the readout noise is effectively decreased by the same
factor as the multiplication constant.

It looks like it does not make sense in photography-related settings,
since the current readout noise is low enough compared to the Poisson
noise at what is judged to be "photographically good quality" (S/N
above 20 at 18% gray).
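
A small Python sketch of this noise argument. The 8 e- readout noise and
the sqrt(2) excess-noise factor of the multiplication register are assumed
values for illustration only:

import math

def snr(signal_e, read_noise_e, em_gain=1.0, excess_factor=1.0):
    """SNR of a pixel collecting signal_e photoelectrons."""
    shot_noise = excess_factor * math.sqrt(signal_e)  # Poisson (+ multiplication) noise
    effective_read = read_noise_e / em_gain           # readout noise divided by the gain
    return signal_e / math.sqrt(shot_noise**2 + effective_read**2)

# ~400 e- corresponds to the "good quality" mid-tone (S/N ~ 20 at 18% gray).
for signal, label in ((400, "18% gray, ~400 e-"), (10, "deep shadow, ~10 e-")):
    plain = snr(signal, read_noise_e=8.0)
    emccd = snr(signal, read_noise_e=8.0, em_gain=100.0, excess_factor=math.sqrt(2))
    print(f"{label}: SNR plain ~ {plain:.1f}, with EM gain ~ {emccd:.1f}")

# At photographic signal levels the plain sensor is already shot-noise limited,
# so EM gain only hurts (excess noise); it pays off only when the signal is a
# handful of electrons and the readout noise dominates.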

However, note that in another thread ("Lens quality") another limiting
factor was introduced: the finite capacity of sensels per area. E.g.,
the current state of the art in capacity per area (Canon 1D MII, 52000
electrons per 8.2mkm sensel) limits the size of a 2000-electron cell to
1.6mkm. So without technological change, there is also a restriction
on sensitivity *from below*.

Combining the two estimates, this gives the lower limit of cell size at
1.6mkm. However, I think that the latter restriction is only
technological, and can be overcome with more circuitry per photocell.
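
The arithmetic behind these two numbers, as a short Python sketch. The
52000 e- per 8.2mkm figure is from the post; the 2000 e- target corresponds
roughly to S/N ~ 20 at 18% gray, since sqrt(0.18 * 2000) ~ 19:

import math

FULL_WELL_E = 52000  # Canon 1D Mark II full-well capacity
PITCH_UM = 8.2       # its sensel pitch in micrometres

capacity_per_um2 = FULL_WELL_E / PITCH_UM**2  # ~770 e- per square micrometre

TARGET_WELL_E = 2000  # minimum well for "good" mid-tone S/N
min_area_um2 = TARGET_WELL_E / capacity_per_um2
min_pitch_um = math.sqrt(min_area_um2)        # ~1.6 micrometres, the limit quoted above

print(f"capacity density ~ {capacity_per_um2:.0f} e-/um^2")
print(f"minimum cell pitch for {TARGET_WELL_E} e- ~ {min_pitch_um:.2f} um")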

'Resolution' is a rather vague term; usually it is taken as the half-intensity
width of the point spread function, or defined using the Rayleigh criterion.
Neither is the same as the highest spatial frequency passed by the lens,


Right. However, my impression is that at a lens's sweet-spot f-stop all
these are closely related. At least I made calculations of the MTF
functions of lenses limited by different aberrations, and all the
examples give approximately the same relations between these numbers
at the sweet spot.

To keep sensitivity when scaling down the sensor (keeping the pixel
count, and with no gain in per-pixel sensitivity available), you need to
keep the aperture diameter as it is, resulting in a lower f/d number,
which costs extra.


What happens is you keep the aperture diameter the same, and want to
keep the field of view the same, but with a smaller focal length. This
"obviously" can't be done without adding additional elements.
However, these "additions" may happen on the "sensor" side of the
lens, not on the subject side. So the added elements are actually
small in diameter (since the sensor is so much smaller), and thus much
cheaper to produce. This will not add a lot to the lens price.

Hmm, maybe this may work... The lengths of the optical paths through the
"old" part of the lens will preserve their mismatches; if the added
elements somewhat compensate for these mismatches, the lens will have much
higher optical quality, at a price not much higher than the original.

As 'Joe' I bought a recommended-brand P&S, assuming modern lenses for tiny
CCDs would be fine. It isn't fine - it's abysmal. IMO such cameras and most
dSLRs are not intended to last very long. After all, look at what is
happening to manufacturers that make durable quality cameras (Leica,
Contax); that strategy is not working anymore.


Right. After 3 newer-generation VCRs broke down almost immediately, I
went to my garage, fetched a 15-year-old VCR, and have used it happily
ever after. :-(

Yours,
Ilya
  #39  
Old April 1st 05, 11:54 AM
David Littlewood
external usenet poster
 
Posts: n/a
Default

In article , Ilya Zakharevich
writes

However, note that in another thread ("Lens quality") another limiting
factor was introduced: the finite capacity of sensels per area. E.g.,
the current state of the art in capacity per area (Canon 1D MII, 52000
electrons per 8.2mkm sensel) limits the size of a 2000-electron cell to
1.6mkm. So without technological change, there is also a restriction
on sensitivity *from below*.


"mkm"? Not a recognised unit; could you please clarify.

David
--
David Littlewood
 



