#51
Alfred Molon wrote:
> In article , Ilya Zakharevich says...
>> Are there actual back-illuminated sensors used in mass-production
>> digicams?
> To my knowledge no - they are all used for astronomy. The production

Good cameras for fluorescence microscopy use them too. I guess the
efficiency gain is not sufficient to justify the current price difference
( $1) for use in digicams.

-- hans
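The efficiency argument can be made quantitative: in the photon-shot-noise-limited regime, SNR grows only with the square root of quantum efficiency, so even a large QE gain buys a modest SNR gain. A minimal sketch (the QE values 0.40 for front-illuminated and 0.90 for back-illuminated are illustrative assumptions, not figures from this thread):

```python
import math

def snr_shot_limited(photons, qe):
    """Shot-noise-limited SNR: QE*N electrons detected,
    Poisson noise sqrt(QE*N), hence SNR = sqrt(QE*N)."""
    return math.sqrt(qe * photons)

photons = 10_000                 # photons hitting one sensel (assumed)
qe_front, qe_back = 0.40, 0.90   # illustrative QE values

gain = snr_shot_limited(photons, qe_back) / snr_shot_limited(photons, qe_front)
stops = math.log2(gain ** 2)     # equivalent exposure advantage in stops

print(f"SNR gain: {gain:.2f}x (~{stops:.2f} stops of sensitivity)")
```

Under these assumed numbers the back-illuminated sensor gains a factor of 1.5 in SNR, a bit over one stop of effective sensitivity, which makes the price-versus-benefit question concrete.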
#53
[A complimentary Cc of this posting was sent to
HvdV ], who wrote in article :

>> However, note that in another thread ("Lens quality") another limiting
>> factor was introduced: the finite capacity of sensels per area. E.g.,
>> the current state of the art in capacity per area (Canon 1D MII, 52000
>> electrons per 8.2mkm sensel) limits the size of a 2000-electron cell to
>> 1.6mkm. So without technological change, there is also a restriction on
>> sensitivity *from below*.

> One advantage of the EMCCDs is their speed: up to 100fps. One could use
> that speed for example for smart averaging including motion compensation,
> depth of focus manipulation in combination with moving the focus, have to
> stop here before getting carried away....

To do this, you need low readout noise. The data for the Canon 1D MII
(readout noise about 12 electrons) prohibit making more than about 4
"subexpositions" per exposition (without significant reduction of noise).

> 'Resolution' is a rather vague term; usually it is taken as the Half
> Intensity Width of the point spread function, or using the Rayleigh
> criterion. Both are not the same as the highest spatial frequency passed
> by the lens,

Right. However, my impression is that at a lens' sweet-spot f-stop, all
these are closely related. At least I made calculations of MTF functions
of lenses limited by different aberrations, and all the examples give
approximately the same relations between these numbers at the sweet spot.

> The theoretical bandlimit is not affected by the aberrations, but the
> 50% MTF point of course is, strongly.

My point was that with all kinds of individual aberrations I checked, at
the sweet spot the 20% MTF point WITH aberrations was approximately at the
same percentage of the cutoff frequency (given by diffraction). From this
it follows that the particular "numeric" performance at the sweet spot
should be quite predictable.

Of course, the quality of the lens image cannot be described by one
number; so when the lens is at the sweet spot for one parameter (e.g.,
radial MTF at 1/4 of the diagonal size from center), it is far from the
sweet spot for other parameters. On the other hand, on a well-optimized
lens a lot of parameters have sweet spots at the same aperture. [This
follows from an assumption that improving one parameter will negatively
affect others; so with multi-argument optimization a lot of parameters
reach their margin values simultaneously.]

>> However, these "additions" may happen on the "sensor" side of the lens,
>> not on the subject side. So the added elements are actually small in
>> diameter (since the sensor is so much smaller), and thus much cheaper
>> to produce. This will not add a lot to the lens price.

> Looking at prices for microscope lenses I'm not so sure :-)

This particular market can bear? Hmm, maybe this may work... The lengths
of optical paths through the "old" part of the lens will preserve their
mismatches; if the added elements somewhat compensate these mismatches, it
will have much higher optical quality, at a price not much higher than the
original.

> I don't know much about lens designing, but I think that as soon as you
> add a single element, make one aspherical surface or use some glass with
> special dispersion properties you have to redo the entire optimization
> process. That might be not so hard provided the basic design ideas are
> good, but probably it is much pricier to manufacture the whole scaled-up
> design to sufficient accuracy.

Given an overall design (which stuff goes into which groups, etc.), and
given clear goal functions, I would expect the optimization process to be
more or less trivial. So it follows that it is the design/goals part which
must require some skill... ;-)

Yours,
Ilya
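Both numeric claims above are a few lines of arithmetic: full-well capacity scales with sensel area, and each extra readout adds its noise variance in quadrature while shot noise is unchanged. A sketch using the figures from the post (52000 e- per 8.2mkm sensel, 12 e- read noise); the 2000-electron shadow-level signal in the second part is an illustrative assumption:

```python
import math

# Full-well capacity scales with sensel area (fixed electrons per unit area).
full_well = 52_000   # electrons at 8.2 um pitch (Canon 1D MII, from the post)
pitch = 8.2          # sensel pitch in micrometres
target_well = 2_000  # electrons

# Pitch at which the well shrinks to target_well electrons:
min_pitch = pitch * math.sqrt(target_well / full_well)
print(f"minimum pitch for {target_well} e-: {min_pitch:.2f} um")  # ~1.61 um

# Read-noise penalty of splitting one exposure into n sub-exposures:
# each readout adds sigma_r^2 to the variance; shot noise stays the same.
def total_noise(signal_e, n_reads, sigma_r=12.0):
    return math.sqrt(signal_e + n_reads * sigma_r ** 2)

for n in (1, 4, 16):
    print(f"{n:2d} reads: total noise {total_noise(2_000, n):.1f} e-")
```

For the assumed 2000 e- shadow signal, 4 reads raise the total noise by roughly 10% over a single read, while 16 reads raise it by over 40%, which is consistent with the "about 4 subexpositions" limit argued above.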
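The relation between the diffraction cutoff and the mid-level MTF points discussed above can be computed directly from the standard incoherent diffraction-limited MTF of a circular aperture. A sketch (f/8 and 550nm are assumed example values, not from the thread):

```python
import math

def diffraction_mtf(nu, nu_c):
    """Incoherent diffraction-limited MTF of a circular aperture:
    MTF(x) = (2/pi) * (arccos(x) - x*sqrt(1 - x^2)), with x = nu/nu_c."""
    x = nu / nu_c
    if x >= 1.0:
        return 0.0
    return (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

wavelength = 550e-9  # metres (assumed green light)
f_number = 8.0       # assumed sweet-spot aperture
nu_c = 1.0 / (wavelength * f_number)  # cutoff frequency, cycles per metre

def mtf_point(level, nu_c):
    """Bisect for the frequency where the MTF drops to `level`."""
    lo, hi = 0.0, nu_c
    for _ in range(60):
        mid = (lo + hi) / 2
        if diffraction_mtf(mid, nu_c) > level:
            lo = mid
        else:
            hi = mid
    return lo

print(f"cutoff: {nu_c / 1000:.0f} cycles/mm")
print(f"50% MTF at {mtf_point(0.5, nu_c) / nu_c:.3f} of cutoff")
print(f"20% MTF at {mtf_point(0.2, nu_c) / nu_c:.3f} of cutoff")
```

For a perfect lens the 50% and 20% MTF points sit at fixed fractions (about 0.40 and 0.69) of the diffraction cutoff; aberrations pull these fractions down, which is exactly the "percentage of the cutoff frequency" being compared above.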