#221
Why not make the sensor larger?
On Jun 20, 2:26 am, "David J. Littleboy" wrote:
"Neil Harrington" wrote: "Philip Homburg" wrote: There is for DoF and total number of photons (assuming equal total number of pixels). Differences in those things, yes. But the f/ number still doesn't change. In the film days, everyone used the same "sensor" (well, set of sensors) regardless of format size. That meant that the f/number abstraction made sense, since it told you the flux per unit area of film, and you knew how film responded to flux per unit area. The flux at the sensor is the same for any two lenses (of different focal lengths) at the same f/stop. Flux is the number of photons per unit area per unit time. That is why the same f/stop and shutter speed always result in the same "exposure", even if the lens changes. What changes between sensors is the total area, thus also the area/pixel; this leads to the decrease in the number of photons incident on each pixel that you write about below. It's because the number of pixels/ unit area is higher if the sensor is smaller (and the number of pixels the same). The reason you need a bigger aperture to get the same f/stop (ie the same flux) with a longer lens is that you project a larger image at the sensor; so you need to collect more photons to obtain the same flux at the sensor [because you have a larger image, so you need more photons to get the same number of photons/(unit area*unit time)]. But it makes less sense for dcams. The FZ20 folks think they have a 450mm f/2.8 lens, when the flux per pixel is a fraction of what the 30D sees from a 300mm f/5.6 zoom. |
#222
Why not make the sensor larger?
acl wrote:
> On Jun 20, 2:26 am, "David J. Littleboy" wrote:
> [...]
>> But it makes less sense for dcams. The FZ20 folks think they have a
>> 450mm f/2.8 lens, when the flux per pixel is a fraction of what the
>> 30D sees from a 300mm f/5.6 zoom.

I agree. See: The f/ratio Myth and Digital Cameras
http://www.clarkvision.com/photoinfo/f-ratio_myth
#223
Why not make the sensor larger?
Kennedy McEwen wrote:
In article , "Roger N. Clark (change username to rnclark)" writes I did not consider APD arrays because: 1) none are used in any consumer digital camera, I already implied that Roger in my original comment, although I was unintentionally ambiguous - it is the devices that exist, not consumer cameras with them 2) their use is limited to very low light situations e.g. as collected photon count increases above a few tens of photons, the effectiveness over a traditional array diminishes and becomes a saturation problem as light levels continue to increase. (Not that they aren't fantastic in certain situations.) But they would not be appropriate for everyday outdoor nor indoor photography. (We've been over this before.) I don't think we have, since I know I haven't mentioned this technology on here before - you are confusing APD arrays with CCD gain-transfer devices, a low noise gain process prior to readout but certainly not APDs which have low noise gain at the pixel itself. They last crop of such sensors we looked at had read noise around 15 electrons. Several DSLRs have read noise under 4 electrons and their are indications that Canon's new 1D Mark III are under 3 electrons. With such a sensor, by the time you collect 50 photons, the fractional noise contribution from read noise is very small compared to the Poisson noise. As I said, you are making a bad assumption. The device I was considering has a read noise of much less than 5e. It isn't used for low light work, quite the opposite, but for very high speed imaging - with an electronic shutter permitting exposures of less than 10nS, ie. faster than 1/100,000,000th of a second. With that sort of speed you can effectively freeze photons in flight, roughly 6" per ns (assuming that you have synchronised with your own light source, which travels double the path length to the subject, there and back again). Not something your average photographer would need, but it has some very useful benefits in other applications. My point is that every photon converted does not always yield "one, and only one, electron". There are sensor designs that yield far more than one electron per photon. I doubt we will see the devices I am talking about (with APD gains of 1000 or so) getting into consumer cameras, they are the wrong spectral response for a start, but it wouldn't surprise me to see APD arrays with gains of 5 or so being manufactured for consumer applications in due course, making read noise essentially irrelevant in any normal photographic situation. Oops. Yes we haven't discussed those devices before. It's quite interesting technology. What is the frames per second rate on the fastest devices these days? I agree that there are many devices that yield more than one electron, some which have been around for decades, but none are used in current consumer digital cameras. Roger |
#224
Why not make the sensor larger?
On Tue, 19 Jun 2007 23:52:46 +0200, Philip Homburg wrote:
>> Differences in those things, yes. But the f/number still doesn't change.
> The focal length doesn't change either.

ploink. Neither does Mssr. Le Humbug
#225
Why not make the sensor larger?
Bill Funk wrote:
>>> On Mon, 18 Jun 2007 19:31:11 -0600, "Roger N. Clark (change username
>>> to rnclark)" wrote:
>>>> Bill Funk wrote:
>>>>> On Mon, 18 Jun 2007 07:26:14 -0600, "Roger N. Clark (change
>>>>> username to rnclark)" wrote:
>>>>>>> This is possible today, it just takes a supercomputer.
>>>>>> Can a supercomputer get the electrons from each pixel off the
>>>>>> sensor at speeds faster than the speed of light? Consider a 10
>>>>>> megapixel camera, 16 frames in 1/1000 second: that's moving
>>>>>> electrons from pixels at a rate of 160 billion pixels/second.
>>>>> He didn't say, "in 1/1000 second", he said, "at 1/1000 second". The
>>>>> first is a time period in which to take the shots, the second is a
>>>>> shutter speed.
>>>> The point of the assertion was to image action using 16 exposures.
>>> So far, so good.
>>>> If you did 16 1/1000 second exposures back to back with zero delay,
>>>> it would take about 1/60 second.
>>> No question, that's true. However, that is being said by you, and no
>>> one else.
>>>> Many action shots would result in such major changes that there is
>>>> no way image information would exist in all the frames to add them
>>>> together (example: baseball player catching a ball - the ball
>>>> disappears into the mitt).
>>> Again, true.
>>>> Then add delay between images of just 1/1000 second, and you are
>>>> down to ~1/30 second for 16 images, making following action even
>>>> more difficult, and it is still at a rate of about 5 billion pixels
>>>> per second, 50 times faster than the current fastest DSLR. Oh, and
>>>> add to that, this was done at midnight, as if there were enough
>>>> photons in those 16 exposures to make a real picture. To see what
>>>> short night exposures might give, check out:
>>>> http://www.clarkvision.com/photoinfo...ht.photography
>>>> In particular, see Figure 12: a 1/50 second exposure in a moonlit
>>>> scene: the brightest patches received only 7 photons/pixel with a
>>>> 50 mm f/1.8 lens, and middle gray less than 2 photons/pixel.
>>>> Roger
>>> Let's look at what was actually said: "That's not what I mean: I mean
>>> taking 16 shots at 1/1000 second of a race car at midnight at the 24
>>> hours of whatever, ISO 16,000, and then compositing them into one
>>> final product, using all the information in all 16, to produce an
>>> unblurred and not noisy image. The computer decides what is car and
>>> what is background, and matches the images together and corrects them
>>> before averaging them. It's sort of like the matching done when
>>> making a panorama, but the computer has to figure out the motion
>>> vectors properly, and remap each of the 16 images to match. It has to
>>> do this in 3-D because of the different planes in motion."
>>> You're adding the idea of taking these 16 shots in an extremely short
>>> time (about 1/60 second, according to your own post, above); Doug
>>> didn't say that. I understand some of the problems with doing what
>>> Doug proposed, but you added the 1/60 second (later amended to 1/30
>>> second after adding various needed delays), then proceeded to use
>>> that figure to argue. You can't do that. It's called a strawman
>>> argument. I'm not calling you a bad person. I'm just saying your
>>> argument here is one you're making against something you said, not
>>> Doug.
>> I did not "add" or amend times as you say. I simply computed the
>> consequences of the position being put forward by Doug, and then by
>> your interpretation of Doug's idea. You interpreted Doug's position as
>> sequential 1/1000 second exposures. I computed the consequences of
>> that idea if: 1) the readout was instantaneous (giving the 1/60 second
>> value), and 2) the readout was 1/1000 second (which is still many
>> times faster than currently possible). The calculations are not based
>> on something I said, but based on what you and Doug said.
>> Roger
> I think we may have a misunderstanding of what was said here. Let me
> work through it. This line is in one of Doug's posts that started this
> particular thread: "This is possible today, it just takes a
> supercomputer." In order for this to be possible today, we must be
> talking about current cameras. Since no current cameras (even P&Ss) can
> shoot 16 frames in under a second (not to mention in 1/30 second, even
> in video mode), you added that of your own accord. Do I have this
> right? I understand what you wrote, and you are right, as I said, up to
> the point where you added the time element of 1/60 or 1/30 second.

You are leaving out part of the information: that of imaging a race car.
That implies it must be done fast enough to get 16 frames of the race
car. If you take a second to do that and the race car is a fair fraction
of the frame, it has to be done very fast. You must get the frames fast
enough that you have information in each of the 16 frames that can be
combined to improve the final image. Action photography requires very
fast response. Here are two successive frames of 2 cheetahs in action:
http://www.clarkvision.com/galleries...2429b-700.html
http://www.clarkvision.com/galleries...2430b-700.html
(or simply press the "next" button on the page in the first image). These
were done with a 1D Mark II camera at 8.5 frames/second, so 0.118 seconds
between frames. The face of the cheetah on top does not show in the first
frame but does in the second frame. You could not combine the images to
improve the face of the cheetah. If you wanted to use 16 frames, they
would have to be done much faster (time from first to last frame) than
0.118 second. This is typical of action photography.

Roger
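The timing and readout-rate figures in this exchange can be checked with
a few lines of Python (assuming, as above, a 10-megapixel sensor and
1/1000 second exposures):

PIXELS = 10e6
FRAMES = 16
EXPOSURE = 1 / 1000  # seconds per frame

# All 16 frames exposed within a single 1/1000 s window:
burst_rate = PIXELS * FRAMES / EXPOSURE
print(f"16 frames in 1/1000 s: {burst_rate:.3g} pixels/s")  # 1.6e11

# Back-to-back 1/1000 s exposures with zero readout delay:
total_time = FRAMES * EXPOSURE  # 0.016 s, about 1/60 s
print(f"back to back: {total_time:.3f} s total (~1/{1 / total_time:.0f} s)")

# Add a 1/1000 s readout delay between the 16 frames:
total_with_delay = FRAMES * EXPOSURE + (FRAMES - 1) * EXPOSURE  # 0.031 s
rate = PIXELS * FRAMES / total_with_delay
print(f"with delays: {total_with_delay:.3f} s, {rate:.3g} pixels/s")  # ~5.2e9

This reproduces the 160 billion pixels/second, ~1/60 and ~1/30 second,
and ~5 billion pixels/second figures quoted in the posts.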
#226
Why not make the sensor larger?
Alfred Molon wrote:
> But for instance in CMOS sensors charges do not have to travel a couple
> of cm along a row of pixels in a CCD manner - they can be addressed
> individually. The Janesick article does not state that readout times
> for individual CMOS pixels are huge (in fact no figures are provided).
> In any case, the proof that my suggestion actually works is the fact
> that such a multiple-read sensor actually exists:
> http://www.toshiba.com/taec/news/pre...ssp_07_466.jsp
> Also have a look at the PDF white paper at
> http://www.techonline.com/learning/techpaper/197700819

This device (or one very much like it) was already discussed in the
newsgroup. Let's see, 96 dB with 2.2 micron pixels? 96 dB is magically
exactly what 16 bits is (and note you would need at least an 18-bit
converter to deliver 16 bits of real dynamic range). And 72 dB is 12
bits, as if anyone really believed 2.2-micron-pixel cell phone cameras
actually delivered 12 bits of dynamic range. Then let's talk photons. To
get 96 dB of dynamic range, you need to collect enough photons:
65535 * (read noise in electrons). So if read noise matched the lowest
read noise of current consumer dcams (~3 electrons in the Canon 1D Mark
III if reports prove out; ~3.9 electrons in the 30D and 1D Mark II), then
the little 2.2-micron cell phone pixels need to collect at least
3 * 65535 = 196,605 photons. Typical pixel storage is 1,000 to 2,000
electrons per square micron, so the Toshiba technology would be over
40,000 electrons/sq. micron. What's fishy about this report? The previous
thread on this, if I remember correctly, concluded the marketing
department got carried away converting A/D bits and didn't focus on
reality. We'll see when (if) the device actually shows up on the market
and eclipses top-end DSLRs with large pixels, whose dynamic ranges
currently don't quite reach 72 dB (12 bits).

Roger
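A quick Python check of the dynamic-range arithmetic above (the
3-electron read noise and 2.2-micron pixel are the figures quoted in the
post; 65535:1 is the 16-bit reading of 96 dB):

import math

# n bits of dynamic range is a ratio of 2^n:1, i.e. 20*log10(2^n) dB,
# roughly 6.02 dB per bit.
def bits_to_db(bits):
    return 20 * math.log10(2 ** bits)

print(f"16 bits = {bits_to_db(16):.1f} dB, 12 bits = {bits_to_db(12):.1f} dB")

# Full well needed for a 65535:1 range above a given read-noise floor:
read_noise = 3.0                 # electrons, best consumer dcams cited
full_well = 65535 * read_noise   # = 196,605 electrons
pixel_area = 2.2 ** 2            # 2.2 um pixel -> 4.84 um^2
print(f"full well needed: {full_well:,.0f} e- "
      f"({full_well / pixel_area:,.0f} e-/um^2, vs a typical 1,000-2,000)")

The required well depth comes out around 40,000 electrons per square
micron, some twenty times the typical storage density, which is the
"fishy" part of the 96 dB claim.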
#227
Why not make the sensor larger?
The point is that such a multiple-read sensor exists, while you claimed
it could not be done. Its real performance is another issue, one that
will be verified in reviews once a camera is available.

--
Alfred Molon
------------------------------
Olympus 50X0, 7070, 8080, E3X0, E4X0 and E5X0 forum at
http://tech.groups.yahoo.com/group/MyOlympus/
http://myolympus.org/ photo sharing site
#228
Why not make the sensor larger?
In article , "Roger N. Clark (change
username to rnclark)" writes Kennedy McEwen wrote: I doubt we will see the devices I am talking about (with APD gains of 1000 or so) getting into consumer cameras, they are the wrong spectral response for a start, but it wouldn't surprise me to see APD arrays with gains of 5 or so being manufactured for consumer applications in due course, making read noise essentially irrelevant in any normal photographic situation. Oops. Yes we haven't discussed those devices before. It's quite interesting technology. What is the frames per second rate on the fastest devices these days? Those I am aware of run about 2-300Hz frame rate, but are normally used much lower, limited by the repeat rate of the light source. -- Kennedy Yes, Socrates himself is particularly missed; A lovely little thinker, but a bugger when he's ****ed. Python Philosophers (replace 'nospam' with 'kennedym' when replying) |
#229
Why not make the sensor larger?
Alfred Molon wrote:
> In article , says...
>> Yes, an improvement, but not complete elimination of the problem. The
>> penalty being a larger lens mount diameter (relative to the sensor
>> size), and hence compromising the ability to produce a really compact
>> system.
> The Olympus E400 is currently the most compact DSLR on the market.

That may be, but it could have been so much smaller if they had not
chosen such a big lens mount. I used to hold out hope for the 4/3 system,
but I am coming round more to the viewpoint that it's an opportunity
lost.

David
#230
Why not make the sensor larger?
David J. Littleboy wrote:
[]
> But it makes less sense for dcams. The FZ20 folks think they have a
> 450mm f/2.8 lens, when the flux per pixel is a fraction of what the 30D
> sees from a 300mm f/5.6 zoom.
>
> David J. Littleboy
> Tokyo, Japan

What makes those folk happy is that they can take good pictures with the
same FoV as a 432mm lens, with a camera they can afford, weighing just a
few hundred grams, including image stabilisation. They have a 72mm (I
think) lens to achieve that. I hope all the literature says "432mm (35mm
equivalent) f/2.8" and not "432mm f/2.8", which would mislead.

David
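A small Python sketch of the equivalence arithmetic (the 72 mm focal
length is the poster's recollection, and the crop factor of 6 is simply
the approximate value implied by 432/72):

real_focal_mm = 72.0
f_number = 2.8
crop_factor = 6.0  # approximate, small compact sensor vs 35mm film

equiv_focal = real_focal_mm * crop_factor      # 432 mm equivalent FoV
aperture_diameter = real_focal_mm / f_number   # ~25.7 mm physical pupil
print(f"equivalent focal length: {equiv_focal:.0f} mm")
print(f"physical aperture: {aperture_diameter:.1f} mm "
      f"(vs {equiv_focal / f_number:.0f} mm for a true 432mm f/2.8)")

The field of view matches a 432mm lens, but the physical aperture is a
sixth the diameter of a real 432mm f/2.8, which is why labelling it
"432mm f/2.8" without the "equivalent" qualifier misleads.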