#51
Entry level Nikon 24mp?!
In article , Eric Stevens wrote:

> Let's start again.

ok!

> One pixel is a cell collecting light.

yes, but it's actually a sensel. the terms are often confused.

> One Bayer cell consists of 4 pixels: Red, Blue and 2 x Green.

wrong. one bayer pixel is a matrix of multiple sensels, often a 3x3
block of a given sensel plus its 8 surrounding sensels. there can be
more sensels used in the demosaicing too. it's a lot more complex than
people think it is, which is why there's so much confusion.

> Some people call a Bayer Cell a pixel. Other people call the basic
> light collecting elements 'pixels'.

they can call it whatever they want, but a 2x2 block is not how bayer
works.

> That four to one ratio is what lies behind my statement to which you
> object.

it's wrong.
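To see concretely why output pixels don't pair up with 2x2 sensel
blocks, here is a deliberately naive bilinear sketch of green-channel
reconstruction in Python (an illustration only; real in-camera
demosaicing is adaptive and uses larger neighborhoods, as described
above):

import numpy as np

def bilinear_demosaic_green(raw):
    """Fill in missing green values of an RGGB mosaic by averaging
    the four neighboring green sensels (wrap-around at the borders).
    Every interpolated pixel draws on sensels *around* it, so there
    is no fixed 2x2 sensel-to-pixel grouping."""
    h, w = raw.shape
    gmask = np.zeros((h, w), dtype=bool)   # where green filters sit
    gmask[0::2, 1::2] = True               # G in even rows, odd cols
    gmask[1::2, 0::2] = True               # G in odd rows, even cols

    green = np.where(gmask, raw, 0.0)
    total = np.zeros_like(green)
    count = np.zeros_like(green)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        total += np.roll(np.roll(green, dy, 0), dx, 1)
        count += np.roll(np.roll(gmask, dy, 0), dx, 1)
    interp = total / np.maximum(count, 1)  # mean of green neighbors
    return np.where(gmask, raw, interp)

raw = np.random.default_rng(1).uniform(0.0, 1.0, (8, 8))
print(bilinear_demosaic_green(raw).round(2))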
#52
Entry level Nikon 24mp?!
In article , Eric Stevens wrote:

> Four pixels make a Bayer cell.

no they definitely do not.

> 2.7 million Bayer cells = 2.7 x 4 million pixels = 10.8Mp
>
> It all depends on how you define cell and pixel.

no it doesn't.

>>> Wrong. :-(
>>
>> it's not wrong.
>
> 'tis.

no it's not wrong. you're confused.
#53
Entry level Nikon 24mp?!
In article , Eric Stevens wrote:

> I suspect from what you have said that every active light-sensitive
> pixel was surrounded by a 'binned' inactive pixel. The result would
> be an array of 4 x 4 pixels such as:
>
>   X R X G
>   X X X X
>   X G X B
>   X X X X
>
> That would give 4 active pixels for every 16.

that's not how they did it. all they did was take a sensor that had
10.8 megapixels and cover every 2x2 block of sensels with a red, green
or blue filter plus a microlens, rather than cover a single sensel
with a filter/microlens, as in a more typical implementation. instead
of:

  r g r g
  g b g b
  r g r g

you had:

  r r g g r r g g
  r r g g r r g g
  g g b b g g b b
  g g b b g g b b
  r r g g r r g g
  r r g g r r g g

there used to be a closeup picture of the sensor where you could see
the individual sensels but it's gone.
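If you want to generate the two layouts for yourself, here is a tiny
numpy sketch (my own illustration of the description above, not
anything from Nikon's documentation):

import numpy as np

# Classic Bayer: one colour filter per sensel (RGGB unit cell).
unit = np.array([["r", "g"],
                 ["g", "b"]])
classic_cfa = np.tile(unit, (2, 2))        # a 4x4 patch of the mosaic

# The quad layout described above: each filter covers a 2x2 sensel
# block, so the unit cell is expanded 2x each way before tiling.
quad_unit = np.repeat(np.repeat(unit, 2, axis=0), 2, axis=1)
quad_cfa = np.tile(quad_unit, (2, 2))      # an 8x8 patch, as shown

for name, cfa in (("classic", classic_cfa), ("quad", quad_cfa)):
    print(name)
    print("\n".join(" ".join(row) for row in cfa))
    print()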
#55
Entry level Nikon 24mp?!
On Sun, 15 Apr 2012 23:47:11 -0400, nospam wrote:

> In article , Eric Stevens wrote:
>
>> I suspect from what you have said that every active light-sensitive
>> pixel was surrounded by a 'binned' inactive pixel. The result would
>> be an array of 4 x 4 pixels such as:
>>
>>   X R X G
>>   X X X X
>>   X G X B
>>   X X X X
>>
>> That would give 4 active pixels for every 16.
>
> that's not how they did it. all they did was take a sensor that had
> 10.8 megapixels and cover every 2x2 block of sensels with a red,
> green or blue filter plus a microlens, rather than cover a single
> sensel with a filter/microlens, as in a more typical implementation.
> instead of:
>
>   r g r g
>   g b g b
>   r g r g
>
> you had:
>
>   r r g g r r g g
>   r r g g r r g g
>   g g b b g g b b
>   g g b b g g b b
>   r r g g r r g g
>   r r g g r r g g
>
> there used to be a closeup picture of the sensor where you could see
> the individual sensels but it's gone.

Right. I get it. Can you now explain what Floyd meant by 'binned'?

Regards,

Eric Stevens
#56
Entry level Nikon 24mp?!
Eric Stevens wrote:
> Can you now explain what Floyd meant by 'binned'?

http://en.wikipedia.org/wiki/Data_binning

It's badly explained, unfortunately. Basically, what you do is trade
off resolution for sensitivity and less read noise.

Instead of passing every pixel's captured electrons to the A/D
converter singly, you combine the charge of the pixels you are binning
and send the combined charge to the A/D converter. If you bin 2x2, you
have about[1] 4 times the electrons to convert to a DN (digital
number), which means a lot when the A/D converter needs relatively
many electrons compared to what your pixel captures. Additionally, if
you add them up now (instead of after the A/D converter, e.g. by
downsampling), you get the read noise only once, instead of 4 times,
which means SQRT(4) = 2 times less read noise.

Classic Bayer pattern sensors are very hard to bin, since you'd have
to bin non-adjacent cells (so just transferring the charge to the next
cell in a CCD doesn't work) and you get strange artifacts because you
combine a rather large area, so with Bayer pattern sensors you will
usually just downscale. (Note that a

  r r g g
  r r g g
  g g b b
  g g b b

pattern isn't a classic Bayer pattern unless you do bin it properly
2x2.)

Binning is therefore usually found in scientific sensors, which are
often monochromatic and often get their colour information (if any)
from a colour wheel --- which will often not just include red, green
and blue, but, depending on the task, also other bandpass, narrow
bandpass and IR or UV filters. Binning also reduces the amount of data
created, which, when it has to be transmitted, can be a problem when
you're a million miles from Earth.

http://www.ccd.com/ccd103.html
http://www.andor.com/learning/digita...ras/?docid=320
http://www.starrywonders.com/binning.html
http://www.noao.edu/outreach/aop/glossary/binning.html
http://www.photometrics.com/resource...ne/binning.php
http://www.roperscientific.de/binning.html

-Wolfgang

[1] photon noise!
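To put Wolfgang's arithmetic into code: a small numpy simulation (the
25 e- mean signal and 10 e- read noise are assumed round numbers,
picked only to make the effect visible) comparing on-chip 2x2 binning
against reading out each sensel and summing afterwards:

import numpy as np

rng = np.random.default_rng(0)

signal_e = 25.0      # mean electrons per sensel (dim scene)
read_e = 10.0        # read noise per readout, electrons RMS
n = 200_000          # number of 2x2 blocks simulated

# Photon (shot) noise: each sensel's electron count is Poisson.
sensels = rng.poisson(signal_e, size=(n, 4)).astype(float)

# (a) Bin on-chip: sum the charge first, read out once.
binned = sensels.sum(axis=1) + rng.normal(0.0, read_e, n)

# (b) Read out each sensel (read noise added 4x), sum digitally.
summed = (sensels + rng.normal(0.0, read_e, (n, 4))).sum(axis=1)

for name, x in (("binned", binned), ("summed after A/D", summed)):
    print(f"{name:17s} mean={x.mean():6.1f} std={x.std():5.2f} "
          f"SNR={x.mean() / x.std():4.2f}")

# Expected std: sqrt(4*25 + 10^2)   ~ 14.1 when binning on-chip,
#               sqrt(4*25 + 4*10^2) ~ 22.4 when summing after
# readout -- the read-noise contribution is SQRT(4) = 2x larger.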
#57
Entry level Nikon 24mp?!
Floyd L. Davidson wrote:
> Paul Furman wrote:
>> Floyd L. Davidson wrote:
>>> Eric wrote:
>>>> On Tue, 10 Apr 2012 17:33:35 +1000, wrote:
>>>>> "Eric wrote:
>>>>>> The Nikon D1 was 2.4 Mp.
>>>
>>> The actual sensor has 10,655,552 sensor locations, and 4 each of
>>> them are binned in order to produce a single value for the Bayer
>>> Encoded RAW file. The binning is done in hardware.
>>
>> They should do that with any new 24MP camera, as an option at
>> least. It's probably not possible to implement as a switchable
>> option in hardware without introducing serious noise.
>
> Noise is not the problem. Electronics, not being a CCD and having
> to bin over large spaces (to bridge between 4 reds or blues) is.

Image quality is too, then.

>> Some Canons have a smaller optional raw format.
>
> Canon's "smaller option raw format" is not a raw format.

It is a raw format, but not in the sense that you get the direct
output of each sensel. In practice, those that use it get the
advantages they use RAW for, but without having to store all the data
they don't care about in the first place.

>> What's not clear to me is whether that makes a sharper low res
>> image in the same way reducing after raw conversion will do.
>
> Resizing to a smaller resolution does not make an image "sharper"
> as such.

It thins the borders between adjacent areas, just as wavelet sharpen
does. So not only the acutance can rise.

> It removes high frequency detail,

which, depending on the image and the amount of downsizing, isn't
there in the first place (excluding high frequency noise). From Bayer
patterns being very good but not perfect in the restoration of high
frequencies (SQRT(2) of the pixel-to-pixel distance) over the
roll-off of AA filters down to camera shake, slight misfocus and
subject movement ... there can be many reasons why high frequency
detail simply isn't there in the first place.

> and in that sense reduces sharpness.

Which is irrelevant, unless you're actually seeing it at a distance
and resolution where such detail, if it were there, would be visible.
Most images just aren't viewed at 100%.

>> Most photos aren't all that sharp at full resolution, if for no
>> other reason than the existence of an antialiasing filter.
>
> That isn't really true. The AA filter, in a properly processed
> image, has just about exactly the same amount of high frequency
> detail as a similar camera without the AA filter, except that at
> frequencies very close to the Nyquist limit the signal to noise
> ratio will be slightly reduced on the camera with the AA filter
> (and conversely, on the camera without the AA filter the SNR will
> be reduced by aliasing distortion throughout the frequency
> spectrum).

That assumes a hard cut-off of the AA filter. I understand that
that's pretty hard to do in the real world.

> In any case the reason images are apparently not sharp when viewed
> at 100% is because a Bayer Color Filter encoded camera simply
> cannot produce a tone transition in less than some number of
> pixels, and the larger the demosaicing matrix the higher the
> minimum for transition, as well as the more accurate the colors.

Actually, you are wrong here. Intelligent demosaicing detects borders
and doesn't smear them, irrespective of their size. It's not a simple
averaging process.

> Of course that isn't really "sharpness", but acutance, and again it
> can be increased with either a high pass filter (Sharpen) or
> application of Unsharp Mask.

Of course that's really sharpness, as such an averaging process would
reduce or destroy the visibility of e.g. thin parallel lines, thus
reducing resolution drastically.

-Wolfgang
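As a quick illustration of the 'thinner borders' point, a toy numpy
sketch (synthetic soft edge and a plain 2x box downsample; these are
my own assumptions, not a claim about any particular resampler):

import numpy as np

edge = np.repeat([0.2, 0.8], 64).astype(float)         # a step edge
edge = np.convolve(edge, np.ones(5) / 5, mode="same")  # AA/lens blur

down = edge.reshape(-1, 2).mean(axis=1)                # 2x downsample

def transition_width(x, lo=0.3, hi=0.7):
    """Pixels spent crossing 30%..70% of the edge swing, measured
    away from the array ends to avoid convolution edge effects."""
    mid = x[len(x) // 4 : 3 * len(x) // 4]
    return np.count_nonzero((mid > lo) & (mid < hi))

print("full-res transition :", transition_width(edge), "px")  # 4 px
print("downsized transition:", transition_width(down), "px")  # 2 px

The border occupies half as many output pixels, which is the acutance
gain under discussion; the high-frequency content above the new
Nyquist limit is simply gone, whether or not it carried detail.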
#58
Entry level Nikon 24mp?!
Wolfgang Weisselberg wrote:
> Floyd L. Davidson wrote:
>> Paul Furman wrote:
>>> Floyd L. Davidson wrote:
>>>> Eric wrote:
>>>>> On Tue, 10 Apr 2012 17:33:35 +1000, wrote:
>>>>>> "Eric wrote:
>>>>>>> The Nikon D1 was 2.4 Mp.
>>>>
>>>> The actual sensor has 10,655,552 sensor locations, and 4 each of
>>>> them are binned in order to produce a single value for the Bayer
>>>> Encoded RAW file. The binning is done in hardware.
>>>
>>> They should do that with any new 24MP camera, as an option at
>>> least. It's probably not possible to implement as a switchable
>>> option in hardware without introducing serious noise.
>>
>> Noise is not the problem. Electronics, not being a CCD and having
>> to bin over large spaces (to bridge between 4 reds or blues) is.
>
> Image quality is too, then.

Yes. That adds up to *noise* is the problem.

>>> Some Canons have a smaller optional raw format.
>>
>> Canon's "smaller option raw format" is not a raw format.
>
> It is a raw format, but not in the sense that you get the direct
> output of each sensel.

It simply is not a raw format. It is interpolated data, not raw data
that needs to be interpolated.

> In practice, those that use it get the advantages they use RAW for,
> but without having to store all the data they don't care about in
> the first place.

It's just a TIFF RGB format. Whoopee. They could shoot JPEG to get
the same effect...

>>> What's not clear to me is whether that makes a sharper low res
>>> image in the same way reducing after raw conversion will do.
>>
>> Resizing to a smaller resolution does not make an image "sharper"
>> as such.
>
> It thins the borders between adjacent areas, just as wavelet
> sharpen does. So not only the acutance can rise.
>
>> It removes high frequency detail,
>
> which, depending on the image and the amount of downsizing, isn't
> there in the first place (excluding high frequency noise).

Well, it's true that if you shoot pictures of gray cards, there isn't
much high frequency detail. Otherwise there is.

> From Bayer patterns being very good but not perfect in the
> restoration of high frequencies (SQRT(2) of the pixel-to-pixel
> distance) over the roll-off of AA filters down to camera shake,
> slight misfocus and subject movement ... there can be many reasons
> why high frequency detail simply isn't there in the first place.

But for good photographers, who use good technique, there is almost
always enough high frequency detail to make a very visible
difference. That is *precisely* why applying Sharpen almost always
has a significant effect.

>> and in that sense reduces sharpness.
>
> Which is irrelevant, unless you're actually seeing it at a distance
> and resolution where such detail, if it were there, would be
> visible. Most images just aren't viewed at 100%.

Prints commonly are.

>>> Most photos aren't all that sharp at full resolution, if for no
>>> other reason than the existence of an antialiasing filter.
>>
>> That isn't really true. The AA filter, in a properly processed
>> image, has just about exactly the same amount of high frequency
>> detail as a similar camera without the AA filter, except that at
>> frequencies very close to the Nyquist limit the signal to noise
>> ratio will be slightly reduced on the camera with the AA filter
>> (and conversely, on the camera without the AA filter the SNR will
>> be reduced by aliasing distortion throughout the frequency
>> spectrum).
>
> That assumes a hard cut-off of the AA filter. I understand that
> that's pretty hard to do in the real world.

It is virtually impossible to do, but that is not required for what
was described. The high frequency components just below the Nyquist
Limit are reduced by the AA filter, but by no means eliminated,
because the filter is not particularly sharp.

Because of that, the actual frequency where maximum frequency
distortion occurs will intentionally be placed just above the Nyquist
Limit. Most designs will hit a minimum and at higher frequencies will
have a slope similar to the slope at lower frequencies. (On some that
slope may be very steep at frequencies well above the Nyquist Limit,
but hopefully that will also be at the upper limits for any given
lens that might be used, so between the filter and the lens there is
a very low response level at those frequencies too.)

Whatever, the AA filter is a Low Pass filter, and a High Pass filter
can be applied in post processing with an almost opposite frequency
response to perfectly correct the rolloff provided by the AA filter.
At frequencies just below the Nyquist Limit (and be aware that there
are absolutely *no* frequencies above the Limit) the correction from
the Sharpen HP filter is at its highest. The filter increases noise
just as much as it increases desired signal, and that is why the SNR
is worse than it would be without the AA filter but the frequency
response will be almost precisely the same. Of course at lower and
lower frequencies there is less and less correction by the Sharpen
filter, and therefore less and less change to the SNR compared to an
image taken without the AA filter.

>> In any case the reason images are apparently not sharp when viewed
>> at 100% is because a Bayer Color Filter encoded camera simply
>> cannot produce a tone transition in less than some number of
>> pixels, and the larger the demosaicing matrix the higher the
>> minimum for transition, as well as the more accurate the colors.
>
> Actually, you are wrong here. Intelligent demosaicing detects
> borders and doesn't smear them, irrespective of their size. It's
> not a simple averaging process.

You simply cannot get a one pixel length tone transition without
applying a Sharpen HP or USM filter. Yes, different demosaicing
algorithms can produce sharper results, but absent some form of
sharpen, there are no truly "sharp" transitions.

>> Of course that isn't really "sharpness", but acutance, and again
>> it can be increased with either a high pass filter (Sharpen) or
>> application of Unsharp Mask.
>
> Of course that's really sharpness, as such an averaging process
> would reduce or destroy the visibility of e.g. thin parallel lines,
> thus reducing resolution drastically.

There is no actual increase in resolution. All that happens with
either USM or an HP filter is that the difference between existing
tone transition edges is increased.

--
Floyd L. Davidson            http://www.apaflo.com/
Ukpeagvik (Barrow, Alaska)
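Floyd's low-pass/high-pass argument is easy to model in a few lines.
A 1-D numpy sketch follows, with a Gaussian rolloff standing in for a
real AA filter's response (an assumption for illustration; actual
birefringent filters have a different curve):

import numpy as np

rng = np.random.default_rng(2)

n = 4096
freq = np.fft.rfftfreq(n)              # 0 .. 0.5 (Nyquist)
detail = rng.standard_normal(n)        # stand-in for scene detail
noise = 0.1 * rng.standard_normal(n)   # read/photon noise stand-in

aa = np.exp(-(freq / 0.35) ** 2)       # gentle rolloff, nonzero at 0.5

captured = np.fft.irfft(np.fft.rfft(detail) * aa) + noise
restored = np.fft.irfft(np.fft.rfft(captured) / aa)  # "Sharpen" boost

def band_power(x, lo=0.4, hi=0.5):
    """Mean spectral power in a band just below Nyquist."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return spec[(freq >= lo) & (freq <= hi)].mean()

print("original detail  :", round(band_power(detail)))
print("after AA + boost :", round(band_power(restored)))
print("HP boost at f=0.5: x%.1f" % (1.0 / aa[-1]))

The high-frequency response comes back to roughly where it started,
but the noise in that band is multiplied by the same boost, so the
restored band power overshoots: frequency response restored, SNR near
Nyquist worse, just as described.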
#59
Entry level Nikon 24mp?!
On Mon, 16 Apr 2012 13:01:21 +0200, Wolfgang Weisselberg wrote:

> Eric Stevens wrote:
>> Can you now explain what Floyd meant by 'binned'?
>
> http://en.wikipedia.org/wiki/Data_binning
>
> It's badly explained, unfortunately. Basically, what you do is trade
> off resolution for sensitivity and less read noise.
>
> Instead of passing every pixel's captured electrons to the A/D
> converter singly, you combine the charge of the pixels you are
> binning and send the combined charge to the A/D converter. If you
> bin 2x2, you have about[1] 4 times the electrons to convert to a DN
> (digital number), which means a lot when the A/D converter needs
> relatively many electrons compared to what your pixel captures.
> Additionally, if you add them up now (instead of after the A/D
> converter, e.g. by downsampling), you get the read noise only once,
> instead of 4 times, which means SQRT(4) = 2 times less read noise.

Right. I get it. That's the statistical use of the term 'binned'. The
problem I had is that there are several different
meanings/applications for the term 'binned' and none of them seemed
to make sense in the particular context. The closest I could get was
'binned' as in 'dumped', i.e. the data was ignored.

> Classic Bayer pattern sensors are very hard to bin, since you'd have
> to bin non-adjacent cells (so just transferring the charge to the
> next cell in a CCD doesn't work) and you get strange artifacts
> because you combine a rather large area, so with Bayer pattern
> sensors you will usually just downscale.

Understood.

> (Note that a
>
>   r r g g
>   r r g g
>   g g b b
>   g g b b
>
> pattern isn't a classic Bayer pattern unless you do bin it properly
> 2x2.)
>
> Binning is therefore usually found in scientific sensors, which are
> often monochromatic and often get their colour information (if any)
> from a colour wheel --- which will often not just include red,
> green and blue, but, depending on the task, also other bandpass,
> narrow bandpass and IR or UV filters. Binning also reduces the
> amount of data created, which, when it has to be transmitted, can
> be a problem when you're a million miles from Earth.
>
> http://www.ccd.com/ccd103.html
> http://www.andor.com/learning/digita...ras/?docid=320
> http://www.starrywonders.com/binning.html
> http://www.noao.edu/outreach/aop/glossary/binning.html
> http://www.photometrics.com/resource...ne/binning.php
> http://www.roperscientific.de/binning.html
>
> -Wolfgang
>
> [1] photon noise!

Many thanks.

Regards,

Eric Stevens
#60
Entry level Nikon 24mp?!
Floyd L. Davidson wrote:
> Wolfgang Weisselberg wrote:
>> Floyd L. Davidson wrote:
>>> Paul Furman wrote:
>>>> Floyd L. Davidson wrote:
>>>>> Eric wrote:
>>>>>> On Tue, 10 Apr 2012 17:33:35 +1000, wrote:
>>>>>>> "Eric wrote:
>>>>>>>> The Nikon D1 was 2.4 Mp.
>>>>>
>>>>> The actual sensor has 10,655,552 sensor locations, and 4 each
>>>>> of them are binned in order to produce a single value for the
>>>>> Bayer Encoded RAW file. The binning is done in hardware.
>>>>
>>>> They should do that with any new 24MP camera, as an option at
>>>> least. It's probably not possible to implement as a switchable
>>>> option in hardware without introducing serious noise.
>>>
>>> Noise is not the problem. Electronics, not being a CCD and having
>>> to bin over large spaces (to bridge between 4 reds or blues) is.
>>
>> Image quality is too, then.
>
> Yes. That adds up to *noise* is the problem.

That's like reasoning that drugs are the problem for accidents on icy
roads. At least *try* to understand that not all image problems are
noise and that not all technological problems result in noise.

>>>> Some Canons have a smaller optional raw format.
>>>
>>> Canon's "smaller option raw format" is not a raw format.
>>
>> It is a raw format, but not in the sense that you get the direct
>> output of each sensel.
>
> It simply is not a raw format.

It is not an image, it has to be cooked to provide an image,
therefore it's raw.

> It is interpolated data, not raw data that needs to be interpolated.

Could you expand on that?

>> In practice, those that use it get the advantages they use RAW
>> for, but without having to store all the data they don't care
>> about in the first place.
>
> It's just a TIFF RGB format. Whoopee. They could shoot JPEG to get
> the same effect...

News for you: CR2 is a JPEG RGB format.

>>>> What's not clear to me is whether that makes a sharper low res
>>>> image in the same way reducing after raw conversion will do.
>>>
>>> Resizing to a smaller resolution does not make an image "sharper"
>>> as such.
>>
>> It thins the borders between adjacent areas, just as wavelet
>> sharpen does. So not only the acutance can rise.

No reply?

>> It removes high frequency detail, which, depending on the image
>> and the amount of downsizing, isn't there in the first place
>> (excluding high frequency noise).
>
> Well, it's true that if you shoot pictures of gray cards, there
> isn't much high frequency detail. Otherwise there is.

Please do some FT on real world images, and prove your claim.

>> From Bayer patterns being very good but not perfect in the
>> restoration of high frequencies (SQRT(2) of the pixel-to-pixel
>> distance) over the roll-off of AA filters down to camera shake,
>> slight misfocus and subject movement ... there can be many reasons
>> why high frequency detail simply isn't there in the first place.
>
> But for good photographers, who use good technique, there is almost
> always enough high frequency detail to make a very visible
> difference.

A "visible difference" between what and what? Do try a double blind
test on printed (or web sized) images, one 'original' from Bayer and
one downsized to 70% ...

> That is *precisely* why applying Sharpen almost always has a
> significant effect.

'significant' meaning you can measure the difference, but you cannot
see it? 'significant' meaning a difference of several orders of
magnitude?

BTW, whether Sharpen has a 'significant effect' depends very much on
where the high pass filter cuts off lower frequencies.

>>> and in that sense reduces sharpness.
>>
>> Which is irrelevant, unless you're actually seeing it at a
>> distance and resolution where such detail, if it were there, would
>> be visible. Most images just aren't viewed at 100%.
>
> Prints commonly are.

'viewing distance'. 'loupe'.

>>>> Most photos aren't all that sharp at full resolution, if for no
>>>> other reason than the existence of an antialiasing filter.
>>>
>>> That isn't really true. The AA filter, in a properly processed
>>> image, has just about exactly the same amount of high frequency
>>> detail as a similar camera without the AA filter, except that at
>>> frequencies very close to the Nyquist limit the signal to noise
>>> ratio will be slightly reduced on the camera with the AA filter
>>> (and conversely, on the camera without the AA filter the SNR will
>>> be reduced by aliasing distortion throughout the frequency
>>> spectrum).
>>
>> That assumes a hard cut-off of the AA filter. I understand that
>> that's pretty hard to do in the real world.
>
> It is virtually impossible to do, but that is not required for what
> was described. The high frequency components just below the Nyquist
> Limit are reduced by the AA filter, but by no means eliminated,
> because the filter is not particularly sharp.
>
> Because of that, the actual frequency where maximum frequency
> distortion occurs will intentionally be placed just above the
> Nyquist Limit. Most designs will hit a minimum and at higher
> frequencies will have a slope similar to the slope at lower
> frequencies. (On some that slope may be very steep at frequencies
> well above the Nyquist Limit, but hopefully that will also be at
> the upper limits for any given lens that might be used, so between
> the filter and the lens there is a very low response level at those
> frequencies too.)
>
> Whatever, the AA filter is a Low Pass filter, and a High Pass
> filter can be applied in post processing with an almost opposite
> frequency response to perfectly correct the rolloff provided by the
> AA filter. At frequencies just below the Nyquist Limit (and be
> aware that there are absolutely *no* frequencies above the Limit)
> the correction from the Sharpen HP filter is at its highest. The
> filter increases noise just as much as it increases desired signal,
> and that is why the SNR is worse than it would be without the AA
> filter but the frequency response will be almost precisely the
> same. Of course at lower and lower frequencies there is less and
> less correction by the Sharpen filter, and therefore less and less
> change to the SNR compared to an image taken without the AA filter.

So, where do I get a high pass filter that works exactly to
counteract the AA filter, and why do other filters provide better
results when the result is inspected visually?

>>> In any case the reason images are apparently not sharp when
>>> viewed at 100% is because a Bayer Color Filter encoded camera
>>> simply cannot produce a tone transition in less than some number
>>> of pixels, and the larger the demosaicing matrix the higher the
>>> minimum for transition, as well as the more accurate the colors.
>>
>> Actually, you are wrong here. Intelligent demosaicing detects
>> borders and doesn't smear them, irrespective of their size. It's
>> not a simple averaging process.
>
> You simply cannot get a one pixel length tone transition without
> applying a Sharpen HP or USM filter. Yes, different demosaicing
> algorithms can produce sharper results, but absent some form of
> sharpen, there are no truly "sharp" transitions.

What is 'truly "sharp"'? A single hot pixel? An aliased line? I see
lots of transitions from A to B with just one pixel in between ---
and since I cannot align the camera pixel borders perfectly with the
image ...

Actually, looking at resolution tests, one gets transitions of 1.5
and less pixels.

>>> Of course that isn't really "sharpness", but acutance, and again
>>> it can be increased with either a high pass filter (Sharpen) or
>>> application of Unsharp Mask.
>>
>> Of course that's really sharpness, as such an averaging process
>> would reduce or destroy the visibility of e.g. thin parallel
>> lines, thus reducing resolution drastically.
>
> There is no actual increase in resolution. All that happens with
> either USM or an HP filter is that the difference between existing
> tone transition edges is increased.

Ah, how exactly do you define 'resolution' and how do you define MTF?

-Wolfgang
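Since the thread ends on 'how do you define MTF': one conventional
definition is the Fourier magnitude of the line spread function (the
derivative of an edge profile). A toy numpy sketch (the logistic edge
and the USM amount are arbitrary assumptions) shows what unsharp
masking does and does not do to it:

import numpy as np

n = 256
x = np.arange(n) - n / 2
esf = 1.0 / (1.0 + np.exp(-x / 2.0))    # blurred edge profile (ESF)

# Unsharp mask: original + amount * (original - blurred original).
padded = np.pad(esf, 3, mode="edge")    # avoid boundary artifacts
blur = np.convolve(padded, np.ones(7) / 7, mode="valid")
usm = esf + 1.5 * (esf - blur)

def mtf(edge):
    lsf = np.gradient(edge)             # line spread function
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]                     # normalise DC response to 1

f = np.fft.rfftfreq(n)
for band in (0.05, 0.15, 0.30):
    i = int(np.argmin(np.abs(f - band)))
    print(f"f={band:.2f}  plain={mtf(esf)[i]:.3f}  "
          f"USM={mtf(usm)[i]:.3f}")

USM multiplies the response at each frequency by a factor of at most
1 + amount, so mid frequencies gain visibly steeper edges (acutance),
but where the recorded MTF is already near zero, 2.5 x nothing is
still nothing: no new resolution, which is the acutance-versus-
resolution distinction argued above.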