#21
Questions about isolating green channel in RAW data
Alan Browne wrote:
> Or get the dcraw source code and modify it to do as you need.
> (Programming skills needed.)

Use the -d or -D options to dcraw, generate a PGM format file, and then
convert it from a binary format to an ASCII format. For example:

  dcraw -D DSC_0000.NEF

will produce a monochrome image, DSC_0000.PGM, that does not scale the
RGB values (use of the -d option would scale the R and B values).
ImageMagick's convert tool can be used to convert that to an ASCII
format:

  convert DSC_0000.PGM -compress none 0000.PGM

The result is an ASCII text file that can be manipulated with text
tools rather than requiring C or C++ programming ability.

Another technique would be to produce an interpolated image:

  dcraw -6 -W -g 1 1 DSC_0000.NEF

will produce a 16-bit linear encoded PPM file, which can also be
converted to an ASCII format:

  convert DSC_0000.PPM -compress none 0000.PPM

With the PGM format it would be necessary to determine the Bayer Color
Filter pattern used by the particular camera to figure out how to
remove/extract R, G, or B raw data. With the PPM format the raw data is
interpolated, and only the format of the PPM data needs to be
understood.

--
Floyd L. Davidson http://www.apaflo.com/
Ukpeagvik (Barrow, Alaska)
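As a sketch of the text-tool approach Floyd describes: once `convert ... -compress none` has produced an ASCII (P2) PGM, the raw values can be read with a few lines of Python instead of C. The RGGB tile layout below is an assumption for illustration only; as the post says, the actual Bayer pattern must be determined for the particular camera.

```python
def read_pgm_ascii(text):
    """Parse an ASCII (P2) PGM, e.g. from `convert x.PGM -compress none y.PGM`."""
    tokens = [t for line in text.splitlines()
              for t in line.split('#', 1)[0].split()]  # drop PGM comments
    assert tokens[0] == 'P2', 'expected an ASCII PGM'
    width, height, maxval = map(int, tokens[1:4])
    values = [int(t) for t in tokens[4:4 + width * height]]
    return width, height, maxval, values

def green_sensels(width, height, values):
    """Pick out the raw green photosites, ASSUMING an RGGB 2x2 tile
    (camera-dependent -- check your sensor's CFA layout first)."""
    return [values[y * width + x]
            for y in range(height) for x in range(width)
            if (x + y) % 2 == 1]  # in RGGB, green sits where x+y is odd
```

For an R/B extraction the condition changes to the even sites of the assumed tile; nothing else in the parser needs to know the pattern.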
#22
Questions about isolating green channel in RAW data
In article , Martin Brown wrote:

>>> The calculation is an estimate.

>> which means it's not faked.

> It wasn't actually measured. It is interpolated from the data that you
> do have. This is usually sufficient for most images but not always.

that's calculating it, not faking it.

>>> IOW it is not "precise" because the actual information at that
>>> photosite is unknown.

>> the calculations are very precise. the error can be measured and it's
>> *very* low. it's in no way 'guessed' or 'faked'.

> Precise calculations are meaningless in the absence of data. The
> estimate could be accurate to 1000 decimal places and that would not
> make it any more 'true'.

there's plenty of data. millions of sampling points.

> But still no green or blue data where red was measured, and all
> permutations of these colour exclusions.

however, the green & blue data for a red pixel can be calculated from
neighboring pixels. is it perfect? no, but nothing is.

>>> Since it is an estimate of what would have been in that location had
>>> the information not been filtered out, it remains an estimate no
>>> matter how low the error may be.

>> which means it's not faked.

> It is always an inferred value based on the data that you do have. It
> could still be wrong and would certainly *BE* wrong if the target was
> one of the pathological test cards so beloved of Foveon supporters.

it's still not faked. and the edge cases can be discarded. you could
also have aliasing errors, even on a monochrome sensor.

>>> The error cannot be measured since the data was thrown away.

>> you have the source image and the output of the demosaic, so the
>> error can be calculated.

> Only *if* you actually have a source image that was fully sampled in
> the first place. That is how the algorithms are tuned against
> real-world images - but they can still struggle a bit with white picket
> fences at shallow angles to the sensor array. The Moire fringing in
> chroma is very hard to remove without losing some real image data too.

> If you use a Bayer sampled CCD sensor then you are making assumptions
> about the target image that are usually valid, but there are situations
> where the Bayer demosaic cannot get the right answer. These situations
> are usually contrived but they do sometimes occur in real life.

there are always edge cases. nothing is perfect.

> An example is imaging the sun in the pure red light of H-alpha 656nm,
> which presents serious problems to a Bayer array demosaicer. Early
> Kodak ones would go completely haywire on this source material.

how often does that happen in the real world?
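The "calculated from neighboring pixels" step being argued over can be made concrete. A minimal sketch of the simplest (bilinear) case, averaging the four green neighbours of a red photosite; production demosaicers are edge-aware and far more elaborate, but this is the baseline both sides are describing:

```python
def green_at_red(raw, x, y, width, height):
    """Estimate green at a red photosite by averaging its 4 green
    neighbours (the naive bilinear step; real algorithms do better)."""
    neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    # keep only neighbours that fall inside the sensor
    vals = [raw[ny * width + nx] for nx, ny in neighbours
            if 0 <= nx < width and 0 <= ny < height]
    return sum(vals) / len(vals)
```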
#23
Questions about isolating green channel in RAW data
On 2013.06.04 20:14, nospam wrote:
> In article , Alan Browne wrote:

>> The calculation is an estimate.

> which means it's not faked.

Here it means "not the truth".

>> IOW it is not "precise" because the actual information at that
>> photosite is unknown.

> the calculations are very precise. the error can be measured and it's
> *very* low. it's in no way 'guessed' or 'faked'.

>> Precise calculations are meaningless in the absence of data. The
>> estimate could be accurate to 1000 decimal places and that would not
>> make it any more 'true'.

> there's plenty of data. millions of sampling points.

Proof that you do not get it. Filling the blue channel at a given point
only requires information from the several pixels around it. You don't
need (certainly don't WANT) millions of sampling points.

>> Since it is an estimate of what would have been in that location had
>> the information not been filtered out, it remains an estimate no
>> matter how low the error may be.

> which means it's not faked.

>> The error cannot be measured since the data was thrown away.

> you have the source image and the output of the demosaic, so the error
> can be calculated.

Absolutely not. You do not have the truth about the R & B channels at
the pixel where you filtered to get green only. The information (the
truth) was left behind when the photo was taken.

There is one camera that solves this problem, but it is limited in
scope (it must be on a tripod): the Hasselblad H3DII-39MS, which takes 3
full-frame shots of each color + a 4th shot as a registration check. It
then composes output RGB for each location with real, sampled,
unfiltered colour. This is to get over colour interpolation as well as
softness introduced by the Bayer pattern. It is used to get very high
quality photos of artwork, museum pieces and so on. And it demonstrates
(see the comparison photos on the Hasselblad site) that interpolated
data contributes to loss of contrast.

--
"A Canadian is someone who knows how to have sex in a canoe."
-Pierre Berton
#24
Questions about isolating green channel in RAW data
In article , Alan Browne wrote:

>>> Precise calculations are meaningless in the absence of data. The
>>> estimate could be accurate to 1000 decimal places and that would not
>>> make it any more 'true'.

>> there's plenty of data. millions of sampling points.

> Proof that you do not get it. Filling the blue channel at a given
> point only requires information from the several pixels around it. You
> don't need (certainly don't WANT) millions of sampling points.

i'm not saying millions of samples for each pixel. i'm saying there are
millions of pixels, so there's plenty of data. typically any given pixel
uses 9-25 sensels. it could be more, but the benefit is not usually
worth it.

>>> Since it is an estimate of what would have been in that location had
>>> the information not been filtered out, it remains an estimate no
>>> matter how low the error may be.

>> which means it's not faked.

>>> The error cannot be measured since the data was thrown away.

>> you have the source image and the output of the demosaic, so the
>> error can be calculated.

> Absolutely not. You do not have the truth about the R & B channels at
> the pixel where you filtered to get green only. The information (the
> truth) was left behind when the photo was taken.

you don't need to sample the truth. it can be calculated and then
compared to the original.

different bayer algorithms have different error rates. this can and has
been measured. the simple ones that just do a linear calculation have a
higher error than the more sophisticated ones, which have lower errors.
different algorithms have their strengths and weaknesses, and there are
always edge cases.
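The "calculate and then compare" idea can be demonstrated end to end on synthetic data: start from a fully known green channel, throw away the sites a Bayer CFA would discard, reconstruct them, and measure the residual. A toy sketch (pure Python, naive bilinear fill, RGGB green sites assumed), not how any real raw converter is implemented:

```python
def mosaic_green(green, w, h):
    """Simulate the CFA: keep green only where an RGGB tile has a green site."""
    return [green[y * w + x] if (x + y) % 2 == 1 else None
            for y in range(h) for x in range(w)]

def demosaic_green(sampled, w, h):
    """Refill missing sites from the average of up to 4 green neighbours."""
    out = []
    for y in range(h):
        for x in range(w):
            v = sampled[y * w + x]
            if v is None:
                nb = [sampled[ny * w + nx]
                      for nx, ny in [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]
                      if 0 <= nx < w and 0 <= ny < h]
                v = sum(nb) / len(nb)
            out.append(v)
    return out

def rmse(a, b):
    """Root-mean-square reconstruction error against the known truth."""
    return (sum((p - q) ** 2 for p, q in zip(a, b)) / len(a)) ** 0.5
```

On a smooth gradient the interior estimates are exact and only the borders contribute error; on a picket-fence pattern the error balloons, which is both sides of this argument in miniature.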
#25
Questions about isolating green channel in RAW data
On 2013.06.05 18:50, nospam wrote:
> In article , Alan Browne wrote:

>>>> Precise calculations are meaningless in the absence of data. The
>>>> estimate could be accurate to 1000 decimal places and that would
>>>> not make it any more 'true'.

>>> there's plenty of data. millions of sampling points.

>> Proof that you do not get it. Filling the blue channel at a given
>> point only requires information from the several pixels around it.
>> You don't need (certainly don't WANT) millions of sampling points.

> i'm not saying millions of samples for each pixel. i'm saying there
> are millions of pixels, so there's plenty of data. typically any given
> pixel uses 9-25 sensels. it could be more, but the benefit is not
> usually worth it.

If that many - it could be quite a bit fewer. IAC the millions above
were not relevant at all.

>>>> Since it is an estimate of what would have been in that location
>>>> had the information not been filtered out, it remains an estimate
>>>> no matter how low the error may be.

>>> which means it's not faked.

>>>> The error cannot be measured since the data was thrown away.

>>> you have the source image and the output of the demosaic, so the
>>> error can be calculated.

>> Absolutely not. You do not have the truth about the R & B channels at
>> the pixel where you filtered to get green only. The information (the
>> truth) was left behind when the photo was taken.

> you don't need to sample the truth. it can be calculated and then
> compared to the original.

What original? The original was filtered away in the camera. It is
gone. Does not exist anymore. Never got to the sensor. Converted to
heat and left to the general entropy of the universe.

> different bayer algorithms have different error rates. this can and
> has been measured. the simple ones that just do a linear calculation
> have a higher error than the more sophisticated ones, which have lower
> errors. different algorithms have their strengths and weaknesses, and
> there are always edge cases.

Regardless, they are not 'truth' and never will be.

--
"A Canadian is someone who knows how to have sex in a canoe."
-Pierre Berton
#26
Questions about isolating green channel in RAW data
In article , Alan Browne wrote:

>>>>> The error cannot be measured since the data was thrown away.

>>>> you have the source image and the output of the demosaic, so the
>>>> error can be calculated.

>>> Absolutely not. You do not have the truth about the R & B channels
>>> at the pixel where you filtered to get green only. The information
>>> (the truth) was left behind when the photo was taken.

>> you don't need to sample the truth. it can be calculated and then
>> compared to the original.

> What original?

the subject you're photographing.

> The original was filtered away in the camera. It is gone. Does not
> exist anymore. Never got to the sensor. Converted to heat and left to
> the general entropy of the universe.

however, the subject is still there. those who develop bayer algorithms
measure both the subject and the result and try to get it as accurate
as possible. they are doing an amazing job of it too.

>> different bayer algorithms have different error rates. this can and
>> has been measured. the simple ones that just do a linear calculation
>> have a higher error than the more sophisticated ones, which have
>> lower errors. different algorithms have their strengths and
>> weaknesses, and there are always edge cases.

> Regardless, they are not 'truth' and never will be.

it's *very* close to the truth, indistinguishable in nearly all cases.
#27
Questions about isolating green channel in RAW data
On 2013.06.05 22:27, nospam wrote:
> In article , Alan Browne wrote:

>>>>>> The error cannot be measured since the data was thrown away.

>>>>> you have the source image and the output of the demosaic, so the
>>>>> error can be calculated.

>>>> Absolutely not. You do not have the truth about the R & B channels
>>>> at the pixel where you filtered to get green only. The information
>>>> (the truth) was left behind when the photo was taken.

>>> you don't need to sample the truth. it can be calculated and then
>>> compared to the original.

>> What original?

> the subject you're photographing.

We're talking about measurement and estimate variance, so how do you do
that in a quantifiable way?

>> The original was filtered away in the camera. It is gone. Does not
>> exist anymore. Never got to the sensor. Converted to heat and left to
>> the general entropy of the universe.

> however, the subject is still there. those who develop bayer
> algorithms measure both the subject and the result and try to get it
> as accurate as possible. they are doing an amazing job of it too.

>>> different bayer algorithms have different error rates. this can and
>>> has been measured. the simple ones that just do a linear calculation
>>> have a higher error than the more sophisticated ones, which have
>>> lower errors. different algorithms have their strengths and
>>> weaknesses, and there are always edge cases.

>> Regardless, they are not 'truth' and never will be.

> it's *very* close to the truth, indistinguishable in nearly all cases.

Close to the truth is not the truth. As I pointed out in another post,
the only way to get that true reading is to use a camera capable of it,
such as the Hasselblad H3DII-39MS.

--
"A Canadian is someone who knows how to have sex in a canoe."
-Pierre Berton
#28
Questions about isolating green channel in RAW data
In article , Alan Browne wrote:

>>> Regardless, they are not 'truth' and never will be.

>> it's *very* close to the truth, indistinguishable in nearly all
>> cases.

> Close to the truth is not the truth.

it's closer than film, and that didn't do any chroma interpolation.

> As I pointed out in another post, the only way to get that true
> reading is to use a camera capable of it, such as the Hasselblad
> H3DII-39MS.

even that isn't the truth. nothing is perfect.
#29
Questions about isolating green channel in RAW data
On 06/06/2013 23:26, nospam wrote:
> In article , Alan Browne wrote:

>>>> Regardless, they are not 'truth' and never will be.

>>> it's *very* close to the truth, indistinguishable in nearly all
>>> cases.

>> Close to the truth is not the truth.

> it's closer than film, and that didn't do any chroma interpolation.

How do you arrive at that bizarre claim? Fine-grain colour film like
Kodachrome 25 could easily take on a modern CCD sensor and would,
unlike the Bayer-masked image, sample all colours at all sites.

>> As I pointed out in another post, the only way to get that true
>> reading is to use a camera capable of it, such as the Hasselblad
>> H3DII-39MS.

> even that isn't the truth. nothing is perfect.

But the point here is that there is a whole known class of images that
Bayer cannot sensibly measure. They are rare in natural scenes but they
are not negligible. You seem to think that demosaicing can do magic! It
is always limited to working from the raw data that it has available
and the sampling effects that go with it.

The eye generally cannot tell the difference because the human eye puts
a far greater weight on luminance resolution than on colour, which is
why chroma subsampling works so well. The limitations of the human eye
are the crucial factor. Bayer demosaic gets away with an approximation
that works in practice except for pathological targets and a handful of
awkward natural images. Notably things like red flowers with black
veins, and tree branches silhouetted against clear blue sky. These
would show a distinct difference at a pixel level if fully chroma
sampled.

--
Regards,
Martin Brown
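Martin's point about luminance weighting is the same one video and JPEG codecs exploit. A sketch of 4:2:0-style chroma subsampling, which keeps full-resolution luma but averages each 2x2 chroma block into one value (an illustration of the idea, not any particular codec's exact scheme):

```python
def subsample_420(chroma, w, h):
    """Average each 2x2 block of a chroma plane into one sample:
    a 4x data reduction the eye mostly cannot see, because it resolves
    colour far more coarsely than brightness."""
    assert w % 2 == 0 and h % 2 == 0, 'even dimensions assumed'
    out = []
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            block = [chroma[(y + dy) * w + (x + dx)]
                     for dy in (0, 1) for dx in (0, 1)]
            out.append(sum(block) / 4)
    return out
```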
#30
Questions about isolating green channel in RAW data
In article , Martin Brown wrote:

>>>>> Regardless, they are not 'truth' and never will be.

>>>> it's *very* close to the truth, indistinguishable in nearly all
>>>> cases.

>>> Close to the truth is not the truth.

>> it's closer than film, and that didn't do any chroma interpolation.

> How do you arrive at that bizarre claim? Fine-grain colour film like
> Kodachrome 25 could easily take on a modern CCD sensor and would,
> unlike the Bayer-masked image, sample all colours at all sites.

digital has more accurate colour (lower delta-e) as well as higher
resolution than film. that makes it closer to the truth than film could
ever be. not that people want the truth. take velvia for example. or
hdr.

>>> As I pointed out in another post, the only way to get that true
>>> reading is to use a camera capable of it, such as the Hasselblad
>>> H3DII-39MS.

>> even that isn't the truth. nothing is perfect.

> But the point here is that there is a whole known class of images that
> Bayer cannot sensibly measure. They are rare in natural scenes but
> they are not negligible. You seem to think that demosaicing can do
> magic!

those are edge cases. if you like to shoot colour resolution charts, as
the foveon fanbois do, then bayer is a bad choice. however, most people
shoot real world scenes, so it's not an issue.

> It is always limited to working from the raw data that it has
> available and the sampling effects that go with it. The eye generally
> cannot tell the difference because the human eye puts a far greater
> weight on luminance resolution than on colour, which is why chroma
> subsampling works so well. The limitations of the human eye are the
> crucial factor. Bayer demosaic gets away with an approximation that
> works in practice except for pathological targets and a handful of
> awkward natural images.

in other words, it doesn't matter except in the lab and on very rare
occasion in the real world. film isn't perfect either.

> Notably things like red flowers with black veins, and tree branches
> silhouetted against clear blue sky. These would show a distinct
> difference at a pixel level if fully chroma sampled.

no it wouldn't, and bayer captures more chroma than the human eye can
resolve anyway.
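The "delta-e" nospam cites is a standard colour-difference metric. The simplest variant, CIE76, is just Euclidean distance in L*a*b* space; later formulas such as CIEDE2000 weight the terms, but this shows the idea:

```python
def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two L*a*b*
    colours. A value around 2.3 is commonly quoted as a just-noticeable
    difference."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```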