#61
Pentax K10D beats (sharpness, detail) Canon 40D?
David J Taylor wrote:
Wolfgang Weisselberg wrote: David J Taylor wrote: Floyd L. Davidson wrote: But a 43.1 kHz tone is extremely easy to filter out.

Of course, I was trying to make the numbers easier, and to demonstrate how something could alias right down into the wanted band. But a similar image frequency is far harder to filter out. As I already mentioned, in audio oversampling is typically used to ameliorate these issues, but as far as I know, oversampling has not yet been applied to DSLRs.

Oversampling in audio, unless I misunderstand, means sampling at a much higher frequency than the target 44.1 kHz. In DSLRs that would mean using 4 (2x oversampling) or 16 (4x oversampling) pixels where there is now one. For a 6 MP output resolution you'd need 24 million pixels at just 2x oversampling in width and height. For physical reasons so many pixels need large (and expensive) sensors, or must cope with small full-well capacities and photon noise. On the other hand, the new sRAWs may be downsampled from ordinary RAWs, and thus count as oversampling plus downsampling. Of course you can get the same effect by resizing your photo intelligently from 8 MPix to, say, 2 MPix (2x) or 0.9 MPix (3x). -Wolfgang

Wolfgang, yes: the oversampling allows the first (analog) filter to be simpler, and the subsequent filtering to be done digitally. Final samples are delivered at 44.1 kHz (or higher in studio work). If oversampling were used for digital cameras, I don't think the photon noise and dynamic range would be significantly worse, as the same area of silicon is used per output pixel. In your 2 MPix or 0.9 MPix analogy, it would mean that you could use an anti-aliasing filter with an equivalently strong cut-off (i.e. less sharp 8 MPix images). One problem remains with digital photography, though, as someone mentioned recently: the very simple anti-aliasing filter used does not have a sharp cut-off just before the Nyquist frequency but a more gentle slope, so there will be some compromise between what some call "resolution" and the damaging effects of aliasing. Different cameras will behave differently, and different users may prefer different results.

That was the point I was trying to get at in my original question. Many DSLR users already accept that different images respond differently to different levels of sharpening, and that for best effect sharpening should be applied at whatever resolution the final print or screen image is to be viewed at, which may not be the same as the original camera image resolution. Hence most cameras now come with menu-settable changes to the degree of sharpening applied to their JPEGs, and to the image resolution it is applied at. Since the extra power and sophistication of desktop processors compared with those in the camera mean that image editors can employ more sophisticated sharpening than the cameras can, many DSLR users now prefer to do the minimum in camera and most of it later in their preferred image editor. What I don't understand is why the same kind of flexibility is not applied to AA filtering. I can't see any technological barriers to it.

Given the current state of camera technology it would only benefit those using expensive, high-quality kit, but the fact that some camera makers (such as Leica, Sony, & Fuji, IIRC) are already, in their top models, edging towards less optical AA filtering in order to exploit more of their technology's native resolution suggests to me that camera technology has now reached the point where this has become relevant. -- Chris Malcolm DoD #205 IPAB, Informatics, JCMB, King's Buildings, Edinburgh, EH9 3JZ, UK [http://www.dai.ed.ac.uk/homes/cam/]
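As a concrete (if toy) illustration of the oversample-then-downsample idea above, here is a minimal Python/NumPy sketch, with a plain 2x2 block average standing in for the digital anti-alias filter. The grating frequency and the block-average filter are illustrative assumptions, not how any camera's sRAW is actually produced:

    import numpy as np

    def downsample_2x(img):
        """Average 2x2 blocks: a crude digital anti-alias + decimate step."""
        h, w = img.shape
        return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    # A fine diagonal grating, beyond the Nyquist limit of the output grid.
    y, x = np.mgrid[0:512, 0:512]
    scene = np.sin(0.9 * np.pi * (x + y))

    naive = scene[::2, ::2]          # decimate with no filtering: aliases badly
    filtered = downsample_2x(scene)  # average first: alias energy mostly gone

    print(naive.std(), filtered.std())  # ~0.71 vs ~0.02: far less false detail

The naive decimation turns the unresolvable fine grating into a strong low-frequency pattern, while averaging first suppresses it, which is the trade Wolfgang and David are discussing.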
#62
Pentax K10D beats (sharpness, detail) Canon 40D?
Chris Malcolm wrote:
David J Taylor wrote: Floyd L. Davidson wrote: But a 43.1 kHz tone is extremely easy to filter out.

Of course, I was trying to make the numbers easier, and to demonstrate how something could alias right down into the wanted band.

Filters are complex only when the cutoff has a steep slope.

... which is required for audio, where an essentially flat response with a sharp cut-off before the Nyquist frequency is needed. Making the pass-band acceptably flat also increases the complexity of the filter, as does control of group delay.

As I already mentioned, in audio oversampling is typically used to ameliorate these issues, but as far as I know, oversampling has not yet been applied to DSLRs.

Because in the case of digital cameras you'd need a lens capable of resolution at the oversampling frequencies, and with today's technology that would cost far too much to be worth it as a component of an anti-aliasing system. That's why in the realm of digital cameras we're still in the engineering morass of systems struggling to reach the levels of quality at which the practical problems will be as well behaved as the theoretical mathematical analyses.

The resolution of the lens would remain unchanged. You are simply trying to capture a more accurate "8 Mpix" (or whatever) representation of the image in the focal plane of the lens. I agree that the cost would be too high at the moment, and with your comments on the subsequent results! Cheers, David
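The "steep slope means complex filter" rule is easy to quantify for digital FIR filters. A small SciPy sketch (the ripple figure and transition widths are arbitrary illustrative choices, and say nothing about the optical filters in cameras):

    from scipy.signal import kaiserord

    # FIR taps needed for 60 dB stop-band rejection, versus the width of
    # the transition band (as a fraction of the Nyquist bandwidth).
    for width in (0.2, 0.05, 0.01):
        numtaps, beta = kaiserord(ripple=60.0, width=width)
        print(f"transition width {width:4.2f} -> about {numtaps} taps")

Halving the transition width roughly doubles the number of taps; that inverse relationship is the digital analogue of the analog designer's trouble with sharp cut-offs, flat pass-bands and group delay.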
#63
Pentax K10D beats (sharpness, detail) Canon 40D?
John Bean wrote:
[] Genuine question (I really don't know): does the microphone need a response in excess of the oversampling frequency when making a digital audio recording? It seems to me that the lens and the microphone play analogous roles in this discussion.

No, it does not. David
#64
Pentax K10D beats (sharpness, detail) Canon 40D?
Chris Malcolm wrote:
[] I know. I used to be one of them. More than twenty-five years ago I was writing code to get rid of aliasing artefacts in monochrome 256x256 digital camera images :-) That was back in the old days when the only way to acquire a digital camera was to make it yourself, by unsoldering the metal top of a military-spec dynamic RAM chip and then focusing an image on it with a lens.

Fascinating - did you have a lot of success with your code? What kind of test images did you use (edges or bar wedges), and did you end up designing in the frequency or the time/space domain? Cheers, David
#65
Pentax K10D beats (sharpness, detail) Canon 40D?
On Oct 28, 1:14 pm, Chris Malcolm wrote:
John Bean wrote: On 27 Oct 2007 09:27:51 GMT, Chris Malcolm wrote: I can't understand why you say that no amount of post-processing will remove it without degrading the image. I don't understand why you can't do the same post-processing computationally as you can optically. I see no mathematical or computational barrier to doing exactly the same thing computationally as optically. I thought it was just a question of convenience and marketing.

Because there's no way to separate what is real from what is fake; that's the whole problem.

That's not what I was talking about. I was simply talking about the theoretical equivalence of an optical AA filter and a computational one designed to do exactly the same job.

How can it do the same job? Once you have sampled the signal, it has aliasing artifacts, and it's not really possible to tell which are aliasing artifacts and which are detail (though I imagine there are heuristic algorithms; even the best I've seen seem to remove only the colour artifacts). The AA filter isn't there to "mask out" false detail; it's there to prevent it from happening, by eliminating the out-of-band spatial frequencies that would interfere with the pitch of the sensor grid to produce the aliasing in the first place.

That *is* what I was talking about, and my point was that that problem only occurs in certain images. So if you apply the necessary filtering to the images that need it, you can get higher detail resolution in those that don't. In fact few camera makers apply enough AA filtering to get rid of *all* the problems *all* the time. They apply enough to get rid of most of it. "Most" is a question of taste. As someone else pointed out, the Leica M8 designers preferred a weaker AA filter and more detail. Which of course led certain ignorant reviewers to pontificate about there being a fault in the camera.
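The "no way to tell what's real" point can be made concrete with a one-dimensional sketch in Python/NumPy, echoing the 44.1 kHz numbers used elsewhere in this thread (the tone frequencies are chosen purely for illustration):

    import numpy as np

    fs = 44100.0                                        # sampling rate, Hz
    n = np.arange(256)                                  # sample indices
    real_tone = np.sin(2 * np.pi * 1000.0 * n / fs)     # genuine 1 kHz tone
    alias_tone = np.sin(2 * np.pi * 43100.0 * n / fs)   # unfiltered 43.1 kHz

    # The sampled 43.1 kHz tone is numerically a (phase-inverted) 1 kHz
    # tone: nothing in the samples reveals which analog signal produced them.
    print(np.allclose(alias_tone, -real_tone))          # True

Once the samples exist, any processing that removes the alias necessarily removes genuine 1 kHz content too, which is exactly the objection raised above.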
#66
Pentax K10D beats (sharpness, detail) Canon 40D?
On Oct 28, 1:26 pm, "David J Taylor" -this-bit.nor-this-bit.co.uk wrote:

Chris Malcolm wrote: [] That's not what I was talking about. I was simply talking about the theoretical equivalence of an optical AA filter and a computational one designed to do exactly the same job.

Chris, one problem with filtering in the optical domain may be that in the computational domain negative values of light intensity are allowed, whereas they don't exist for broadband incoherent light detected by a silicon sensor.

Are they possible for monochromatic and/or coherent light?

So in the optical domain you are constrained to a positive impulse response for the filter (I think - corrections welcome!).

One thing I never understood is that people keep talking about ideal filters as step functions. But such a filter would cause ringing at the cutoff frequency; in fact, there is always a trade-off between how much power you allow to pass at frequencies higher than your "cutoff" and how much signal distortion you get back in real space. I've seen several papers discussing this (mainly by introducing different metrics and quantifying several different filter shapes). So here's my question: you seem to know about such filters in audio. What is done there? I know that oversampling is common, but do people normally strive for sharp cutoffs? Is this trade-off important there?

Of course, this doesn't alter the fact that you are trying to prevent the higher spatial frequencies from reaching the sensor before the spatial sampling. Whether these frequencies are present does indeed depend on the image and other factors, and how much degradation of the image by artefacts, or by lack of the higher spatial frequencies, an individual will accept is a subjective measure.
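The ringing mentioned above is the Gibbs phenomenon, and it is easy to reproduce numerically: truncate an ideal brick-wall (sinc) low-pass and run a hard edge through it. A minimal Python/NumPy sketch (the filter length, cutoff, and step test signal are all arbitrary choices for illustration):

    import numpy as np

    cutoff = 0.25                        # brick-wall target, cycles/sample
    n = np.arange(-64, 65)               # 129-tap truncated ideal filter
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal low-pass impulse response

    step = np.zeros(512)
    step[256:] = 1.0                     # a hard edge, like a high-contrast border
    out = np.convolve(step, h, mode="same")

    print(round(out.max(), 3))           # ~1.09: the ~9% Gibbs overshoot

However long you make the truncated filter, the roughly 9% overshoot next to the edge remains, which is the trade-off between a sharp frequency cutoff and distortion in real space described above.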
#67
Pentax K10D beats (sharpness, detail) Canon 40D?
On Sun, 28 Oct 2007 11:33:14 GMT, "David J Taylor" wrote:

John Bean wrote: [] Genuine question (I really don't know): does the microphone need a response in excess of the oversampling frequency when making a digital audio recording? It seems to me that the lens and the microphone play analogous roles in this discussion.

No, it does not.

I'd have been surprised if it did, but since practical digital audio (rather than theory) is not within my area of expertise I thought I'd ask. Far too many people around here like to sound like experts whether or not they actually are. -- John Bean
#68
Pentax K10D beats (sharpness, detail) Canon 40D?
On Oct 28, 1:46 pm, Chris Malcolm wrote:
David J Taylor wrote: Chris Malcolm wrote: David J Taylor wrote: No, it's false sharpness. Once the higher spatial frequencies have been aliased down to lower, more visible frequencies, no amount of post-processing will remove the damage done without causing serious degradation of the image.

This is what I can't understand. Aliasing is a particular problem which occurs with a particular kind of image detail. You can't get rid of it, optically or by image processing, without losing some of the image detail and resolution that would be present in the image without the filtering. It's a trade-off. To get rid of all possible aliasing you have to lose quite a bit of detail. That's why most camera manufacturers don't get rid of all of it, they just get rid of most of it, and where "most" lies in the trade-off between aliasing artefacts and detail loss is a question of taste.

Yes, in optics it's a compromise, as optical anti-aliasing filters are much simpler than those used in audio. How much of a problem it is also depends on the subject (how much high-frequency detail there is) and on the lens (what the lens MTF is around and above the Nyquist frequency). A long exposure through turbulent atmosphere can also affect the MTF between subject and lens.

I can't understand why you say that no amount of post-processing will remove it without degrading the image. I don't understand why you can't do the same post-processing computationally as you can optically. I see no mathematical or computational barrier to doing exactly the same thing computationally as optically. I thought it was just a question of convenience and marketing.

Perhaps it's easier to think in audio terms. With inadequate filtering before digitisation, a CD recording system sampling at 44.1 kHz would render a signal which had an actual frequency of 43.1 kHz as a tone of 1 kHz. Of course, an audio system will have a very good low-pass filter before the digitisation, ensuring that no 43.1 kHz information reaches the ADC. But once you have the 1 kHz present, the only way to remove it is to remove /any/ 1 kHz information in the signal. Is that any clearer?

I understand anti-aliasing perfectly well in audio. What makes the audio problem and its typical solutions different is that it's easy, with quite cheap technology, to make systems with accurate responses well above the frequency range of human hearing. So the aliasing problems and the technology to remove them behave very much more like the simplified theoretical mathematical models. That makes the practical engineering and design problems much simpler.

Well, is there some aspect of the discussion here that you think doesn't really correspond to reality (since you mention simplified theoretical models, which is of course true)? I am curious rather than confrontational.

BTW: the design of the low-pass filter is complex and can cause overshoots and ringing, as seen in some photos. Multi-level sampling at different rates can be used to simplify the filter design. It's quite an interesting area - there's probably quite a lot of expertise in your Engineering Department, but I don't have any names.

I know. I used to be one of them. More than twenty-five years ago I was writing code to get rid of aliasing artefacts in monochrome 256x256 digital camera images :-) That was back in the old days when the only way to acquire a digital camera was to make it yourself, by unsoldering the metal top of a military-spec dynamic RAM chip and then focusing an image on it with a lens.

That's interesting. Do you know of current algorithms that do something other than remove colour artifacts? I haven't seen any (but then I never looked carefully into the matter).
#69
Pentax K10D beats (sharpness, detail) Canon 40D?
acl wrote:
On Oct 28, 1:26 pm, "David J Taylor" [] One problem with filtering in the optical domain may be that in the computational domain negative values of light intensity are allowed, whereas they don't exist for broadband incoherent light detected by a silicon sensor. Are they possible for monochromatic and/or coherent light?

I don't know - that's outside my area of expertise, and I no longer have contacts I could ask.

So in the optical domain you are constrained to a positive impulse response for the filter (I think - corrections welcome!). One thing I never understood is that people keep talking about ideal filters as step functions. But such a filter would cause ringing at the cutoff frequency; in fact, there is always a trade-off between how much power you allow to pass at frequencies higher than your "cutoff" and how much signal distortion you get back in real space. I've seen several papers discussing this (mainly by introducing different metrics and quantifying several different filter shapes). So here's my question: you seem to know about such filters in audio. What is done there? I know that oversampling is common, but do people normally strive for sharp cutoffs? Is this trade-off important there?

Yes, it's a trade-off. With 44.1 kHz sampling you might have an analog filter with roll-off from, say, 19 kHz to minimise the overshoot. But with oversampling at, say, 192 kHz, the analog filter can start its roll-off at 22 kHz and be down to zero by 96 kHz - easy! The rest of the filtering is done in the digital domain, where design and control of the filter response are so much easier. I don't work in that field, but I believe that much professional audio is now recorded at 96 kHz or 192 kHz, and only finally down-sampled to 44.1 kHz for CD production. Cheers, David
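The "record high, filter digitally, down-sample for CD" chain David describes can be sketched in a few lines of Python with SciPy (the 1 kHz test tone and the rates are illustrative, and resample_poly merely stands in for a studio's digital filter chain):

    import numpy as np
    from scipy.signal import resample_poly

    fs_in, fs_out = 192000, 44100             # studio rate -> CD rate
    t = np.arange(fs_in) / fs_in              # one second of signal
    x = np.sin(2 * np.pi * 1000.0 * t)        # 1 kHz test tone

    # 44100/192000 = 147/640. resample_poly applies a digital anti-alias
    # low-pass before decimating, so the only analog filter needed in
    # hardware is the gentle one ahead of the 192 kHz ADC.
    y = resample_poly(x, up=147, down=640)
    print(len(y))                             # 44100 samples out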
#70
Pentax K10D beats (sharpness, detail) Canon 40D?
On Oct 28, 3:05 pm, "David J Taylor" -this-bit.nor-this-bit.co.uk wrote:

acl wrote: On Oct 28, 1:26 pm, "David J Taylor" [] One problem with filtering in the optical domain may be that in the computational domain negative values of light intensity are allowed, whereas they don't exist for broadband incoherent light detected by a silicon sensor. Are they possible for monochromatic and/or coherent light?

I don't know - that's outside my area of expertise, and I no longer have contacts I could ask.

I was using the Socratic method! Negative intensity is impossible for both. But negative amplitude is possible (e.g. people use it when they interfere laser beams), which I imagine is what you actually had in mind when you made the qualification... I can't see how this could be used to make anything useful as a filter, though, so what Kennedy said is probably going to stay true for some time.

[] So here's my question: you seem to know about such filters in audio. What is done there? I know that oversampling is common, but do people normally strive for sharp cutoffs? Is this trade-off important there?

Yes, it's a trade-off. With 44.1 kHz sampling you might have an analog filter with roll-off from, say, 19 kHz to minimise the overshoot. But with oversampling at, say, 192 kHz, the analog filter can start its roll-off at 22 kHz and be down to zero by 96 kHz - easy! The rest of the filtering is done in the digital domain, where design and control of the filter response are so much easier. I don't work in that field, but I believe that much professional audio is now recorded at 96 kHz or 192 kHz, and only finally down-sampled to 44.1 kHz for CD production.

Excellent, thanks for the clear reply.