Thread Tools | Display Modes |
#281
Finally got to the point where no new camera holds my interest (waiting for specific offering)
On Sat, 12 Jan 2019 09:37:37 -0500, Alan Browne
wrote: On 2019-01-11 20:48, Eric Stevens wrote: On Fri, 11 Jan 2019 12:47:20 -0500, Alan Browne wrote: On 2019-01-10 23:38, Eric Stevens wrote: On Thu, 10 Jan 2019 07:43:24 -0500, Alan Browne wrote: On 2019-01-10 04:12, Eric Stevens wrote: On Wed, 9 Jan 2019 08:36:24 -0500, Alan Browne wrote: Its amazing what Google can produce. This is DxO's own account of the situation at: https://www.dxomark.com/dxomark-came...ol-and-scores/ "Dynamic range corresponds to the ratio between the highest brightness a camera can capture (saturation) and the lowest brightness it can capture (which is typically when noise becomes more important than the signal — that is, a signal-to-noise ratio below 0 dB). A value of 12 EV is excellent, with differences below 0.5 EV usually not noticeable. Dynamic range is an open scale." This appears to confirm that the situation is as I deduced: they are not testing the dynamic range as recorded in a raw file. They are testing the range that a camera can capture. i.e. it is the dynmaic range of the sensor. It is not the dynamic range of the raw file. It doesn't actually say that, however. How, specifically, are they bypassing the raw file to get the data? By measuring not the data in the file but the range of brightness that the camera can capture from their test set up. And where _specifically_ are they getting that data? What is the probe point? What is the probe? Read the URL. The use multiple light sources, each of different calibrated illuminance. It's rather like photographing an gray-scale wedge. Oh. Thanks. Now it's CLEAR TO ME YOU HAVE NO CLUE. My question related to how they measure the brightness at the camera. And you're not replying with an answer to that. At all. The object is the measurement of the ability of the camera to detect both low light and bright light. To do this they get the camera to photograph a target containg multiple light sources covering a range of luminances. 
Some are too bright and the other too dark for the camera to properly capture. It is the difference between these which determines the dynamic range of the sensor. note that they evaluate the DR in terms of a RGB composite analysis, the details of which I am not aware. With this technique there is no need to measure the brightness at the camera. IOW: You have no clue. I wish you wouldn't insist to that effect. I know and understand what is going on. My problem is trying to explain it without the use of diagrams. Here's a clue. Where they are getting the "brightness" information at the camera is from the camera image file. Period. Why? Because there is no means for them to sample the sensor between the sensor and the ADC. Why? Because that would require a high end clean room and data that is proprietary to the camera sensor maker. The ONLY data they can get is from the file and the data in there is from a 14 bit ADC. There is no more information in there than what 14 bits can hold. There may indeed be a mysterious compression as you claim, but that has its own image quality consequences. This has been pointed out to you several times but you seem determined to want to believe the unbelievable ... AS you say, they (probably) have to take it from an image file. Although they could tether the camera. They use as a target a light source of multiple zones ranging from exceedingly dim to bright. The image produced by the camera shows both the lowest light level the sensor can detect without sinking into the swamps of noise and the highest before all buckets overflow. At this point it doesn't matter what the brightness or dimness is that the image file attributes to those light sources. What does matter is that DxO now know which pair of light sources define the upper and lower bounds of the range. DxO also know the brightness of the various light sources. 
Knowing this and the brightness of both the dimmest and brightest light sources, they now know the upper and lower bounds of the sensors dynamic range. They report the difference between them in EVs. In the case of the Nikon D750 they evaluate the DR as 14.5 EVs which is 1:23,178. (From here on I'm going to simplify so as to avoid logarithms). A problem arises when this is to be encoded by a 14 bit ADC which can only hand integers up to 16,384. The answer is to scale the conversion. Each integer in the output stream represents not 1 analog integer from the sensor but 1.41. This way the wider DR of the sensor is conveyed by the 14 bit stream with the full resolution of which the 14 bit stream is capable. No doubt the Nikon RAW decoder will be aware of the scaling. -- Regards, Eric Stevens |
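The scaling arithmetic claimed above is easy to check in a few lines. This is only a sketch of the scheme as described in the post, not of any real camera pipeline; the 14.5 EV figure is DxO's published number for the D750, and note that 2^14.5 is closer to 23,170 than the 23,178 sometimes quoted:

```python
# Claimed scheme: a sensor range of 14.5 stops squeezed into 14-bit codes.
dr_ev = 14.5
sensor_ratio = 2 ** dr_ev        # ~23,170:1 contrast ratio at the sensor
adc_levels = 2 ** 14             # a 14-bit ADC distinguishes 16,384 codes
scale = sensor_ratio / adc_levels  # analog levels represented per output code

print(round(sensor_ratio))   # 23170
print(round(scale, 2))       # 1.41
```

So one ADC code would stand for about 1.41 sensor levels, which is exactly the half-stop gap between 14.5 EV and 14 bits.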
#282
Finally got to the point where no new camera holds my interest (waiting for specific offering)
On 1/12/2019 10:39 PM, Eric Stevens wrote:
On Sat, 12 Jan 2019 09:37:37 -0500, Alan Browne wrote: [...]

I wish you wouldn't insist to that effect. I know and understand what is going on. My problem is trying to explain it without the use of diagrams.

...snip...

Nothing stopping you from posting links with your diagrams.

-- == Later... Ron C --
#283
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote:

The fstop is [Image distance]/lens aperture.

also wrong. f/stop = focal length/aperture.

Aha! Your correction of me is an approximation.

it's not in any way an approximation. See https://en.wikipedia.org/wiki/F-number from that link, The f-number of an optical system (such as a camera lens) is the ratio of the system's focal length to the diameter of the entrance pupil. exactly what i said it is.

"... as one focuses closer, the lens' effective aperture becomes smaller, making the exposure darker. The working f-number is often described in photography as the f-number corrected for lens extensions by a bellows factor. This is of particular importance in macro photography".

selective snipping. that's *not* cool.

Do you think I should not have snipped the several thousand irrelevant words, tables and images between the two parts I quoted? At least I marked that I snipped.

your snipping altered its meaning. that quote is from the working f-number section, https://en.wikipedia.org/wiki/F-number#Working_f-number

For what it is worth, I have never heard of the term 'working f-number'. I have always known of it as 'effective f-number'.

which doesn't change anything. the takeaway is that different terms can mean the same thing. for example, stop and ev... which begins: The f-number accurately describes the light-gathering ability of a lens only for objects an infinite distance away. This limitation is typically ignored in photography, where f-number is often used regardless of the distance to the object. note this part: 'this limitation is typically ignored in photography'.

But not if you are trying to pin me down on an exact definition.

except i'm not.

As the Wiki says, the f value is normally calculated on the basis of the focal length, i.e. with the lens focused at infinity. That's fine for characterising a lens and not bad either for exposure calculation providing the subject is not too close.

where 'not too close' is 'everything other than extreme close-up, where extension tubes and/or bellows are needed'.

But if you are being exact, for exposure calculation purposes, the f value has to be based on the actual image distance (not just the image distance when focused at infinity).

except when it has no effect on the result. also keep in mind that digital has a lot more latitude than film, so there's a lot more room for error. did you somehow miss that part?

No I didn't, but we are currently discussing precise meanings, not conventional abbreviations of terms.

no, we're discussing real world usage. it's ignored for a very good reason: the difference is insignificant, except in certain situations, namely macro. see below. moving on, In optical design, an alternative is often needed for systems where the object is not far from the lens. In these cases the working f-number is used.

which is not relevant here.

It's precisely relevant as that is the definition I gave you. What's more, for photographic purposes it is always accurate.

except it doesn't need to be accurate, and there's a lot of slop in the rest of the system anyway. {formula snipped} In photography this means that as one focuses closer, the lens' effective aperture becomes smaller, making the exposure darker. The working f-number is often described in photography as the f-number corrected for lens extensions by a bellows factor. This is of particular importance in macro photography. not exactly.

Depends upon how exactly you manage your photography.

no. it has nothing to do with how anyone manages anything. the effective (aka working) f/stop does indeed change as one focuses closer, just not for the reason they claim. for someone arguing about precise definitions, you should be all over their error. as one focuses closer, the effective *focal* *length* becomes *longer*, however, it's not enough to matter in typical situations. for macro, where the working distance is very short, the effective focal length can become quite long, requiring lens extensions (tubes, bellows, etc.). http://www.nicovandijk.net/pb6E.jpg focusing closer can also be done via a close-up lens, which will have no effect on the f/stop. however, it's an additional optical element in the path and most of them aren't all that good. tl;dr you're *really* grasping at straws.

A quote from Hamlet's Mill is appropriate: "In other words we must take language seriously. Imprecise language discloses the lack of precision of thought."

another diversion.
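The bellows-factor correction being argued over has a standard closed form for a simple symmetric lens: working f-number N_w = N × (1 + m), where m is the magnification. A minimal sketch, assuming unit pupil magnification (asymmetric designs need an extra pupil-factor term):

```python
def working_f_number(f_number, magnification):
    """Bellows-corrected ("working"/"effective") f-number: N_w = N * (1 + m).

    Assumes a pupil magnification of 1 (symmetric lens design).
    """
    return f_number * (1 + magnification)

# At 1:1 macro (m = 1) the light loss is two full stops:
print(working_f_number(8, 1.0))             # 16.0 -> f/8 behaves like f/16
# At ordinary subject distances (m ~ 0.05) the difference is negligible:
print(round(working_f_number(8, 0.05), 1))  # 8.4
```

The two cases show why 'this limitation is typically ignored in photography' except for macro work, which is the point both sides circle around above.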
#284
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote:

If you want to continue arguing to the contrary I will be happy to leave you to it.

Reciprocity games. "Introduction to photography 101." You are still missing the point: lens aperture, shutter speeds or ISOs are not identical to stops.

I have not missed any points at all. To a PHOTOGRAPHER your PEDANTRY is MEANINGLESS and in fact MISLEADING. To some photographers ... it is annoying.

But it is amazing what I have learned out of this thread.

what's amazing is that it doesn't appear that you've learned a thing, despite repeated explanations from several people.

I apologise for not embracing the full scope of your range of knowledge but, defective as it is, my memory will not let me forget what I already know and understand.

which for this, is next to nothing.
#285
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote:

But the digital DR of the output of the ADC is not the same as the analog DR of the sensor. Nor is there any reason why it should be.

nobody said it was, however, it's always going to be limited by the adc.

The recorded output of the ADC is limited by the capabilities of the ADC. But these have no effect on the capabilities of the sensor.

again, nobody said the adc would alter the sensor's capability.

You have been strenuously arguing that it will limit it.

as has everyone else, because it does.

A meat grinder will alter a sheep. But before the grinder the sheep will remain a sheep and after the grinder the sheep will be whatever the grinder produces. If you are going to determine the height of a sheep, would you prefer to do it before or after the grinder?

it's entirely irrelevant, in every possible way. it's also not correct. the sheep ceases to be a sheep well before it reaches the meat grinder.

You will argue that this is irrelevant, but it's not.

it is. very much so.

The place to measure the DR of a sensor is at the sensor and not at the output of an ADC. DxO's method measures the DR at the sensor.

only for those designing cameras and sensors, and it doesn't. for *everyone* else (including you, since you don't do either of those), the place to measure dynamic range is from *the* *images*, which means after the adc + isp + anything else in the image path.

The sensor will always do whatever it can do and its DR can be scaled up or down to fit the output bandwidth of the ADC. It is because of the scaling that you can have the output of a sensor with a 14.8 stop DR scaled down to suit a 14 bit ADC. It's not a big deal.

except that it's *not* scaled.

Of course it is. In any sensible design the DR of the input to the ADC will be designed to accept the DR of the sensor's output.

actually, the adc will be designed to match that of the sensor's capabilities. it would be a waste of money to have one greatly outperform the other. imaginary cameras do not count.

I don't know where you keep finding these imaginary cameras.

from your endless supply.

If the sensor can discriminate between luminance levels from 'c' to 'q' it will always retain that ability irrespective of the capabilities of the ADC. How the ADC encodes it is another matter, and how that image is decoded by the RAW decoder is another matter again. There is enormous scope for fiddling and adjustments.

except that no fiddling or adjustments are being done.

You would know that if you worked for the right section of Nikon. Or perhaps you have reverse engineered a D800? You will also have to know what goes on in the Nikon RAW decoder. So I don't actually believe you know.

there is no need to work for nikon to know that the sensor directly feeds the adc. your own links even confirmed that in a block diagram.

Is there any argument about that?

sure is. read your own posts, where you have this mysterious interim processing going on. if this was not the case, the 'fiddling' would be well known since the camera would perform differently than previous cameras, ...

But they do, they do

no they don't. they perform as expected. there are no breakthroughs other than computationally, such as google night sight, which has nothing to do with the sensor technology or adc. ... and it would likely be marketed as a benefit (e.g., 'new hdr sensor'), and hotly argued because the camera is 'not pure' or some such.

It is merely your assumption that every individual design advance would be trumpeted by the marketing department.

of course they will. it's a competitive advantage over the other products. and if it really *is* the sensor they're measuring, then it should be the *same* for the *same* sensor, and it is not.

Not when you shove another piece of glass in front of one of the sensors.

no effect on dynamic range.

So you keep saying. That doesn't make it true.

it's true because it is true.

No glass (or in this case the material of the AA filter) offers 100% transmissibility to the electromagnetic spectrum. You continuing to argue otherwise is plain silly.

you're really grasping at straws now. nobody said 100%. it's close enough to 100% below nyquist that it makes absolutely no difference whatsoever. do you think a uv filter affects dynamic range?

Nor when you realise that not all sensors will be identical and all measurements are subject to errors.

especially when the methodology is itself an error.

How can it be an error when they make clear what they are testing and how?

when their numbers defy physics.

But they don't.

but they do.

In spite of multiple lengthy explanations, the understanding of how it is that sensor DRs can exceed 14 stops continues to elude you. Such a failure on your part would be less understandable if you had not given evidence that you have neither properly read nor understood the explanations put before you.

you have not read and certainly not understood the numerous explanations put before you. if they're supposedly measuring the sensor's dynamic range, explain why the nikon d50 & d70 differ by a half-stop, both of which used the same popular 6mp sony sensor (as did pentax). other results also differ.

I have no way of knowing but the first thing I would suspect is the circuitry between the sensor and the ADC.

then you'd be wrong. there is nothing between the sensor and adc, in those two or any other camera under discussion.

Do you know whether the ADC is pipelined, or perhaps Nikon use one ADC per column of pixels? In any case, do you know whether the voltage divider resistors all have the same value? That is the sort of thing which Nikon's competitors would like to know. For that matter, is the ADC the same in each camera under discussion? I'm afraid I don't share your confidence in your certainty.

none of that is in any way relevant.

It is to the illumination of your knowledge of the workings of Nikon digital cameras.

that's irrelevant. you're just spewing buzzwords hoping to fool people. the d50 & d70 are basically the same camera, with minor feature differences, such as the d70 having two control wheels versus one, compact flash versus sd card, slightly faster frame rate, wired remote option, flash commander mode and some other minor things i don't remember, none of which have *any* effect on the dynamic range.

If you examine first https://imaging.nikon.com/lineup/dslr/d50/index.htm and then https://imaging.nikon.com/lineup/dslr/d70/ you will see that the D70 boasts of: "New Nikon DX Format CCD image sensor for 3,008 x 2,000-pixel images New advanced digital image processor to optimize image quality, control auto white balance, auto tone and color control"

that's just marketing babble.

Actually you could be right. I see the D70 preceded the D50 by some 16 months and what was new for the D70 was old hat for the D50.

yep.

But, nevertheless, I would be surprised if 16 months of development did not result in the slight improvement of the sensor DR range shown by the D50.

except that your claim is that dxo measures the sensor, not the camera, so if both have the same sensor, the dynamic range would be the same. at least try to be consistent. also, the d50 and d70s came out at the same time, so any supposed improvement would be in *both* cameras, yet there's a difference. https://www.dxomark.com/Cameras/Nikon/D50 https://www.dxomark.com/Cameras/Nikon/D70s
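For what it's worth, the sensor-level quantity both sides are arguing about has a simple textbook definition: engineering dynamic range is the ratio of the saturation (full-well) signal to the read-noise floor, expressed in stops. A sketch with made-up electron counts, not measured figures for any camera in this thread:

```python
import math

def sensor_dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2(full-well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical figures for illustration only:
print(round(sensor_dr_stops(78000, 3.0), 1))   # 14.7 stops
```

On this definition a sensor with a deep enough well and a quiet enough readout can exceed 14 stops regardless of the ADC's bit depth, which is the crux of the disagreement above.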
#286
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote:

arguing about a non-existent camera, one which is likely to never exist at all, is pointless and actually, rather bizarre.

I thought we were arguing about what DxO have done when measuring the DR of the various Nikon cameras you have cited.

yes, so why bring up a non-existent camera?

What makes you think I have?

your posts keep describing ones that don't exist.
#287
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote:

The recorded output of the ADC is limited by the capabilities of the ADC. But these have no effect on the capabilities of the sensor.

If there is no way to encode the information, then that is the mootest of moot points.

That may well be but, as I have several times said, it is possible to scale the dynamic range of the sensor to fit the narrower dynamic range of the ADC.

it's 'possible', but as you've repeatedly been told, cameras don't work that way.
#288
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote:

There is no reason why the DR of the sensor should not be compressed to make it fit within the limits of the ADC.

Already explained to you: compression does not improve DR without consequences in quality elsewhere.

Well, if you don't compress it, you have to chop off one or both ends.

could it be you're starting to learn something?
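The "compress or chop" trade-off can be made concrete. A toy sketch, not how any real camera pipeline is implemented: mapping a sensor range of roughly 14.5 stops (~23,170 linear levels, an assumed figure) into 14-bit codes either by rescaling everything or by clipping the top:

```python
def encode_14bit(signal, sensor_max, compress=True):
    """Map a linear sensor value into a 14-bit code (0..16383).

    compress=True rescales the whole range (coarser steps per code);
    compress=False keeps 1:1 steps but clips anything above the ADC's reach.
    """
    adc_max = 2**14 - 1
    if compress:
        return round(signal * adc_max / sensor_max)
    return min(round(signal), adc_max)

sensor_max = 23170                             # ~14.5 stops of linear range
print(encode_14bit(20000, sensor_max))         # 14142 -- scaled to fit
print(encode_14bit(20000, sensor_max, False))  # 16383 -- highlight clipped
```

Scaling preserves both ends at the cost of per-code precision; clipping preserves precision at the cost of one end of the range, which is exactly the "consequences in quality elsewhere" being traded.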
#289
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote:

Alright then. Please explain to your readers how you set a lens to an EV of 20. For what ISO and speed?

No, no. No ISO or speed. The lens calibration is equivalent to stop settings according to nospam so it must be possible to set a lens to a particular EV. I picked 20 as an example.

by picking 20 (or any number), you demonstrate you don't understand it.

Now if I said, for example, f/11 you would understand it. But that is not an EV.

as i said, you don't understand it. here's a hint: what's the difference between f/8 and f/11?

You are fortunate. I happen to have a lens on my desk. I've just measured the difference is about 4mm.

good work! since we now know that 1ev = 4mm, it's a very simple calculation to set a lens to your desired ev 20: simply zoom the lens to 80mm, or alternately, choose a fixed focal length 80mm lens. math is fun.
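The underlying point is that EV describes an aperture-and-shutter *combination*, not a lens setting: at ISO 100, EV = log2(N²/t). A quick sketch (note marked f-numbers like f/11 are rounded values, so the computed gap from f/8 comes out at ~0.92 rather than exactly one stop):

```python
import math

def ev(f_number, shutter_s):
    """Exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(f_number**2 / shutter_s)

print(round(ev(8, 1/125)))                     # 13 -- f/8 at 1/125 s
print(round(ev(11, 1/125) - ev(8, 1/125), 2))  # 0.92 -- nominally "one stop"
```

This is why "set a lens to EV 20" is not a meaningful instruction: without a shutter time (and an ISO convention) an f-number alone does not determine an EV.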
#290
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote:

Google shows that a hell of a lot of work has been done on the spectral properties of anti-aliasing filters. You should write to all the authors and tell them they are wasting their time.

spectral properties aren't the issue.

Of course they are, unless the AA filter has none. The light which determines the RGB image has to pass through the filter and some must be lost on the way.

the only thing that's lost is *spatial* *detail* above nyquist (and a little below since this is the real world). below nyquist, there is no effect, and a grey card or other test target has essentially no detail, so it's *well* below nyquist. if they were shooting resolution test charts or scenes with fine detail, there would be a difference, as one would expect, but they're not, at least not in this test.

All of this is irrelevant. The point is no solid is perfectly transparent. Certainly the material of the AA filter will not be.

it's close enough that it doesn't make a difference below nyquist. need more straws, or have you run out? you claim that dxo is shooting grey cards in their testing, which does not have detail anywhere close to nyquist, therefore the presence or absence of an aa filter will have no effect whatsoever.

You are not even reading what I say! I said nothing of the sort but I have several times stated what it is that DxO do. Now you seem to be arguing without having read or understood what it is you are arguing about.

actually, i am reading what you say, which is well beyond ludicrous at this point.

Nowhere did I say they used a grey card. I compared their method to using a grey card.

close enough.

DxO have described their method in one of the URLs I have posted. It is now obvious you have not properly read or understood their description. Nor have you properly read and understood the sections of their descriptions which I have quoted. You have not even properly read or understood my paraphrasing of what DxO have written.

false.

As you so often have been accused, you have been arguing for the sake of arguing but without even paying proper attention to what you have been arguing about.

that describes you.
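The Nyquist limit being invoked is set purely by pixel pitch: a sampled sensor can only represent detail up to one line pair per two pixels, and an AA filter only attenuates detail near that limit. A sketch using an approximate ~5.9 µm pitch for a 24 MP full-frame sensor (an assumed round figure, not a measured one):

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """Sensor Nyquist limit in line pairs per mm: one line pair per 2 pixels."""
    return 1000.0 / (2 * pixel_pitch_um)

# Detail finer than this aliases; an AA filter works only up near this limit,
# so a near-uniform test patch is unaffected by its presence or absence.
print(round(nyquist_lp_per_mm(5.9)))  # 85
```

A flat grey patch has essentially zero spatial frequency content, orders of magnitude below ~85 lp/mm, which is the substance of the "below nyquist" argument above.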