#291
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote: Wow! You could pass Photography 101, chapter 3 (Basics of exposure). CONGRATS! Now you have followed me that far, you may be interested to see https://upload.wikimedia.org/wikiped...ina_Ib_with_EV.jpg Notice how the bottom of the shutter speed ring has a pointer with which you can set the EV. That enables the aperture ring (dimly seen behind the pointer) to be moved in synchronism with the shutter speed ring so as to keep the preset EV constant. This is an early pre-prescient camera. According to nospam modern cameras don't need the EV to be set. The aperture ring knows the EV all on its own. You're being willingly and deliberately obtuse. No, I am attempting to use language with precision. I find it essential in technical matters. except you aren't understanding what any of it *means*.
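The EV coupling described for the Retina can be sketched numerically. At ISO 100, EV = log2(N²/t); a coupled ring moves the aperture and shutter rings together so that this value stays fixed. The aperture/shutter pairs below are illustrative, not measurements of any particular camera.

```python
import math

def ev(f_number, shutter_s):
    """Exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# Moving the coupled rings trades aperture for shutter speed
# while N^2 / t (and hence the EV) stays essentially constant.
for n, t in [(5.6, 1 / 125), (8, 1 / 60), (11, 1 / 30)]:
    print(f"f/{n} at 1/{round(1 / t)}s -> EV {ev(n, t):.1f}")
```

All three pairs land within about a tenth of a stop of each other, which is what a mechanically coupled EV ring enforces.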
#292
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote: How, specifically, are they bypassing the raw file to get the data? By measuring not the data in the file but the range of brightness that the camera can capture from their test set up. And where _specifically_ are they getting that data? What is the probe point? What is the probe? Read the URL. They use multiple light sources, each of a different calibrated illuminance. It's rather like photographing a gray-scale wedge. Oh. Thanks. Now it's CLEAR TO ME YOU HAVE NO CLUE. My question related to how they measure the brightness at the camera. And you're not replying with an answer to that. At all. The object is the measurement of the ability of the camera to detect both low light and bright light. To do this they get the camera to photograph a target containing multiple light sources covering a range of luminances. Some are too bright and others too dark for the camera to properly capture. It is the difference between these which determines the dynamic range of the sensor. note that they evaluate the DR in terms of an RGB composite analysis, the details of which I am not aware. With this technique there is no need to measure the brightness at the camera. IOW: You have no clue. I wish you wouldn't insist to that effect. I know and understand what is going on. nothing you've said so far indicates that you do. My problem is trying to explain it without the use of diagrams. draw them, upload to imgur and link it (using ). Here's a clue. Where they are getting the "brightness" information at the camera is from the camera image file. Period. Why? Because there is no means for them to sample the signal between the sensor and the ADC. Why? Because that would require a high end clean room and data that is proprietary to the camera sensor maker. The ONLY data they can get is from the file and the data in there is from a 14 bit ADC. There is no more information in there than what 14 bits can hold.
There may indeed be a mysterious compression as you claim, but that has its own image quality consequences. This has been pointed out to you several times but you seem determined to want to believe the unbelievable ... As you say, they (probably) have to take it from an image file. Although they could tether the camera. that won't change anything. statements such as that add to the ever growing pile of evidence that you don't understand this.
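Whichever side of the probe-point argument one takes, the dynamic range figure itself reduces to a single log: the ratio between the brightest luminance the camera holds before saturating and the dimmest it resolves above the noise floor (SNR ≥ 0 dB). A toy calculation with made-up luminance values:

```python
import math

def dynamic_range_stops(saturation, noise_floor):
    """DR in EV (stops): log2(brightest capturable / dimmest with SNR >= 0 dB)."""
    return math.log2(saturation / noise_floor)

# Hypothetical readings off a stepped-light-source target: the brightest
# patch the camera holds vs. the dimmest patch it resolves above noise.
print(f"{dynamic_range_stops(20000, 1.5):.1f} stops")
```

The units cancel in the ratio, which is why the test only needs calibrated *relative* luminances across the target, not an absolute brightness probe at the camera.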
#293
Finally got to the point where no new camera holds my interest (waiting for specific offering)
In article , Eric Stevens
wrote: The physical truth of the matter is that deep down at the shreds lies noise. Usually a lot more noise than signal. Yep. An interesting point: according to DxO, when the test uses a paper target, some of the noise may actually be the texture of the paper of the target. that would be a flaw in their testing. between these that determines the dynamic range of the sensor. The fact that the DR is scaled to 14 bits is of secondary consideration. If 14 bits is all that matters why go to all the trouble and expense of developing high DR sensors? Let's have a cheap sensor and hang it on a 14 bit ADC. As I pointed out several times, engineers will usually "right size" the ADC to the sensor if maximum signal performance is desired. So if they put in a 14 bit ADC, there is likely less than 14 bits of honest-to-goodness signal. IOW: You're pedaling hard to fit 7 pounds of **** into a 5 pound bag. True, but fortunately DR is compressible. so is ****.
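The "14 bits" ceiling being argued over can be made concrete: a linear ADC with n bits spans 2^n codes, so the ratio of its largest code to its smallest nonzero code — the ADC's own dynamic range, before any nonlinear mapping is assumed — is just under n stops. A quick check:

```python
import math

def linear_adc_stops(bits):
    """Dynamic range of an ideal linear ADC in stops: log2(max code / 1)."""
    return math.log2(2 ** bits - 1)

print(f"{linear_adc_stops(14):.2f}")  # just under 14 stops
```

This is the sense in which a linear 14-bit recording cannot represent more than ~14 stops; whether the upstream signal is remapped before encoding is the separate question the thread is fighting about.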
#294
Finally got to the point where no new camera holds my interest (waiting for specific offering)
On 1/12/2019 11:10 PM, nospam wrote:
In article , Eric Stevens wrote: All right then. Please explain to your readers how you set a lens to an EV of 20. For what ISO and speed? No, no. No ISO or speed. The lens calibration is equivalent to stop settings according to nospam so it must be possible to set a lens to a particular EV. I picked 20 as an example. by picking 20 (or any number), you demonstrate you don't understand it. Now if I said, for example, f/11 you would understand it. But that is not an EV. as i said, you don't understand it. here's a hint: what's the difference between f/8 and f/11 ? You are fortunate. I happen to have a lens on my desk. I've just measured the difference is about 4mm. good work! since we now know that 1ev = 4mm, it's a very simple calculation to set a lens to your desired ev 20: simply zoom the lens to 80mm, or alternately, choose a fixed focal length 80mm lens. math is fun. :-) !!!! -- L... RC --
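For the record, the "difference between f/8 and f/11" being fished for is one stop, not millimetres: the exposure difference between two f-numbers is 2·log2(N2/N1) stops, and the nominal series (5.6, 8, 11, 16, ...) is rounded powers of √2.

```python
import math

def stops_between(n1, n2):
    """Exposure difference in stops between two f-numbers: 2 * log2(n2 / n1)."""
    return 2 * math.log2(n2 / n1)

print(f"{stops_between(8, 11):.2f}")    # ~0.92: one nominal stop (11 ~= 8 * sqrt(2))
print(f"{stops_between(5.6, 11):.2f}")  # ~1.95: two nominal stops
```

The slight shortfall from exactly 1.00 is only because the marked numbers are rounded; the marked click-stops are treated as exact whole stops in practice.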
#295
Finally got to the point where no new camera holds my interest (waiting for specific offering)
On Sat, 12 Jan 2019 23:10:19 -0500, nospam
wrote: In article , Eric Stevens wrote: But the digital DR of the output of the ADC is not the same as the analog DR of the sensor. Nor is there any reason why it should be. nobody said it was, however, it's always going to be limited by the adc. The recorded output of the ADC is limited by the capabilities of the ADC. But these have no effect on the capabilities of the sensor. again, nobody said the adc would alter the sensor's capability. You have been strenuously arguing that it will limit it. as has everyone else, because it does. A meat grinder will alter a sheep. But before the grinder the sheep will remain a sheep and after the grinder the sheep will be whatever the grinder produces. If you are going to determine the height of a sheep would you prefer to do it before or after the grinder? it's entirely irrelevant, in every possible way. it's also not correct. the sheep ceases to be a sheep well before it reaches the meat grinder. Oh no. In my example the sheep are thrown straight into the meat grinder. In any case: it's not irrelevant. You will argue that this is irrelevant, but it's not. it is. very much so. It's an analogy. The place to measure the DR of a sensor is at the sensor and not at the output of an ADC. DxO's method measures the DR at the sensor. only for those designing cameras and sensors, and it doesn't. for *everyone* else (including you, since you don't do either of those), the place to measure dynamic range is from *the* *images*, which means after the adc + isp + anything else in the image path. Well DxO have explained what they are doing (quite clearly I would have thought) and they are measuring it at the sensor. You should write to them and explain they have got it wrong. No ordinary photographer is interested in the DR of the sensor. The sensor will always do whatever it can do and its DR can be scaled up or down to fit the output bandwidth of the ADC.
It is because of the scaling that you can have the output of a sensor with a 14.8 stop DR scaled down to suit a 14 bit ADC. It's not a big deal. except that it's *not* scaled. Of course it is. In any sensible design the DR of the input to the ADC will be designed to accept the DR of the sensor's output. actually, the adc will be designed to match that of the sensor's capabilities. I'm glad we have agreed on that. So now we come to transforming the number of electrons in each light well into a binary representation using 14 bits. I have called this process scaling. I would like to know what you call it. it would be a waste of money to have one greatly outperform the other. imaginary cameras do not count. I don't know where you keep finding these imaginary cameras. from your endless supply. If the sensor can discriminate between luminance levels from 'c' to 'q' it will always retain that ability irrespective of the capabilities of the ADC. How the ADC encodes it is another matter, and how that image is decoded by the RAW decoder is another matter again. There is enormous scope for fiddling and adjustments. except that no fiddling or adjustments are being done. You would know that if you worked for the right section of Nikon. Or perhaps you have reverse engineered a D800? You will also have to know what goes on in the Nikon RAW decoder. So I don't actually believe you know. there is no need to work for nikon to know that the sensor directly feeds the adc. your own links even confirmed that in a block diagram. Is there any argument about that? sure is. read your own posts, where you have this mysterious interim processing going on. ????? if this was not the case, the 'fiddling' would be well known since the camera would perform differently than previous cameras, ... But they do, they do no they don't. they perform as expected. Better and better with each new model in a series. Are you claiming this is not behaving differently?
there are no breakthroughs other than computationally, such as google night sight, which has nothing to do with the sensor technology or adc. ... and it would likely be marketed as a benefit (e.g., 'new hdr sensor'), and hotly argued because the camera is 'not pure' or some such. It is merely your assumption that every individual design advance would be trumpeted by the marketing department. of course they will. it's a competitive advantage over the other products. There are probably hundreds of improvements in each new design. You can't expect them to extol them all. Besides, there are some they may not want to bring to the attention of the opposition. and if it really *is* the sensor they're measuring, then it should be the *same* for the *same* sensor, and it is not. Not when you shove another piece of glass in front of one of the sensors. no effect on dynamic range. So you keep saying. That doesn't make it true. it's true because it is true. No glass (or in this case the material of the AA filter) offers 100% transmissibility to the electromagnetic spectrum. You continuing to argue otherwise is plain silly. you're really grasping at straws now. nobody said 100%. You implied 100% when you denied that the AA filter would have any effect on the light falling on the sensor. it's close enough to 100% below nyquist that it makes absolutely no difference whatsoever. do you think a uv filter affects dynamic range? Probably, but whether or not it is significant is another matter. Nor when you realise that not all sensors will be identical and all measurements are subject to errors. especially when the methodology is itself an error. How can it be an error when they make clear what they are testing and how? when their numbers defy physics. But they don't. but they do. You clearly fail that question. Go back and consider the sheep. In spite of multiple lengthy explanations, the understanding of how it is that sensor DRs can exceed 14 stops continues to elude you.
Such a failure on your part would be less understandable if you had not given evidence that you have neither properly read nor understood the explanations put before you. you have not read and certainly not understood the numerous explanations put before you. Mere repetition of belief does not make for numerous explanations. if they're supposedly measuring the sensor's dynamic range, explain why the nikon d50 & d70 differ by a half-stop, both of which used the same popular 6mp sony sensor (as did pentax). other results also differ. I have no way of knowing but the first thing I would suspect is the circuitry between the sensor and the ADC. then you'd be wrong. there is nothing between the sensor and adc, in those two or any other camera under discussion. Do you know whether the ADC is pipelined, or perhaps Nikon use one ADC per column of pixels? In any case, do you know whether the voltage divider resistors all have the same value? That's the sort of thing which Nikon's competitors would like to know. For that matter, is the ADC the same in each camera under discussion? I'm afraid I don't share your confidence in your certainty. none of that is in any way relevant. It is to the illumination of your knowledge of the workings of Nikon digital cameras. that's irrelevant. So you are happy to make authoritative statements in complete ignorance of the relevant facts. you're just spewing buzzwords hoping to fool people. the d50 & d70 are basically the same camera, with minor feature differences, such as the d70 having two control wheels versus one, compact flash versus sd card, slightly faster frame rate, wired remote option, flash commander mode and some minor other things i don't remember, none of which have *any* effect on the dynamic range.
If you examine first https://imaging.nikon.com/lineup/dslr/d50/index.htm and then https://imaging.nikon.com/lineup/dslr/d70/ you will see that the D70 boasts of: "New Nikon DX Format CCD image sensor for 3,008 x 2,000-pixel images New advanced digital image processor to optimize image quality, control auto white balance, auto tone and color control" that's just marketing babble. Actually you could be right. I see the D70 preceded the D50 by some 16 months and what was new for the D70 was old hat for the D50. yep. But, nevertheless, I would be surprised if 16 months of development did not result in the slight improvement of the sensor DR range shown by the D50. except that your claim is that dxo measures the sensor, not the camera, so if both have the same sensor, the dynamic range would be the same. at least try to be consistent. But 16 months later, do they both have the same sensor? also, the d50 and d70s came out at the same time, so any supposed improvement would be in *both* cameras, yet there's a difference. https://www.dxomark.com/Cameras/Nikon/D50 https://www.dxomark.com/Cameras/Nikon/D70s The dates of release are listed in https://en.wikipedia.org/wiki/Nikon#...ompact_cameras In any case, arguing with you is leaving a foul taste in my mouth. This is more than enough. -- Regards, Eric Stevens
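Whether any remapping actually happens inside a real Nikon is exactly what is in dispute in this post; purely as an illustration of what Eric calls "scaling", a log-style companding curve can fold more than 14 stops of linear range into 14-bit codes, at the cost of coarser steps near saturation. Every number here (full well, curve shape) is invented for the sketch, not taken from any camera.

```python
import math

def compand(signal, full_well=28000, bits=14):
    """Map a linear signal onto an n-bit code via a log curve (illustrative only).

    Folds more than `bits` stops of input range into 2**bits codes,
    trading fine highlight steps for extra range -- the trade-off
    ('one form of noise for another') discussed in this thread.
    """
    max_code = 2 ** bits - 1
    return round(max_code * math.log2(1 + signal) / math.log2(1 + full_well))

# Zero maps to code 0, full well to the top code, and every
# intermediate level lands somewhere inside the 14-bit range.
print(compand(0), compand(1), compand(28000))
```

nospam's counterpoint stands independently of the sketch: if no such curve is applied and the encoding is linear, the ~14-stop ADC ceiling applies directly.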
#296
Finally got to the point where no new camera holds my interest (waiting for specific offering)
On Sat, 12 Jan 2019 23:10:24 -0500, nospam
wrote: In article , Eric Stevens wrote: Google shows that a hell of a lot of work has been done on the spectral properties of anti-aliasing filters. You should write to all the authors and tell them they are wasting their time. spectral properties aren't the issue. Of course they are, unless the AA filter has none. The light which determines the RGB image has to pass through the filter and some must be lost on the way. the only thing that's lost is *spatial* *detail* above nyquist (and a little below since this is the real world). below nyquist, there is no effect, and a grey card or other test target has essentially no detail, so it's *well* below nyquist. if they were shooting resolution test charts or scenes with fine detail, there would be a difference, as one would expect, but they're not, at least not in this test. All of this is irrelevant. The point is no solid is perfectly transparent. Certainly the material of the AA filter will not be. it's close enough that it doesn't make a difference below nyquist. So you are now arguing that the AA filter will pass all frequencies of the electromagnetic spectrum with equal facility? need more straws, or have you run out? you claim that dxo is shooting grey cards in their testing, which does not have detail anywhere close to nyquist, therefore the presence or absence of an aa filter will have no effect whatsoever. You are not even reading what I say! I said nothing of the sort but I have several times stated what it is that DxO do. Now you seem to be arguing without having read or understood what it is you are arguing about. actually, i am reading what you say, which is well beyond ludicrous at this point. Nowhere did I say they used a grey card. I compared their method to using a grey card. close enough "In other words we must take language seriously. Imprecise language discloses the lack of precision of thought." -- DxO have described their method in one of the URLs I have posted.
It is now obvious you have not properly read or understood their description. Nor have you properly read and understood the sections of their descriptions which I have quoted. You have not even properly read or understood my paraphrasing of what DxO have written. false. The fact that you claim that I said DxO have been shooting grey cards is more than enough evidence to show that it is not false. As you so often have been accused, you have been arguing for the sake of arguing but without even paying proper attention to what you have been arguing about. that describes you. -- Regards, Eric Stevens |
#297
Finally got to the point where no new camera holds my interest (waiting for specific offering)
On Sat, 12 Jan 2019 23:10:16 -0500, nospam
wrote: In article , Eric Stevens wrote: The fstop is [Image distance]/lens aperture. also wrong. f/stop = focal length/aperture. Aha! Your correction of me is an approximation. it's not in any way an approximation. See https://en.wikipedia.org/wiki/F-number from that link, The f-number of an optical system (such as a camera lens) is the ratio of the system's focal length to the diameter of the entrance pupil. exactly what i said it is. "... as one focuses closer, the lens' effective aperture becomes smaller, making the exposure darker. The working f-number is often described in photography as the f-number corrected for lens extensions by a bellows factor. This is of particular importance in macro photography". selective snipping. that's *not* cool. Do you think I should not have snipped the several thousand irrelevant words, tables and images between the two parts I quoted? At least I marked that I snipped. your snipping altered its meaning. The several thousand words, tables and images between the two parts I quoted contained a great deal of irrelevant meaning. That's why I snipped them. that quote is from the working f-number section, https://en.wikipedia.org/wiki/F-number#Working_f-number For what it is worth, I have never heard of the term 'working f-number'. I have always known of it as 'effective f-number'. which doesn't change anything. the takeaway is that different terms can mean the same thing. Or in your hands the same term can mean different things. --- tedious quibbling snipped --- In other words we must take language seriously. Imprecise language discloses the lack of precision of thought." -- |
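The "working" f-number (Eric's "effective" f-number) from the quoted Wikipedia section has a simple closed form for a thin symmetric lens: N_w = (1 + m)·N, where m is the magnification. The bellows factor falls straight out of it:

```python
def working_f_number(f_number, magnification):
    """Bellows-corrected f-number for a simple symmetric lens: N_w = (1 + m) * N."""
    return (1 + magnification) * f_number

# At 1:1 macro (m = 1) a lens set to f/8 behaves like f/16 -- two stops
# darker, which is why close focusing dims the exposure. At infinity
# focus (m = 0) the working f-number equals the marked f-number.
print(working_f_number(8, 1.0))
print(working_f_number(8, 0.0))
```

This reconciles the two statements in the post: f/stop = focal length / entrance pupil is the marked value, and the focus-distance correction only matters as magnification becomes significant.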
#298
Finally got to the point where no new camera holds my interest (waiting for specific offering)
On Sat, 12 Jan 2019 23:02:46 -0500, Ron C wrote:
On 1/12/2019 10:39 PM, Eric Stevens wrote: On Sat, 12 Jan 2019 09:37:37 -0500, Alan Browne wrote: On 2019-01-11 20:48, Eric Stevens wrote: On Fri, 11 Jan 2019 12:47:20 -0500, Alan Browne wrote: On 2019-01-10 23:38, Eric Stevens wrote: On Thu, 10 Jan 2019 07:43:24 -0500, Alan Browne wrote: On 2019-01-10 04:12, Eric Stevens wrote: On Wed, 9 Jan 2019 08:36:24 -0500, Alan Browne wrote: It's amazing what Google can produce. This is DxO's own account of the situation at: https://www.dxomark.com/dxomark-came...ol-and-scores/ "Dynamic range corresponds to the ratio between the highest brightness a camera can capture (saturation) and the lowest brightness it can capture (which is typically when noise becomes more important than the signal — that is, a signal-to-noise ratio below 0 dB). A value of 12 EV is excellent, with differences below 0.5 EV usually not noticeable. Dynamic range is an open scale." This appears to confirm that the situation is as I deduced: they are not testing the dynamic range as recorded in a raw file. They are testing the range that a camera can capture. i.e. it is the dynamic range of the sensor. It is not the dynamic range of the raw file. It doesn't actually say that, however. How, specifically, are they bypassing the raw file to get the data? By measuring not the data in the file but the range of brightness that the camera can capture from their test set up. And where _specifically_ are they getting that data? What is the probe point? What is the probe? Read the URL. They use multiple light sources, each of a different calibrated illuminance. It's rather like photographing a gray-scale wedge. Oh. Thanks. Now it's CLEAR TO ME YOU HAVE NO CLUE. My question related to how they measure the brightness at the camera. And you're not replying with an answer to that. At all. The object is the measurement of the ability of the camera to detect both low light and bright light.
To do this they get the camera to photograph a target containing multiple light sources covering a range of luminances. Some are too bright and others too dark for the camera to properly capture. It is the difference between these which determines the dynamic range of the sensor. note that they evaluate the DR in terms of an RGB composite analysis, the details of which I am not aware. With this technique there is no need to measure the brightness at the camera. IOW: You have no clue. I wish you wouldn't insist to that effect. I know and understand what is going on. My problem is trying to explain it without the use of diagrams. ...snip... Nothing stopping you from posting links with your diagrams. I'm happy to do that if it becomes necessary. But first I have to create the diagrams. -- Regards, Eric Stevens
#299
Finally got to the point where no new camera holds my interest (waiting for specific offering)
On 2019-01-12 20:18, Eric Stevens wrote:
On Sat, 12 Jan 2019 09:26:28 -0500, Alan Browne wrote: On 2019-01-11 18:20, Eric Stevens wrote: On Fri, 11 Jan 2019 12:51:28 -0500, Alan Browne wrote: On 2019-01-11 10:28, nospam wrote: In article , Eric Stevens wrote: The problem is clearly DXO's testing methods. No matter how you look at this, you have to be able to imagine all kinds of sources of inaccurate measurements, especially if they are slight. I have to agree with nospam and Alan. You can't get DR outside of the limits of the ADC because that is the output you see, but you can certainly get test results outside of that limit. But the digital DR of the output of the ADC is not the same as the analog DR of the sensor. Nor is there any reason why it should be. nobody said it was, however, it's always going to be limited by the adc. Got that Eric? What is 'it'? The DR of the sensor or the DR of the output of the ADC? Obvious. The ADC is the limiting factor. Always. There is NO WAY for DxO to probe the sensor directly (and it would be meaningless to everyone even if they could...) The ADC won't be the limiting factor if it has a better dynamic range than the sensor. The limiting factor is most likely the sensor and the engineers select the ADC appropriately. They could have sampled at 16 bits - but that would just be sampling more noise in the bottom 2+ bits. I suppose if you needed an entropy source that's as good as any if you use a couple dozen sensor sites ... I think at this point I should stop and ask you whereabouts in the pipeline the sensor DR range should be measured. As perhaps an extreme example, do you think it should be measured by what is output to a memory card? ... or should it be closer to the sensor? If so, where (and how)? Where delivered to the end user: the stored image file (raw). That is the only thing that counts and is useful to the user of the product. Nothing else matters. 
And if, as you propose, some compression has occurred between the ADC and the stored value, then the hocus pocus has simply exchanged 1 form of noise for another. To the engineer deciding on what bit depth to use, intimate knowledge about the sensors would have him (in the design/systems engineering phase) do tests on the sensors to determine its statistics and appropriately decide on the bit depth. -- "2/3 of Donald Trump's wives were immigrants. Proof that we need immigrants to do jobs that most Americans wouldn't do." - unknown protester |
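Alan's point that sampling at 16 bits "would just be sampling more noise in the bottom 2+ bits" can be checked with a small simulation. The sensor figures below (full well, read noise) are invented for illustration; the effect they demonstrate is general: once read noise dwarfs the quantization step, extra ADC bits add nothing.

```python
import random
import statistics

random.seed(1)

FULL_WELL = 16000   # hypothetical saturation level, in electrons
READ_NOISE = 4.0    # hypothetical read noise, in electrons RMS

def quantize(x, bits):
    """Ideal linear ADC: round onto 2**bits codes spanning the full well."""
    step = FULL_WELL / (2 ** bits)
    return round(x / step) * step

# Many reads of the same dim patch; compare total noise after quantization.
reads = [random.gauss(100.0, READ_NOISE) for _ in range(20000)]
for bits in (12, 14, 16):
    noise = statistics.stdev(quantize(r, bits) for r in reads)
    print(bits, round(noise, 2))
```

At 12 bits the coarse step visibly inflates the total noise; between 14 and 16 bits the result is essentially identical, because the 4-electron read noise already swamps the sub-electron quantization step. That is the "right sizing" of the ADC to the sensor discussed upthread.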
#300
Finally got to the point where no new camera holds my interest (waiting for specific offering)
On 2019-01-12 21:15, Eric Stevens wrote:
On Sat, 12 Jan 2019 09:25:23 -0500, Alan Browne wrote: On 2019-01-11 18:18, Eric Stevens wrote: The recorded output of the ADC is limited by the capabilities of the ADC. But these have no effect on the capabilities of the sensor. If there is no way to encode the information, then that is the mootest of moot points. That may well be but, as I have several times said, it is possible to scale the dynamic range of the sensor to fit the narrower dynamic range of the ADC. To which I've replied numerous times. In a nutshell, you're trading one form of noise for another. -- "2/3 of Donald Trump's wives were immigrants. Proof that we need immigrants to do jobs that most Americans wouldn't do." - unknown protester