#241
Lightroom vs. Aperture Curves
In article , Floyd L. Davidson wrote:

That is because highlights are usually clipped, or near to being clipped, simply because nature has a higher dynamic range than the camera can record.

Which of course is irrelevant to the topic being discussed.

Probably is, for you. The rest of the world is trying to understand how to make good photographs, not how to beat their chest on Usenet.

A good description of you, who pretends to know about apps he's never used and then tries to tell everyone else they're wrong, including those who use the apps quite frequently.
#242
Lightroom vs. Aperture Curves
On Wed, 13 Aug 2014 18:11:24 -0800, (Floyd L. Davidson) wrote:

Eric Stevens wrote:

On 13 Aug 2014 09:18:11 GMT, Sandman wrote:

It would, if the image was taken with RAW. Software works with 14 bits of data in an 8 bit workspace (your monitor colorspace) and, if the software supports it, it can access that data and bring it up/down into 8 bit.

Providing that the colour space does not change, going from 14 bit to 8 bit (and vice versa) does not change the depths of the dark shadows or the brightness of the highlights which have been seen by the camera. With 14 bit data you can chop up the full dynamic range of (say) red into 2^14 = 16,384 divisions. If you instead convert it to 8 bit data you can chop up the full dynamic range into 2^8 = 256 divisions. 14 bit gives you much smoother colour and luminance transitions. It does not extend the dynamic range of the camera.

Except that it does. A 14 bit linear file can encode a maximum dynamic range of (6.02n + 1.76) dB, where n is the number of bits per sample. Hence the maximum dynamic range that can be recorded by a 14 bit depth RAW file is 86 dB, or 14.3 f-stops. Compare that to an 8 bit gamma corrected data set that can encode about 9.1 f-stops. It's more than 8 because of the gamma correction. The significance is the reason we have 14 bit depth RAW files today instead of the 10 bit files from the 1990's. And we will soon enough be working with 16 bit depth RAW files, specifically because the dynamic range of sensors is getting larger.

At the beginning of your response you said "Except that it does" and then you go on to say "can". When you say something 'can' it doesn't mean that it actually 'does'. My understanding is that few (if any) cameras encode more image detail at 14 bits than they do at 12 or even 10. Nor did I recognise the equation in your first paragraph, so I went hunting.
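As a quick sanity check on the numbers above, the (6.02n + 1.76) dB rule can be evaluated directly. This is only an illustrative sketch of the formula as quoted, taking 6.02 dB (i.e. 20*log10(2)) as one photographic stop:

```python
# Maximum dynamic range of an n-bit linear encoding, per the
# (6.02n + 1.76) dB rule quoted above. One stop is a factor of 2
# in signal, i.e. 20*log10(2) ~= 6.02 dB.

def max_dynamic_range(bits):
    db = 6.02 * bits + 1.76
    stops = db / 6.02
    return db, stops

for bits in (8, 10, 12, 14, 16):
    db, stops = max_dynamic_range(bits)
    print(f"{bits:2d}-bit: {db:6.2f} dB ~= {stops:5.2f} stops")
```

For 14 bits this gives 86.0 dB, about 14.3 stops, matching the figures quoted; note this is the ceiling of the encoding, not what any actual sensor delivers.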
I found Cambridge in Colour, which at http://www.cambridgeincolour.com/tut...amic-range.htm said:

"As an example, 10-bits of tonal precision translates into a possible brightness range of 0-1023 (since 2^10 = 1024 levels). Assuming that each A/D converter number is proportional to actual image brightness (meaning twice the pixel value represents twice the brightness), 10-bits of precision can only encode a contrast ratio of 1024:1. Most digital cameras use a 10 to 14-bit A/D converter, and so their theoretical maximum dynamic range is 10-14 stops. However, this high bit depth only helps minimize image posterization since total dynamic range is usually limited by noise levels. Similar to how a high bit depth image does not necessarily mean that image contains more colors, if a digital camera has a high precision A/D converter it does not necessarily mean it can record a greater dynamic range. In effect, dynamic range can be thought of as the height of a staircase whereas bit depth can be thought of as the number of steps. In practice, the dynamic range of a digital camera does not even approach the A/D converter's theoretical maximum; 8-12 stops is generally all one can expect from the camera."

Also http://theory.uchicago.edu/~ejm/pix/.../noise-p3.html:

"Both Canon and Nikon have introduced a finer level quantization of the sensor signal in digitizing and recording the raw data, passing from 12-bit tonal gradation in older models to 14-bit tonal depth in newer models. A priori, one might expect this transition to bring an improvement in image quality -- after all, doesn't 14-bit data have over four times the levels (16384) compared to 12-bit data (4096)? It would seem obvious that 14-bit tonal depth would allow for smoother tonal transitions, and perhaps less possibility of posterization. Well, those expectations are unmet, and the culprit is noise."
--- and later ---

"Quantizing the signal from the sensor in steps much finer than the level of the noise is thus superfluous and wasteful; quantizing the noise in steps much coarser than the level of the noise risks posterization. As long as the noise exceeds the quantization step, the difference between the coarser and finer quantization is imperceptible. As long as noise continues to exceed the quantization step in post-processing, it doesn't matter how one edits the image after the fact, since any squeezing/stretching of the levels also does the same to the noise, which will always be larger than the level spacing no matter how it is squeezed or stretched. On the other hand, quantizing the signal in steps coarser than the noise can lead to posterization. Ideally, the noise should slightly exceed the quantization step, in order that roundoff errors introduced by quantization are negligible, and that no bits are wasted in digitizing the noise."

--- and later again ---

"Curiously, most 14-bit cameras on the market (as of this writing) do not merit 14-bit recording. The noise is more than four levels in 14-bit units on the Nikon D3/D300, Canon 1D3/1Ds3 and 40D. The additional two bits are randomly fluctuating, since the levels are randomly fluctuating by +/- four levels or more. Twelve bits are perfectly adequate to record the image data without any loss of image quality, for any of these cameras (though the D3 comes quite close to warranting a 13th bit). A somewhat different technology is employed in Fuji cameras, whereby there are two sets of pixels of differing sensitivity. Each type of pixel has less than 12 bits of dynamic range, but the total range spanned from the top end of the less sensitive pixel to the bottom end of the more sensitive pixel is more than 13 stops, and so 14-bit recording is warranted. A qualification is in order here -- the Nikon D3 and D300 are both capable of recording in both 12-bit and 14-bit modes.
The method of recording 14-bit files on the D300 is substantively different from that for recording 12-bit files; in particular, the frame rate slows by a factor 3-4. Reading out the sensor more slowly allows it to be read more accurately, and so there may indeed be a perceptible improvement in D300 14-bit files over D300 12-bit files (specifically, less read noise, including pattern noise). That does not, however, mean that the data need be recorded at 14-bit tonal depth -- the improvement in image quality comes from the slower readout, and because the noise is still more than four 14-bit levels, the image could still be recorded in 12-bit tonal depth and be indistinguishable from the 14-bit data it was derived from."

Both sites are well worth reading in full. The point is that unless Sandman's camera has exceptionally low noise levels there almost certainly will be no useful information in the last few bits added to bring it up to 16 bit data.

http://discover.store.sony.com/sony-...h_imaging.html describes the BIONZ X processor used in Sandman's camera as receiving 14 bits from the sensor, processing it in 16 bits, and sending it on as a 14 bit raw. Presumably the 16 bit processing is to minimise the significance of truncation errors in the arithmetic. Basically it is 14 bit data in and 14 bit data out. Regrettably, it doesn't look as though there is any extra data to be recovered even if Sandman is somehow going to convert it to 16 bit.

It also looks as though what Floyd has written below is fine in theory but isn't what happens in practice. The limit remains the overall noise level. Nor, if the original image was overexposed, will changing exposure allow details to be recovered from burned out highlights. All it will do is raise or lower brightness between the two extremes of bright and dark.

There is a difference between burned out highlights in 8 bit space and 14 bit space.
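The article's point that noise dithers away the extra bits can be illustrated numerically. This is a sketch under assumed values (read noise of about four 14-bit levels, as quoted for the D3/D300 class): quantizing the same noisy signal with a 12-bit step and with a 14-bit step recovers the same mean, because the noise exceeds both steps.

```python
import random

random.seed(0)
step12 = 1.0 / 4096      # 12-bit quantization step (full scale = 1.0)
step14 = 1.0 / 16384     # 14-bit step, four times finer
sigma = 4 * step14       # assumed read noise of ~4 levels in 14-bit units

def quantize(x, step):
    return round(x / step) * step

true_signal = 0.5
samples = [true_signal + random.gauss(0, sigma) for _ in range(100_000)]
q12 = [quantize(s, step12) for s in samples]
q14 = [quantize(s, step14) for s in samples]

# The noise dithers the signal across quantization levels, so averaging
# recovers the same value at either precision.
mean12 = sum(q12) / len(q12)
mean14 = sum(q14) / len(q14)
print(f"mean at 12-bit: {mean12:.6f}")
print(f"mean at 14-bit: {mean14:.6f}")
```

Both means land on the true signal to well within the noise; the two extra bits are digitizing random fluctuation, as the article argues.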
In short:

|-------------- 14 bit range ------------|
     |------ 8 bit range ------|

It is possible to compress that 14 bit range into an 8 bit range and thus reveal data that was otherwise invisible to you via your monitor.

I am afraid that in your example above the 8 bit range and the 14 bit range are identical. It's whatever is built into the sensor. It's just that the 14 bit chops it up more finely.

No, but that doesn't make Sandman right either! When compression is not used, the comparison looks more like this:

Dark                                     Light
|-------------- 14 bit range ------------|
     |------ 8 bit range ------|
         |---- Displayed ------|
                               |-- blocked -|
     |-- Recoverable --|

The blocked part cannot be recovered from the 8 bit range. What can be "recovered" from the 8 bit range is the part that is too dark to show up on a display (print or monitor). And while what is shown has nothing in the highlights that can be recovered, sometimes there is. But it is usually a very slight amount of information.

For example, most prints and monitors wash out any 8 bit values greater than about 1/2 stop down from pure white. In most editors that means any value above about 245 is pure white and has no detail. If the brightness is reduced to where the brightest value is at 245, about 1/2 an f-stop of detail is "recovered".

It is also possible to compress the 14 bit dynamic range into an 8 bit dynamic range. The compression will not look right, but detail from the original range will exist rather than be just clipped.

--
Regards, Eric Stevens
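The two options in the diagram can be sketched in code. This is a toy illustration, not any editor's actual pipeline: an 8-bit "window" onto 14-bit data clips whatever falls outside it, while scaling the whole 14-bit range into 8 bits keeps the highlight detail at the cost of coarser steps.

```python
MAX14 = 2**14 - 1   # 16383
MAX8 = 2**8 - 1     # 255

def window_to_8bit(v14, offset=0):
    """Show a 256-level window of the 14-bit data; everything
    outside the window clips to black or white ("blocked")."""
    return max(0, min(MAX8, v14 - offset))

def compress_to_8bit(v14):
    """Scale the full 14-bit range into 8 bits; nothing clips, but
    64 adjacent 14-bit levels collapse into each 8-bit level."""
    return round(v14 * MAX8 / MAX14)

highlight = 16000                   # a value far above the 8-bit window
print(window_to_8bit(highlight))    # clips to 255: detail is blocked
print(compress_to_8bit(highlight))  # survives as a near-white grey
```

The compressed version keeps the highlight separation that the straight window throws away, which is exactly why the result "will not look right" yet still holds recoverable detail.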
#243
Lightroom vs. Aperture Curves
In article , Eric Stevens wrote:

Also http://theory.uchicago.edu/~ejm/pix/.../noise-p3.html ...

[quotation from the article snipped -- it appears in full in #242]

Both sites are well worth reading in full.
The point is that unless Sandman's camera has exceptionally low noise levels there almost certainly will be no useful information in the last few bits added to bring it up to 16 bit data.

there is definitely a benefit from 14 bit, but it will only be realized if the scene has dynamic range wide enough where 12 bit would be a limiting factor. if the subject has only 9 stops, then even 12 bit is overkill.

http://www.clarkvision.com/imagedeta...formance.summary/dynamic_range_a.gif

http://discover.store.sony.com/sony-...C/tech_imaging.html describes the BIONZ X processor used in Sandman's camera as receiving 14 bits from the sensor, processing it in 16 bits, and sending it on as a 14 bit raw. Presumably the 16 bit processing is to minimise the significance of truncation errors in the arithmetic. Basically it is 14 bit data in and 14 bit data out. Regrettably, it doesn't look as though there is any extra data to be recovered even if Sandman is somehow going to convert it to 16 bit.

16 bits is because that's the way computers work. the data is still 14 bit.

It also looks as though what Floyd has written below is fine in theory but isn't what happens in practice. The limit remains the overall noise level.

a lot of what he writes isn't what happens in practice since he writes about stuff he doesn't actually use and gets a lot of things wrong.
#244
Lightroom vs. Aperture Curves
Sandman wrote:
In article , sid wrote:

Floyd L. Davidson: Call it increased intensity then.

Sandman: You can call it whatever you want, you won't get it anyway. Or better yet, why not give a detailed explanation of just what the exposure slider in Lightroom does, or Aperture.

It doesn't matter *what* the slider labelled "exposure" actually does in your favourite software, it most definitely is not adjusting exposure.

Which is just semantics. It does its best to simulate the effect of camera exposure.

but not really, your data from above says:

With the base values of rgb(0, 30, 250), these are the results:
True brightness +10: rgb(10, 40, 255)
New PS brightness +10: rgb(3, 36, 255)
Exposure +10: rgb(3, 40, 253)

which suggests 2 things: 1. the exposure slider has increased brightness, just not in a linear fashion; 2. the exposure slider doesn't emulate exposure.

It cannot create data that does not exist

No! Really?

(unless it's using RAW data extended dynamic range, but I don't think any does).

Now that's just babble

The point is that it is NOT adjusting image brightness.

Of course it is, you said so above. It may very well be adjusting the image in a very pleasing way for you, giving the impression that it's adjusting exposure, but it isn't.

As is oft said around here, "it does *way* more than that"

Indeed. And I don't care what they call it. It's not a brightness slider.

Sandman: I know, I know, you can't because you know nothing about either application so you're just here to add white noise about stuff you know nothing about.

Oh, as usual, then.

It has already been shown in this thread that the exposure slider in lightroom does not adjust the image in a uniform way; it treats the highlights, midtones and shadows differently.

Much like how exposure works in a camera.

Not really, it does something else that makes for a pleasing image. That's not how exposure works, is it?

It is, as close as it can anyway.
It's not as close as it can; it could do it exactly right if that was what was wanted, but it's not.

If I increase the exposure time on my camera the image is exposed for longer right across the frame, regardless of highlight or shadow in the scene.

Does that mean that an image would get brighter or darker depending on the length of exposure?

Changing the shutter speed of your camera will not make the image uniformly brighter or darker.

That does not answer the question I asked.

Your camera has a higher dynamic range than your monitor.

What does that have to do with the effects of image brightness in relation to exposure length?

It will differ from camera to camera.

I shouldn't think so. Opening the shutter for twice as long lets twice the light in regardless of camera model.

But if you expose longer, the brightness will be more pronounced in lighter parts than in darker parts of the scene. Here's a picture to show this: http://sandman.net/files/luminance_exposure.jpg As you can see, the lighter parts of the scene have higher brightness than the darker parts. Dark: ~10 difference. Midtones: ~20 difference. Light: ~35 difference.

What? You mean that if you double exposure length you get twice the light?

--
sid
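For what it's worth, the dark/midtone/highlight deltas quoted in that exchange (~10/~20/~35) are consistent with a uniform doubling of *linear* light viewed through gamma encoding. A sketch, assuming a pure power-law gamma of 2.2 (only an approximation of a real display transfer curve):

```python
GAMMA = 2.2

def decode(v8):        # 8-bit display value -> linear light in [0, 1]
    return (v8 / 255) ** GAMMA

def encode(linear):    # linear light -> 8-bit display value
    return round(255 * min(1.0, linear) ** (1 / GAMMA))

# Double the exposure (twice the linear light) for a dark, a mid and a
# light pixel: the light is doubled everywhere, yet the 8-bit deltas
# grow toward the highlights.
for v8 in (27, 54, 95):
    doubled = encode(2 * decode(v8))
    print(f"{v8:3d} -> {doubled:3d} (delta {doubled - v8})")
```

With these example pixel values the deltas come out as 10, 20 and 35: twice the light everywhere in linear terms, but bigger numeric jumps in the highlights of the gamma-encoded 8-bit image.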
#245
Lightroom vs. Aperture Curves
nospam wrote:
In article , sid wrote:

It has already been shown in this thread that the exposure slider in lightroom does not adjust the image in a uniform way; it treats the highlights, midtones and shadows differently. That's not how exposure works, is it?

http://www.sprawls.org/ppmi2/FILMCON/FILMCON03.jpg

Apart from the fact that that relates to film, not digital, it's not right anyway according to the data that Jonas furnished us with.

--
sid
#246
Lightroom vs. Aperture Curves
On Thu, 14 Aug 2014 07:25:37 -0700, Savageduck wrote:

On 2014-08-14 09:14:05 +0000, Eric Stevens said:

On Wed, 13 Aug 2014 22:38:29 -0700, Savageduck wrote:

Don't get back to me on this subject until you have done at least that. You say you prefer DxO, but you have yet to make the comparison. Make it and let me know what you have discovered.

I think I prefer DxO to ACR as a raw editor for Photoshop.

That much is clear. Now try LR5.

I intend to. I will when the time comes, but right now I want to do things that LR can't do.

--
Regards, Eric Stevens
#247
Lightroom vs. Aperture Curves
On Thu, 14 Aug 2014 15:45:49 -0400, nospam wrote:

In article 2014081322382923711-savageduck1@REMOVESPAMmecom, Savageduck wrote:

"Not in LR or in ACR, only when it is converted for use in another application such as Photoshop." It's converted every time you do anything with it. You put a lot of work into that speculative essay, but no. Now if you had put as much effort into actually using LR and ACR, you might be able to look at this from the perspective of a user.

that's the key. floyd and others are looking at it from the perspective of a software developer and arguing over details that make no difference to actually using the software.

But actually using the software is not the point. It's what the software can actually achieve which is currently what the argument is about. The actual buttons and levers available to the user are a secondary consideration.

Somehow I doubt that you have even taken the time to look at any of the LR tutorial videos.

he probably hasn't.

Don't get back to me on this subject until you have done at least that. You say you prefer DxO, but you have yet to make the comparison. Make it and let me know what you have discovered.

--
Regards, Eric Stevens
#248
Lightroom vs. Aperture Curves
On Thu, 14 Aug 2014 15:45:39 -0400, nospam wrote:

--- vast snip ---

It's converted every time you do anything with it.

it's not converted to an interim format. everything is rendered on the fly.

I agree with that, but I still think we are talking about different things. A NEF file from my camera contains formatted raw data which is different from the raw data in an ARW file from Sandman's camera. While they stay different, the raw data from the two cameras requires different rendering software. What I am saying is that as soon as they are loaded into LR, each of the files is converted into a common LR internal format. This entails demosaicing, setting the color profile etc., and ends up with the respective images being held in the same internal format. Because the two files are no longer different they no longer need two different rendering/editing engines.

All of the transformation from the raw files to the common internal format happens in the brief period of time it takes to load the raw image and is transparent to the user. Yeah, I know that with raw files you have ACR first, but I expect ACR employs the same trick, otherwise you would need individually different software for each raw file.

--
Regards, Eric Stevens
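The decode-once-then-share-an-engine idea Eric describes can be sketched as follows. Every name here is a hypothetical illustration, not Lightroom's real internals; the point is only that per-format decoding happens once, at load, after which a single editing engine serves every camera.

```python
# Hypothetical sketch: vendor-specific raw files are decoded into one
# common internal representation; one engine then edits all of them.

def decode_raw(path):
    """Stand-in for the per-format decoder (demosaic, colour profile)."""
    fmt = path.rsplit(".", 1)[-1].upper()   # NEF, ARW, ...
    return {"source": path, "format": fmt, "pixels": "common linear RGB"}

def apply_adjustments(image, adjustments):
    """One engine, whatever camera the image came from."""
    return {**image, "adjustments": list(adjustments)}

nikon = decode_raw("photo.nef")
sony = decode_raw("photo.arw")

# Both images now share the same internal shape, so the same engine works:
for img in (nikon, sony):
    edited = apply_adjustments(img, ["exposure +0.5"])
    print(edited["format"], edited["pixels"])
```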
#249
Lightroom vs. Aperture Curves
On 2014-08-14 23:04:10 +0000, Eric Stevens said:
On Thu, 14 Aug 2014 07:25:37 -0700, Savageduck wrote:

On 2014-08-14 09:14:05 +0000, Eric Stevens said:

On Wed, 13 Aug 2014 22:38:29 -0700, Savageduck wrote:

Don't get back to me on this subject until you have done at least that. You say you prefer DxO, but you have yet to make the comparison. Make it and let me know what you have discovered.

I think I prefer DxO to ACR as a raw editor for Photoshop.

That much is clear. Now try LR5.

I intend to. I will when the time comes but right now I want to do things that LR can't do.

Aah! You are going grocery shopping.

--
Regards, Savageduck
#250
Lightroom vs. Aperture Curves
On 14 Aug 2014 09:28:36 GMT, Sandman wrote:
In article , nospam wrote:

Eric Stevens: I think he is right. It's not the raw file which is edited. What is edited is a file derived from whichever one of the very many different raw files has been presented. The raw file remains untouched.

the raw file is never touched. all of the adjustments are applied to the raw data on the fly.

This is a pretty interesting aspect. When you're looking at a catalog in Lightroom that contains 20k files at the smallest thumbnail size in grid mode, it will show you on screen about 40 photos. I have a very hard time believing, especially considering that scrolling is very smooth, that all these photos are RAW files read from disk, have adjustments applied to them and are portrayed as thumbnails. In fact, I outright claim they are not. Hence the use of previews - cached preview versions of these files. Now, this isn't in contradiction to your statement above, of course; you said nothing about thumbnail view. But that would explain why Lightroom is so slow to open the develop view for a photo, in that it reads the RAW data and then applies the adjustment chain each and every single time.

I expect that's what it does. I am surprised you find it slow. I find raw images appear in less than a second. Mind you, that's 12 MB images and yours may be larger. Maybe it's something to do with the way you have it catalogued?

Aperture has a, to me, better way to handle this: everything that is shown on screen is the JPG preview version. And when you make edits, they are made to the JPG but saved in relation to the RAW file. Whenever an edit requires data from the RAW file (like the extended curves), it will read that data, make the adjustment and update the JPG.

DxO does a similar thing. All of the edits you see on the screen are to a stripped down preview while the heavy work is going on in the background.

That's why Aperture is so fast to work in: everything is done with "smart previews" as Lightroom seems to call it. But full-size smart previews.
When exporting images from Aperture, the RAW files are again accessed, the adjustment chain applied and then exported to disc. That's also why Aperture, when an image has been processed by an older Aperture version, has the option of "reprocess original", which does the initial RAW to JPG conversion again, but now with the updated RAW engine.

Makes sense.

--
Regards, Eric Stevens
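The preview scheme described in this exchange can be sketched as a toy model. This is an illustration of the idea only, not Aperture's or Lightroom's actual code: edits update a cheap cached preview immediately, but are recorded against the raw file so that export (or "reprocess original") can replay the whole chain at full bit depth.

```python
class ManagedPhoto:
    """Toy model of preview-based editing: the raw is never modified."""

    def __init__(self, raw_path):
        self.raw_path = raw_path
        self.edits = []                           # adjustment chain
        self.preview = f"jpeg({raw_path})"        # cached preview

    def apply(self, edit):
        self.edits.append(edit)                   # recorded against the raw
        self.preview = f"{edit}({self.preview})"  # fast preview update

    def export(self):
        # replay the full chain against the raw data at export time
        result = f"raw({self.raw_path})"
        for edit in self.edits:
            result = f"{edit}({result})"
        return result

photo = ManagedPhoto("IMG_0001.ARW")
photo.apply("curves")
photo.apply("exposure+0.5")
print(photo.preview)   # edits applied to the cached preview
print(photo.export())  # same chain replayed against the raw
```

Interactive work only ever touches the cached preview, which is why it stays fast; the raw file is read again only when an edit needs the deeper data or at export.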