Catching All The Details In High Dynamic Range Pictures W/O Multiple Exposures

#12 · September 23rd 10, 11:45 AM · TheRealSteve


On Wed, 22 Sep 2010 01:30:30 -0500, SneakyP wrote:

ransley wrote:

On Sep 16, 8:53 am, Martin Brown wrote:
On 16/09/2010 12:40, ransley wrote:

On Sep 16, 1:35 am, SneakyP wrote:
Here's a thought on processing those pixels of info that comprise a
picture (TAKEN with just one picture cycle). Integrate each time
period with an exposure of x seconds. Take the next picture in
camera, in intervals of readings between time x1 and time x2.
Continue on with differentials of image gathering by watching the
cells as they collect photons of light in specified time periods.
Make the sampling period vary according to the dynamic range of the
picture, i.e. the more photons collected should kick in a formula
for desensitizing the sensor when a certain plateau of brightness is
reached. It's like compressing the low and high ends to better
represent actual camera dynamic range with what is actually being
seen. I don't know if monitors can represent the full range of
colors and intensities, but there should be some kind of tradeoff
between squeezing picture brightness/darkness towards a more
palpable, realistic look and getting a picture that actually looks
like what it did when you took it.

Pointers on photography tips appreciated.
Thanks.
--
__
SneakyP
To email me, you know what to do.

Supernews, if you get a complaint from a Jamie Baillie, please see:
http://www.canadianisp.ca/jamie_baillie.html

Yea, watch cells as they collect photons of light, you are smokin
some good stuff.

Actually that device is a real invention dating back to the late
1970s, called the Image Photon Counting System: cunning system
design, with obvious limitations. Developed by Alec Boksenberg at
University College London during the 1970s, and derivatives are
still in use today at ING and a few other large observatories for
specialised low-signal imaging. Obviously it is useless at high
light levels, since you have to be able to count each photon arrival
and determine the centroid of the spot.

http://www.ing.iac.es/Astronomy/obse...manuals/genera...

Smoking good stuff is not required. It was absolutely ground-breaking
when it first came out and was nicknamed the Instant Paper Creation
System. Compared to film it was streets ahead in sensitivity and
noise floor, and it was still pretty good for a while after CCDs
became available to astronomers too. It still beats CCDs on noise
floor for some work.

Regards,
Martin Brown
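
For the curious, the core of an IPCS-style detector is easy to
sketch in software: threshold a short frame so each photon event
shows up as a small bright blob, then take each blob's
intensity-weighted centroid. A minimal Python sketch, assuming
numpy/scipy; the function name, toy frame, and threshold are
illustrative, not from any real IPCS code:

import numpy as np
from scipy import ndimage

def photon_centroids(frame, threshold):
    """Find photon events in one short frame.

    An IPCS-style system reads frames fast enough that each bright
    blob is a single photon event; its intensity-weighted centroid
    gives a sub-pixel arrival position.
    """
    mask = frame > threshold                # keep only bright pixels
    labels, n_events = ndimage.label(mask)  # group pixels into blobs
    return ndimage.center_of_mass(frame, labels, range(1, n_events + 1))

# Toy frame: two "photon" blobs on a dark background.
frame = np.zeros((8, 8))
frame[2, 2] = frame[2, 3] = 5.0
frame[6, 5] = 4.0
print(photon_centroids(frame, threshold=1.0))  # ~[(2.0, 2.5), (6.0, 5.0)]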


I believe he was implying using his self-endowed power to view
photons.


No powers endowed here. The last thing needed was the snooty replies.

I was merely talking about a concept:

1. There has to be a way to capture a picture of higher dynamic
range without resorting to combining two or more different exposure
sessions. Nobody seems to understand that.

2. Since pictures are composed of the collective pixel bed of cells
that "collect" photons as discrete data storage vs. physical film
analog storage, I'd have figured the mathematics of adding all the
data from each cell might increase the range of captured light
intensity, helping distinguish between what is seen in the real
world vs. what is seen in camera world. Dynamic range is extended.

I know, for instance, that highlight detail compression is nothing
more than applying a curve to the highest-intensity light, to
recreate differences between levels while keeping the dark tones
from becoming black at the same time. Hence, highlight blowout is
avoided by highlight recovery (same difference in the process). The
more range a pixel sensor is allowed to store, the less need there
is to flush it. But seeing that a high dynamic range picture doesn't
work well with these kinds of sensor usages, why not enable the
read/store of data to a bank and then re-read the next cycle to add
to the prior read set of data?
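
That kind of curve is easy to sketch: leave the shadows and midtones
alone and roll off everything above a knee point. A minimal Python
sketch; the knee position and the tanh shoulder are assumptions for
illustration, not any camera's actual curve:

import numpy as np

def compress_highlights(x, knee=0.8):
    """Soft-shoulder tone curve: linear below the knee, smooth
    roll-off above it, so near-clipped values keep some separation."""
    x = np.asarray(x, dtype=float)
    above = x > knee
    out = x.copy()
    # Map [knee, inf) smoothly into [knee, 1.0); slope is 1 at the knee.
    out[above] = knee + (1 - knee) * np.tanh((x[above] - knee) / (1 - knee))
    return out

vals = np.array([0.2, 0.5, 0.8, 1.0, 1.5, 2.0])  # 1.0 = nominal clip
print(compress_highlights(vals))  # highlights compressed, shadows untouched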


The real range of RGB shouldn't have to be restricted to a range of
colors (2^8 values per channel) and extrapolated to a screen that by
its nature can only handle 8 bits. They should go beyond that, but
some seem to think that the range is adequate.

I'd say, no, I want a picture where you can bump up the intensity to
see what's in the shadows without noise showing up badly, or tune
down the intensity to a point where highlighted, detailed areas are
revealed instead of lost in blown-out white pixels.

Just saying. When does that kind of camera processing come out?
Even our eyes have adaptive seeing: they don't blow out highlights
when seeing the shadows in the same field of vision. Our eyesight
seems rather logarithmic as far as compressing dark from light and
seeing a darkened area next to a well-lit area. Cameras don't have
the ability to emulate that kind of seeing.


The dynamic range of human eyesight can't be compared to a camera
because your eye has an iris that is always adjusting to the lighting
conditions. As you look at the bright part of the landscape, your
iris closes down. Look at the shadows and it opens up. And even
though the iris can react rather quickly, you can't see the details in
the shadows at the exact same time as details in the bright areas. A
camera has to capture all that at the same instant. If you want to
take the iris out of the equation, you can determine the range of
brightness levels a human eye is capable of distinguishing in adjacent
areas. That works out to only about 100:1, which is far less than
digital cameras are capable of.

In addition to the iris, the eye can also chemically adjust its
sensitivity, or in camera-speak, its ISO value. But that takes much
longer to do, which is why you can't see any details in a dark theater
for a while after stepping in from sunlight. After the eye adjusts,
you can. But step back into the sunlight and you're temporarily
blinded until it adjusts. Chemically changing its sensitivity and
continuously varying the iris makes you think the eye is capable of
HDR. But it's not capable of HDR in a single image, which is what you
want. The eye's dynamic range in a single image is very limited.

Getting back to HDR in a single camera image, in concept, it can be
much simpler than you're describing. In simplistic terms, with analog
to digital converters, more dynamic range = more bits in the sampling.
In photo sensors, more dynamic range = pixels with a higher max charge
capacity yet still able to pick out the charge of single photons.
I.e., BIG pixels. If you have a sensor like that, you need more bits
in the ADC that samples it to take advantage of the higher dynamic
range of the sensor. More bits means slower conversions, higher
costs, etc.

The problem is that today's sensors, taking all the sources of noise
into account, can't even approach the theoretical limits of the ADCs
used to sample them. A 14-bit ADC has a theoretical limit of 14
f-stops of dynamic range, 84 dB, a 16384:1 contrast ratio, etc., all
ways of expressing the same thing. But real-world cameras that use
14-bit ADCs don't come close to that. Even a D3 has less than 9
f-stops of usable dynamic range.

So what you need simply is a sensor with very large pixels, low noise,
and more bits in the ADC even if that means it takes a while to read
the sensor. Forget all the other mumbo jumbo. Get those 3 things and
you have HDR in a single image.

How much dynamic range do you actually want/need? Well, the range in
direct lighting luminance (cd/m²) you're likely to run into, varying
from direct starlight to direct sunlight, is about 8 orders of
magnitude, or 10^8. Reflections can increase that, since you're
concentrating a larger area of direct light into a smaller area. But
sticking with 8, the contrast ratio is 100,000,000:1. That's about
160 dB, or 26.6 f-stops. You need a 27-bit ADC to sample that,
assuming linear sampling.
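
All of those conversions follow from stops = log2(ratio) and
dB = 20*log10(ratio). A quick check in Python (a minimal sketch of
the arithmetic, nothing camera-specific):

import math

def stops(ratio):      # dynamic range in f-stops
    return math.log2(ratio)

def decibels(ratio):   # amplitude-style dB, as used for sensor DR
    return 20 * math.log10(ratio)

# 14-bit linear ADC: 2**14 = 16384:1
print(stops(2**14), round(decibels(2**14), 1))  # 14.0 stops, 84.3 dB

# Starlight to direct sunlight, ~8 orders of magnitude:
scene = 1e8
print(round(stops(scene), 2), decibels(scene))  # 26.58 stops, 160.0 dB
print(math.ceil(stops(scene)), "bits needed for linear sampling")  # 27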

Even if you forget about noise at the low end, all I have to say is...
good luck.

Steve
#13 · September 23rd 10, 12:43 PM · Wolfgang Weisselberg

SneakyP wrote:
Wolfgang Weisselberg wrote:


Nice idea, won't work. First, reading the cell empties it
irrevocably.


put it into another storage medium to sum up the aggregate readings.


Can't. You count the electrons as a current as they empty.
Therefore you'd need an infinitely large electron storage,
e.g. a charged battery; otherwise you won't empty the cell
properly, and you'll cause misreadings.

Second, how do you handle moving objects?


Hopefully the process will be quick enough to thwart blur.


It probably won't.


Third,
each reading causes read noise.


Too bad. Adding signal should suppress the noise floor.


Signal means better filled cells.
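
In numbers: the signal adds linearly across reads, while independent
read noise adds in quadrature, so summing N reads carries a sqrt(N)
read-noise penalty over a single long exposure. A minimal Python
simulation sketch; the signal rate and read-noise figures are
assumed, purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
signal_rate = 200.0   # electrons/s at one pixel (assumed)
t_total = 1.0         # total integration time, seconds
read_noise = 5.0      # electrons RMS per read (assumed)
n_reads = 16

# One long exposure: shot noise plus a single read.
one_read = rng.poisson(signal_rate * t_total) + rng.normal(0, read_noise)

# Sixteen short exposures summed: same total signal, sixteen reads.
summed = (rng.poisson(signal_rate * t_total / n_reads, n_reads).sum()
          + rng.normal(0, read_noise, n_reads).sum())

# The signal is the same, but read noise adds in quadrature:
print("read-noise RMS, 1 read  :", read_noise)                     # 5.0
print("read-noise RMS, N reads :", read_noise * np.sqrt(n_reads))  # 20.0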

Fourth, it's kinda hard to read
single cells ... for now you must live with a complete sensor read.


Complete sensor states can be re-read after flushing them within a
few nanoseconds, right?


Flushing costs time, rereading costs time. Google 'rolling
shutter' or check the FPS of digital cameras to see how long
it takes. Hint: 11 frames per second with 1/250 s or faster
exposure time shows that a full sensor read takes nearly 0.1 seconds.
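
Working that hint through: at a sustained burst rate with a
negligible exposure time, the frame period is essentially all
readout. A back-of-the-envelope Python sketch:

fps = 11              # sustained burst rate from the hint above
t_exposure = 1 / 250  # seconds; fast enough to be negligible

t_frame = 1 / fps                 # time budget per frame
t_readout = t_frame - t_exposure  # upper bound on the sensor read
print(f"frame period : {t_frame:.3f} s")    # 0.091 s
print(f"readout time : {t_readout:.3f} s")  # ~0.087 s, nearly 0.1 s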

-Wolfgang
#14 · September 23rd 10, 12:49 PM · Wolfgang Weisselberg

SneakyP wrote:

Does anybody else here think that camera abilities are being disabled on
purpose to milk the public buyer?


Nope.
They are sometimes disabled in lower models, true, but that is
to be able to offer a lower model at a lower price and still have
some incentive for buying a larger model at a higher price.

-Wolfgang
#15 · September 23rd 10, 01:02 PM · Wolfgang Weisselberg

SneakyP wrote:

1. There has to be a way to capture a picture of higher dynamic
range without resorting to combining two or more different exposure
sessions. Nobody seems to understand that.


Oh, I understand that well. I have designed, in my head,
sensors that would have near infinite dynamic range.
Unfortunately, they have some important drawbacks, like only
working with static objects, if they could even be built.

That's why I know your idea won't fly.

The more range a pixel sensor
is allowed to store, the less need there is to flush it.


Bigger pixels.

But seeing that a high
dynamic range picture doesn't work well with these kinds of sensor
usages, why not enable the read/store of data to a bank and then
re-read the next cycle to add to the prior read set of data?


Because that bank must stay completely empty during each reading
process.[1] No storing is possible that way. No rereading
allowed.

The real range of RGB shouldn't have to be restricted to a range of
colors (2^8 values per channel)


What, you want more channels? Why? Your eye only sees 3
channels. Or do you want more values? Why? Your eye only
sees a bit less than that.

and extrapolated to a screen that by its nature can only
handle 8 bits.


Many screens can handle more than 8 bits.

They should go beyond that, but some seem to think that the
range is adequate.


The range is already better than your eye can see.

I'd say, no, I want a picture where you can bump up the intensity to
see what's in the shadows without noise showing up badly, or tune
down the intensity to a point where highlighted, detailed areas are
revealed instead of lost in blown-out white pixels.


You want a magic HDR image. Well, there are formats for that
out there. Just saying.

Just saying. When does that kind of camera processing come out?


Never.

Even our eyes have adaptive seeing: they don't blow out highlights
when seeing the shadows in the same field of vision.


You really think so? Your brain tricks you into thinking you
can see everything sharp at the same time, too.

Our eyesight seems
rather logarithmic as far as compressing dark from light and seeing
a darkened area next to a well-lit area. Cameras don't have the
ability to emulate that kind of seeing.


Oh really? And what is the gamma of JPEG? It's adjusted to
the way eyes see.
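
For concreteness: JPEG stores values after a roughly gamma-2.2 power
law (sRGB adds a short linear toe), which spends more of the 8-bit
code space on shadows, loosely matching the eye's compressive
response. A minimal Python sketch of plain gamma encoding, ignoring
the sRGB linear segment:

def encode_gamma(linear, gamma=2.2):
    """Map linear light in [0, 1] to an 8-bit code (power law only)."""
    return round(255 * linear ** (1 / gamma))

for lin in (0.01, 0.1, 0.5, 1.0):
    print(f"linear {lin:4.2f} -> code {encode_gamma(lin)}")
# linear 0.01 -> code 31   (1% grey already gets ~31 of 255 codes)
# linear 0.50 -> code 186  (the top stop uses relatively few codes)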

-Wolfgang

[1] Think about it. Electrons are read as the charge is released
to a known voltage level. The current generated is measured.
The voltage level of a storage bank would rise during the read
(and if it already stored something, it would be worse) and thus
be unknown.
#17 · September 23rd 10, 01:47 PM · Superzooms Still Win

On Thu, 23 Sep 2010 06:45:19 -0400, TheRealSteve wrote:


[...]


Interesting to note:

Most CHDK-enabled compact and superzoom cameras have about 30 to 31
EV stops available for automated bracketing with exposures up to 64
seconds, if using all the apertures and shutter speeds now available
on those cameras. This doesn't include the useful 10+ EV range of
the sensor itself in many of them. If we include the (sometimes
useful) extended shutter speeds up to 2,147 seconds, tack on another
5 EV stops. Even without using the extended shutter speeds beyond 64
seconds, we still have 4 more EV stops than what is needed to go
from starlight to direct sunlight, and that's without even
considering the 10.3 EV range of the sensor (in one of them that I
own). I'd say that pretty much covers all the bracketing needs
required in any lighting situation that nature can dish out, for
nearly all subjects that one wants to take.

Setting a bracketing step of 4 EV (available in the CHDK bracketing
menu in 1/3 EV steps) would only require about 7 frames to fully
capture and properly expose all light from starlight to direct
sunlight. Keep in mind too that any region of light intensity that
is properly exposed is also free from noise at lower ISOs. I have no
problem getting noise-free images at 64 seconds at ISO 50 to 200,
and this is without dark-frame noise reduction.
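
The frame count follows directly from the scene range and the
bracketing step. A minimal Python sketch using the figures quoted in
this thread; the 26.6 EV scene range is Steve's
starlight-to-sunlight estimate, and the 10.3 EV sensor range is as
quoted above:

import math

scene_range_ev = 26.6    # starlight to direct sunlight (Steve's figure)
sensor_range_ev = 10.3   # per-frame sensor range quoted above
step_ev = 4.0            # CHDK bracketing step chosen in the post

# Worst case, crediting the sensor with no range of its own:
print(math.ceil(scene_range_ev / step_ev), "frames")           # 7

# Crediting each frame with the sensor's own 10.3 EV window:
frames = math.ceil((scene_range_ev - sensor_range_ev) / step_ev) + 1
print(frames, "frames with sensor range counted")              # 6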





#18 · September 23rd 10, 01:51 PM · LOL![_3_]

On Thu, 23 Sep 2010 13:49:43 +0200, Wolfgang Weisselberg
wrote:

SneakyP wrote:

Does anybody else here think that camera abilities are being disabled on
purpose to milk the public buyer?


Nope.
They are sometimes disabled in lower models, true, but that is
to be able to offer a lower model at a lower price and still have
some incentive for buying a larger model at a higher price.

-Wolfgang


Dear Puppygang Trollberg,

Please explain your interpretation of "milk the public buyer". What
you describe appears to be identical to what was claimed, but your
first comment of "Nope" declares otherwise.

LOL!

#19 · September 23rd 10, 01:53 PM · James Nagler

On Thu, 23 Sep 2010 05:43:10 -0700 (PDT), Whisky-dave
wrote:

On 23 Sep, 07:00, SneakyP
wrote:
Whisky-dave wrote:

I want free food and free sex.....
and a car[1] that drives and parks itself.


I want to sell you dirt (silica) at thousands of dollars an ounce.

*PLONK*

Does anybody else here think that camera abilities are being disabled on
purpose to milk the public buyer?


I think we'll soon see abilities being enabled as cameras become
more software and firmware based. A bit like the iPhone, and how
software updates give more, such as auto HDR on a phone!


Raise your hands.


The public will always be milked, so here's my hand ;-)



One need look no further than the CHDK project for compact and
superzoom cameras and the Magic Lantern project for DSLRs to see how
many features have been INTENTIONALLY DISABLED on all models of
cameras to trick the buyer into thinking they need to spend more.

#20 · September 24th 10, 12:04 AM · TheRealSteve


On Thu, 23 Sep 2010 07:47:36 -0500, Superzooms Still Win
wrote:
[...]


That's still not HDR in a single exposure, which is what we're talking
about.

A 10 EV range of a P&S or superzoom sensor doesn't compare with the
13+ EV range of high-end DSLR sensors or the 12+ EV range of more
common DSLRs.

http://www.dxomark.com/index.php/en/...ensor-rankings

Steve