Could you actually see photos made from RAW files?

  #61  
Old June 3rd 09, 11:38 AM posted to rec.photo.digital,uk.rec.photo.misc
Floyd L. Davidson

Eric Stevens wrote:
On Tue, 02 Jun 2009 21:48:52 -0800, (Floyd L.
Davidson) wrote:

Eric Stevens wrote:
On Wed, 03 Jun 2009 11:37:23 +1000, Bob Larter
wrote:

Eric Stevens wrote:
Are you really saying that a given RAW data file can be created by
more than one image?

Yes. If you think carefully about how an image sensor works, it's obvious.

I may be missing something but it's not obvious to me.

A lens directs light from a scene so as to form an image on the
camera's sensor.

Different parts of the image fall on individual sensels which, in the
time allowed to them, capture photons which generate electrons. The
accumulated electrons form an electrical charge in each sensel.


Is there any reason to believe that the same scene would
necessarily produce the same effect on the sensor every
time? In fact, the light is not represented by a
steady, consistent flow of photons. The photons arrive
at irregular intervals. It's called "photon noise".
The effect is that if the same image is projected onto a
sensor, each time the sensor is read the image will be
recorded with unique data that is not identical to the
other times that image is recorded.
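A minimal numerical sketch of that effect, assuming Poisson photon
arrivals and a purely illustrative mean of 10,000 photons per sensel
(not taken from any particular camera):

    import random

    random.seed(1)
    mean_photons = 10_000   # illustrative mean photon count per sensel

    def expose():
        # Poisson arrivals approximated by a Gaussian with
        # sigma = sqrt(mean), a good approximation at counts this large.
        return round(random.gauss(mean_photons, mean_photons ** 0.5))

    first  = [expose() for _ in range(5)]   # five sensels, first exposure
    second = [expose() for _ in range(5)]   # the same scene, read again

    print(first)    # counts scatter around 10,000
    print(second)   # a different set of counts for the identical scene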


This is certainly a problem at low light levels but, by and large,


No, it is a problem at higher light levels.

this is what I meant by "statistical error limitations". When you get
to this level you are in danger of introducing quantum theory.


Ha ha ha. That is hilarious. What exactly is the danger in the
introduction of quantum theory???

(I really like that, the way you have in just two lines invoked both
quantum theory and statistical error limitation. Two phrases with
big words that mean absolutely nothing in the context in which
you have placed them! That's pretty good!)

According to the type of sensel, the charge is 'read' in one way or
another, and the quantity of charge converted to digital data.


Whoops, you just skipped over an awful lot of very fancy technology.
The data from the sensor is analog data.


The sensel counts photons which it converts to photons. The data is


I assume that is a typo and you meant "converts to electrons". If it
is something else, repeat it and I'll discuss what you did mean.

integer. This is converted to binary digital data.


That is not true (assuming you meant it as above). The data is analog.
It is not "integer", whatever it is you think that means. It is indeed
converted to binary digital data, though the fact that it is binary is
of no significance at all. It could be quaternary and nothing would be
different.

You might say it is
encoded. I would say it is transformed. It doesn't matter which either
way.


It does matter. You can't seem to get the concept clear
that these terms have specific meanings when applied to
this technology. I am not using them the way I do just
because it is fun, I do it because it makes a difference
which term is used for someone who understands the
technology.

The process by which it
is converted to digital data is a one way process that cannot be
reversed with accuracy. I've gone into detail on that in another
article previously and will not repeat it at length here.


You should, if you want me to know what you are talking about.


It has been repeated enough times.

But of
course the 'transformation' can be run backwards, even if you don't
use the same hardware. It's the algorithm you have to reverse.


It cannot be run backwards. You cannot know from the
digital data value which part of the possible range of
analog values it came from, and hence you cannot
specifically reproduce it.

Get that clear. It is not simply my opinion of how the
technology works. It is a well known *fact* that you can discover
by reading up on it in any good serious text.

The digital value of the charge is saved in an array which enables the
value of the charge for each individual sensel to be mapped to the
position of the sensel.


So it is true that the position is relevant.


Whoever argued otherwise?


You did.

That original image on the sensor is characterised by the raw data
array. Any change in the image gives rise to a different data array.


Not necessarily. How significant the change is is what
determines whether it changes the raw data. Some
changes simply are not great enough to cause any
difference in the data set.


Changes are quantized. A different image on the sensor gives a
different number of electrons which are transformed into different
digital data.


Stop being asinine.

First, changes are not what is quantized. Second, due to
the changing rate of arrival of photons (photon noise),
the number of actual electrons captured might not
change. Third, even if the number of electrons changes,
that might not necessarily change the analog current
produced when the sensor is read (for a variety of
reasons, most of which are generally called "read
noise"). Fourth, if the current is changed it has to be
changed enough to move it from one quantization range to
the next higher or lower (quantization distortion).

Clearly there are at least three ways in which a change
in the light projected onto the sensor may or may not
actually cause a change in the resulting digital data.

Each of those has been explained to you previously.
These things are not simple opinions, they are well
known facts that you can research the details for any
time you wish.
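A sketch of the whole chain under stated, purely illustrative
assumptions (Poisson photon arrivals, a quantum efficiency of 0.5, 10
electrons of Gaussian read noise, and a 50,000 electron full well read
through a 12-bit ADC):

    import random
    random.seed(2)

    QE         = 0.5      # hypothetical quantum efficiency
    FULL_WELL  = 50_000   # hypothetical full-well capacity, electrons
    READ_NOISE = 10.0     # hypothetical read noise, electrons RMS
    ADC_LEVELS = 4096     # 12-bit ADC

    def raw_value(mean_photons):
        photons   = random.gauss(mean_photons, mean_photons ** 0.5)  # photon noise
        electrons = photons * QE                                     # analog conversion
        signal    = electrons + random.gauss(0.0, READ_NOISE)        # read noise
        code      = int(signal / FULL_WELL * ADC_LEVELS)             # quantization
        return max(0, min(ADC_LEVELS - 1, code))

    # Two slightly different light levels hitting the same sensel:
    print(raw_value(20_000), raw_value(20_020))
    # With roughly 12 electrons per ADC step, and noise much larger than
    # the 10-electron difference in signal, the two codes often come out
    # identical even though the light changed.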

All you have done is reiterate the claim. Can you give a step by step
explanation along the lines of the one I have just given?


You've been given step by step examples several times
now. Don't you think it is time to pay attention?
Here's a bullet list for your google searches:

Photon noise limited
read noise limited
Quantization distortion

--
Floyd L. Davidson http://www.apaflo.com/floyd_davidson
Ukpeagvik (Barrow, Alaska)

  #62  
Old June 3rd 09, 11:48 AM posted to rec.photo.digital,uk.rec.photo.misc
Floyd L. Davidson

Eric Stevens wrote:
On Tue, 02 Jun 2009 22:00:08 -0800, (Floyd L.
Davidson) wrote:
Chris H wrote:
In message , Floyd L. Davidson
writes
Everything is either one or the other and
nothing can be both.)

This is not correct. There are plenty of mixed devices about. Analog
Devices make a few of them.


You can have a complex device that has, for example, an
input that is one and an output that is another. But
you cannot have a signal that is both. "Everything"
means an atomic device, not a complex device. (My
apologies, I wrote the above expecting common sense
readers who understood the context.)

That is *not* part of the firmware. Firmware, other
than setting the ISO gain, has no part in any of that
other than turning it on and off.

You are referring to 'firmware' as though it was 'hardware'. Yet Nikon
can program the camera to behave differently so some software/firmware
must be involved.

Perfectly correct.


That does not cause hardware to become firmware.


Then what do you think firmware is? It's a mixture of hardware and
software.


Firmware is computer instructions in a ROM.

If something is not processed by a computer, it *cannot*
be done with firmware. Note that the entire data flow
from the sensor to the output of the ADC is hardware
based. It is only after the ADC that the computer can
even see the data, so nothing before that can be done
"in firmware".

How can you claim to know what all of this means when
you are missing on all cylinders when it comes to the
very basics on which these technologies are built?

I am referring to that as hardware because in fact it is
hardware. It is not done with software/firmware.

Yes it is. What is more I can supply the tools to write the firmware.


So you think the analog amplifiers and the ADC are
firmware and can be done with software tools??? That's
a bit of abject ignorance.


And that's a bit of dishonest argument, unless you insist on believing
that the digitisation of the sensel charges is the entirety of
the process.


We were talking about the data flow from the sensor
through the ADC. That is where this silliness was
injected, hence that is what it reasonably is expected
to reference. If he meant that he can provide software
tools to work on the data *after* the area we were
discussing, then he should have indicated that his
discussion had no relationship to our discussion.

It's done with hardware.

(I can't tell you which, but we have supplied software/firmware tools to
more than one OEM digital camera company (P&S variety).)


Not for those functions you haven't.


But he never claimed it was just for those functions.


Then exactly what would be the point of injecting that
statement? He no doubt could have written COBOL
software for Nikon's or Sony's accounting department,
but it would be equally abjectly stupid to make the
above statement in the context that he did if that is
what he meant.

--
Floyd L. Davidson http://www.apaflo.com/floyd_davidson
Ukpeagvik (Barrow, Alaska)

  #63  
Old June 3rd 09, 12:47 PM posted to rec.photo.digital,uk.rec.photo.misc
Floyd L. Davidson

Eric Stevens wrote:
On Tue, 02 Jun 2009 23:17:16 -0800, (Floyd L.
Davidson) wrote:
"In a digital circuit, a signal is represented in discrete states or
logic levels." - but they don't have to be binary.


What's your point? Binary is necessarily digital, but
digital is not necessarily binary (though any value that
is digital can necessarily be encoded in a binary form).
Got that?

The point is still that while the number of electrons on
the head of a pin might be discrete, the current
produced by the flow of those electrons is *not*
discrete, and therefore is analog.


It's not a question of current. It's a question of the number of
electrons.


The output signal from the sensor is not read in terms of
electrons, it is current.

Of course the output of an electronic sensor in a camera
is the analog current, not the discrete number of
electrons captured.


The output of a sensor is electron charges, which is quantized.


No, the output of the sensor is current. That is discharged
through an impedance to generate a voltage. The voltage level
is what is quantized.

http://en.wikipedia.org/wiki/Analog_electronics

"Any change in the signal is meaningful, and each level of the signal
represents a different level of the phenomenon that it represents."


That is correct. Any change is meaningful on the
*input* to wherever the signal goes. It does *not*
necessarily mean it somehow is meaningful to anything
else (such as the number of electrons that cause the
signal to exist).


Who is trying to say it is meaningful to anything else?


You. (See below.)

This isn't the case with the output from a charge amplifier. 0.050000
volts represents 50,000 electrons. 0.0500004 volts still represents
50,000 electrons. But 0.050001 volts represents 50,001 electrons, as
does 0.0500006 volts.


Is that supposed to make sense?


Yes, and it does.


It doesn't. (See above.)

The output from the amplifier is an analog signal.


So you keep saying. But if you can measure it with sufficient accuracy
and discard rounding errors (as can easily be done) you can read the
output as integer - which is digital.


No, I don't care how accurately you measure current, the
reading is *necessarily* analog. You seem to think the
current is a one to one relationship with the number of
photons that strike the sensor... which is not true.
The charge of electrons developed is proportional to the
number of photons, but it is not a one to one
relationship and is an analog transform.

Secondly you seem to think the current has a one to one
relationship with the number of electrons... which is
not true. Current is the *flow* of electrons, not the
number of them. Electrons do not flow in only one
direction nor do they go at all the same speed.
Therefore the absolute number of electrons in a given
charge does not equate to an absolute current. What
counts is the number of electrons passing a given point.
If you have 5 electrons and two move west while three
move east, it's the same current as if you had only 1
electron. The effect of course is that current
generated by even a specific count of electrons is
analog, not digital. The variations in direction and
speed of the electrons as they move causes current
variations to be continuously variable, not discretely
variable.

Anything you think means otherwise is nonsense. If you
doubt that, explain how and why it is fed to an analog
amplifier and then to a device called an
Analog-to-Digital-Converter.


It is the A to D converter which precisely measures the voltage fed to
it so as to be able to read it as integer numbers. That's what I was
trying to explain to you above when you asked "Is that supposed to make sense?"


That is hilarious. The ADC does not precisely read the
integral number of electrons. As you've noted, there
might be 50,000, or even more, electrons collected by a
single sensor. But even a 14 bit ADC cannot count
higher than 16383.

More nonsense.

In summary:
---
Analogue electronics (or analog in American English) are those
electronic systems with a continuously variable signal. In contrast, in
digital electronics signals usually take only two different levels. The


That isn't really true, about "signals usually take only
two ...". In fact the entire digital Public Switched
Telephone Network (PSTN), as well as virtually all of
the music and video that is digitally recorded, uses
what is called an m-ary level encoding. In most cases
that is a 255 level PCM digital signal.


At least that's got you away from insisting that they always have to
be binary.


I have never suggested that anything always has to be
binary. Why do you make up these silly excursions into
fantasy land? It is fairly easy to find where I've been
explaining what the relationship of binary to digital
is, and citing the definitions of both digital and
analog, on Usenet for many many years. Hence your
statement is patently silly on its face.

In the case of an image sensor, the output voltage is an analog to the
light level that impinges upon it.

It still has to be capable of accurate digitisation and to that extent
it is digital.


That statement is pure nonsense. It doesn't "have to be
capable of accurate digitisation", whatever it is that
you think that means. It is not digital in any way
until it *is* digitized.


Electrons. Integer number of electrons. Nothing analog about integer
numbers.


There is nothing integer about the current and voltages
outputted by the sensor either. What point did you
have?

Here's a good web site that you should read for a while before
you come back with a few more apologies for so many funny ideas
and so much obnoxious argument about stupidity:

http://micro.magnet.fsu.edu/primer/d.../concepts.html

Here's one of their pages in particular that you should read:

http://micro.magnet.fsu.edu/primer/d...cdanatomy.html

The title is "Anatomy of a Charge-Coupled Device", and here is the one
sentence you need to pay attention to:

"This produces an analog raster scan of the
photo-generated charge from the entire two-dimensional
array of photodiode sensor elements"

That is from the summation of how CCD's work, in the next to
the last paragraph of that article.

Here's another one that might help you:

http://www.microscopyu.com/articles/...italintro.html

"Because CCD chips, like all optical sensors, are
analog devices that produce a stream of varying
voltages, ... "

Here's more from the same paragraph, also interesting:

"Whether or not the output can actually be
resolved into 4096 discrete intensity levels (12
bits) depends on the camera noise. In order to
discriminate between individual intensity levels,
each gray level step should be about 2.7 times larger
than the camera noise. Otherwise, the difference
between steps 2982 and 2983, for example, cannot be
resolved with any degree of certainty. Some so-called
12-bit cameras have so much camera noise that 4096
discrete steps cannot be discriminated."


--
Floyd L. Davidson http://www.apaflo.com/floyd_davidson
Ukpeagvik (Barrow, Alaska)
  #64  
Old June 3rd 09, 02:34 PM posted to rec.photo.digital,uk.rec.photo.misc
Floyd L. Davidson

Eric Stevens wrote:
On Tue, 02 Jun 2009 23:00:27 -0800, (Floyd L.
Davidson) wrote:
You saw it on the Internet, so it must be true. Sheesh.
Cite an authoritative source. (Actually, I'll challenge you
to find anything at all from 20 years ago that says anything
like that.)


I mightn't find the rules but I can find plenty of examples where that
was what was done.


And I can find 1000 times more where it was not done.

Haw. You are now claiming that it is possible to have a continuously
variable number of electrons. I maintain the number of electrons can
only be represented by integers.


So what? That makes the number of electrons digital.


Can I quote that to the Floyd L. Davidson with whom I've just been
arguing?


Not too bright of you...

But we aren't actually measuring the number of electrons,
are we? We are measuring the current that flows as a
result of the charge that was stored (which is roughly
proportional to the number of electrons). The current
that flows is affected not only by the number of
electrons, but how fast they move and which direction
they go. Both of those characteristics are continuously
variable.


That's why the actual measurement is of charge 'q'.


No, you seem to have missed an awful lot.

First, what actually is important is the amount of light
falling on the sensor. Photons. There is an analog
relationship between the number of photons and the
number of electrons. Which is to say that noise exists
because the exact same number of photons will not always
result in the exact same number of electrons.

Second the electrons aren't actually electrons at all, they
are electron holes. The whole thing is sometimes referred
to as a "photoelectron", meaning the amount of charge that
results from an average photon.

Regardless of all that (which makes your claim moot already),
the "actual measurement" is not of charge, it is of current.
And that current is analog.

And hence electric current is analog, and so is voltage.


And you are the guy who has just explained that 255 levels of current
or voltage can be used to carry tone signals. But no, that's not
digital or digitized. :-(


The 255 levels are *not* 255 unique individual current
or voltage levels. It is an infinitely variable analog
current, modulated by a digital signal, which produces
255 *ranges* of current that are encoded at discrete
values.

That can be compared to a binary signal using two voltage
levels, 0 and 1 to determine two values. In fact the trigger
points might be such that any voltage lower than 0.5 Volts
has a value of False, and any voltage higher than 0.5 Volts has
a value of True. True/False are the binary values. But the
voltages for each are analog and cover an infinite number of
voltages from whatever the minimum the driver can produce up
to 0.5 for one value and from 0.5 up to whatever the maximum
the driver can produce for the other value.

The difference with 8 bit PCM is that there are 255 valid
values. (And I'll leave it to you to figure out why the
number is not 256 values.)
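A toy illustration of those trigger points, assuming the 0.5 Volt
threshold mentioned above:

    def logic_value(volts):
        # Hypothetical trigger point: below 0.5 V reads as False, at or
        # above 0.5 V reads as True, regardless of the exact voltage.
        return volts >= 0.5

    for v in (0.07, 0.31, 0.49, 0.51, 0.80, 1.30):
        print(f"{v:.2f} V -> {logic_value(v)}")
    # 0.07, 0.31 and 0.49 V all read False; 0.51, 0.80 and 1.30 V all
    # read True. The voltage is continuously variable, the logic value
    # is not.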

Now, you may notice that the output of the sensor is a
voltage which is continuously variable over an infinite
number of values between 0 and 1 volts. That makes it
an *analog* device by definition.

The output of the sensor is an electrical charge.


Actually it is a current flowing into the input
impedance of an amplifier, and thus producing a voltage.


That's how the electrical charge is measured.


Two distinctly different things!

Charge is a quantized value, measured in Coulombs, which
are necessarily a multiple of "e", the unit of a single
charge. Charge is necessarily a digital parameter.

Current is the *movement* of charge, measured in Amperes,
which is an analog parameter because charges can move
faster or slower, and can move in an infinite number of
directions. Hence 5 electrons moving can result in an
infinite number of different current values.

For all practical purposes, the measurement of current
views movement direction as simply a variation of the
average speed of all of the electrons. Indeed, the
definition of "current" is

I = nAvQ

Where:

I is the current in Amperes,
n is the number of charged particles per unit volume
A is the cross-sectional area of the conductor
v is the drift velocity of the particles
Q is the charge on each particle

It probably comes as a surprise to you that the actual velocity
of the charge, in copper wire, is probably down in the millimeter
per second range.
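A worked number for that formula, assuming a copper conductor of 1
square millimetre cross-section carrying 1 Ampere (the textbook
free-electron density of copper is about 8.5 x 10^28 per cubic metre):

    # Drift velocity from the definition above: v = I / (n * A * Q)
    I = 1.0        # current in Amperes (assumed, for illustration)
    n = 8.5e28     # free electrons per cubic metre in copper (textbook value)
    A = 1.0e-6     # cross-section in square metres (1 mm^2, assumed)
    Q = 1.602e-19  # charge per electron in Coulombs

    v = I / (n * A * Q)
    print(v * 1000, "mm/s")   # roughly 0.07 mm/s -- a small fraction of
                              # a millimetre per second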

The amplifier is affected by the voltage, which is an
analog parameter.


If you measure it with sufficient precision you can use it to carry
digital data.


That statement is true, but it doesn't mean what you
think it does. Sufficient precision can be within 2
volts for example (the typical precision required for
TTL devices).

What you cannot do is measure individual charged
particles. Some go by faster than others, and as a result
such an attempt would end up appearing to measure
fractions of a charged particle.

The electrical
charge is dumped into a charge amplifier and it is this which outputs
the voltage. This is the first step in transforming sensor image into
the RAW data file.


And it is clearly an all analog process.

That analog data is fed into a device that converts it
to digital data.

The output of the charge amplifier is then digitised. This is the


Actually it is a voltage amplifier.


We've done that bit already.


And you lost, so why not learn something and cease posting
nonsense?

In the case of the Nikon D300 the RAW file can be output as either 12
bit or 14 bit. It is likely that before it can be transformed into
either of those formats it is processed in the camera in some other
format.


Why is that likely? That is what the output of the ADC
is, and all that necessarily happens after that is the
data stream is read by the CPU so that it can be written
to the file.


But in what form is it when it is read by the CPU - 12 bit or 14 bit or
something else again?


It is either 12 bit or it is 14 bit, that is what comes
from the ADC.

For several paragraphs now your comments have had
absolutely nothing to do with the text you are quoting.

What's your point?


I'm beginning to think you really don't know anything about the logic
of any of these processes.


That's why you have to have the definitions of so many
things cited for you? That's why you don't know where
the 12-bit/14-bit distinction is made? That's why you
think measuring current is measuring a charge? That's
why you think the sensor is digital?

That's why we go over and over the nonsense that you
continue to post????

Eric, get real. We *know* who doesn't understand it.
The question is why do you continue this charade?

The reason I've been stating that there are many
possible images which can produce the same digital data
set is because the analog signal to produce any single
one of those 4096 values has an infinite number of
possible values.

Not so.


Claim that all you like, but it is true.

The digital value of say 1612 corresponds with only one state
of the particular sensor element.


False. It corresponds to a range of values. Because
the range is analog, there are an infinite number of
possible values in that range.


Analog electronic charge. Haw!


Nobody said it was an analog charge Eric.

After the data is digitized, it has one value (1612) and
you cannot determine which of the infinite number of
analog values that could be 1612 it actually was to
start with.


You might be correct if they truly were analog, but they aren't.


So just exactly why is it that every single paper of any
repute at all on electronic sensors says it truly is
analog?

Can you find even one that says it isn't?

And why is it that all these devices used between the
"analog" part and the "digital" part are *always* called
Analog to Digital Converters? (Did you know that if it
actually was digital as you claim, it would be called a
CODEC, which is short for "coder/decoder", as in a
device that encodes and decodes digital data?)

No, if you understand the nature of the transform then
you realize that exactly the opposite is true. The
digital value of 1612 might, for example, represent a
range of voltages between .25 and .30 volts. When you
look at the digital value of 1612, you cannot determine
if it was .26675382, or .28778391. All you know is that
it was between .25 and .30 volts.
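A sketch of that loss of information, taking the .25 to .30 volt band
above as a stand-in quantization bin for the value 1612 (the bin width
is purely illustrative):

    LO, HI, CODE = 0.25, 0.30, 1612   # illustrative bin, as in the text

    def digitize(volts):
        # Every voltage in [LO, HI) produces exactly the same code.
        return CODE if LO <= volts < HI else None

    for v in (0.26675382, 0.28778391, 0.2999999):
        print(v, "->", digitize(v))
    # All three print 1612. Given only the 1612, there is no way to say
    # which of the infinitely many voltages in the bin produced it.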


I think the design of sensels has progressed since you helped invent
them. Their charge can be read with much greater precision than you
seem to think.


Well then I suggest you find an example! The fact is that
sensors commonly in use today have 50,000 to 100,000 full well
electron counts, and with a 12 bit ADC that has to be reduced to
only 4096 levels at the most.
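Back-of-envelope arithmetic on those figures, taking the 50,000
electron full well and the 12-bit ADC quoted above:

    full_well  = 50_000    # electrons, the lower figure quoted above
    adc_levels = 2 ** 12   # 4096 levels from a 12-bit ADC

    print(full_well / adc_levels)   # about 12.2 electrons per output level
    # Charges that differ by less than roughly a dozen electrons can
    # land in the same output level.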

Do you want me to cite you a reference describing read noise,
photon numbers, electrons, quantum efficiency, and how it is all
related? Here's a short list for you:

http://micro.magnet.fsu.edu/primer/d...amicrange.html
http://theory.uchicago.edu/~ejm/pix/...300_40D_tests/
http://www.clarkvision.com/imagedeta...hotons.and.qe/

Basic information theory. (Ever heard of Claude E. Shannon??)

Of course I have. What exactly does he have to do with it?


Everything.

He more or less defined all of it with mathematics, and
analyzed what it meant. Because of his work it was
decided that the telecommunications industry would
benefit from moving to a digital network and abandon
the analog network. It was also clear that digital
imaging would be much much preferred to analog imaging.
That is why inventions which appeared to be useful for
digital photography were developed instead of ignored,
even though there was no market at all for them at the
time.


I know all that. But what does he have to do with this particular
argument?


Everything.

All this discussion about what is analog and what is
digital, about why it is impossible to recreate the
exact analog side once a signal has been digitized. It
all relates to Shannon. If you actually did "know all
that", you wouldn't be saying all the hilarious things
you say...

It takes two to cooperate. Apart from that, I think we are approaching
this from two different directions. My background includes university
training in physics, electronics and mathematics. My terminology is
different from yours (e.g. transform) and so too is my approach to the
problem.


The problem is simply that you are discussing something that
you know virtually nothing about.


Nevertheless there
is software between the formation of the image on the sensor and the
writing of the data to the RAW file.


But that software is *not* image manipulation software.
All it necessarily does is encode the data so that it can
be written to a standard file format.


In the case of the D300 I do not think that is the case.


Then why don't you just cite some authoritative source
which describes it, eh?

That is why/how various problems
(e.g. vertical stripes) can be cured by a firmware upgrade.


That sounds more like the image manipulation that is done
to the JPEG conversion, not to raw data.


I'm talking of raw data. The Nikons apply a characterisation curve to
the sensor data before it is recorded as RAW file data. There is much
more.


Then you won't mind if I insist that you cite an
authoritative source which describes it, eh?

The only "curve" I know of is the custom tone curves,
which are applied to the JPEG images but have nothing to
do with the raw data.

I don't have the information but I am sure the process is reversible
(Subject to statistical error limitations).


Then why don't you just cite an authoritative source which describes
it, eh?

See
http://www.clarkvision.com/imagedeta...mance.summary/
for a better indication of the number of electrons you can expect to
deal with: 50,000 or more.


The little guy standing there looking into the well and
counting those electrons might be digital, eh?


In his own way, he is.

But since what cameras do is discharge the device
through an impedance and amplify the voltage, we don't
have a digital count of electrons, we have an analog
signal indicating how much charge there was. Of course
it includes noise, so even if we did know the exact
analog voltage (which, as has been shown, we cannot), we
still wouldn't be able to determine how many electrons
were actually captured.


How do you think you get a digital display to X significant figures on
a digital volt meter? That's the same way that the output of a charge
amplifier is digitised.


There ain't no such thing as a "charge amplifier". If
you had not noticed, DVM's generally only have 4 or 5
digits. They certainly are not counting charged particles!

Heh heh, what a hoot. The way that DVM's work is
generally that a voltage controlled oscillator is counted.
There isn't actually a sample and hold ADC involved,
mostly because they don't have enough precision.

Boyle and Smith were the inventors of electronic imaging 40 years ago.
If you were working with electronic imaging 40 years ago you _must_
have been working with Boyle and Smith.


Bull**** sonny, none of that is true. They were working
with CCD's ca 1969, and that is not even close to the
beginnings of electronic imaging.

I seem to recall Television being invented back in the
1930's or so. That is electronic imaging that existed
long before I was born. But well before the CCD was
invented at Bell Labs, I was working with Television.

By 1969 or so I was also working with digital imaging,
though I'll grant that it was nothing near as high tech
as a CCD. (Ever see "TTY art"? Digital imaging!)

--
Floyd L. Davidson http://www.apaflo.com/floyd_davidson
Ukpeagvik (Barrow, Alaska)
  #65  
Old June 3rd 09, 03:38 PM posted to rec.photo.digital,uk.rec.photo.misc
John McWilliams

Could the two of you, Messrs. Stevens and Davidson, take this offline?

There is some risk that this thread will become tedious.

--
lsmft
  #66  
Old June 3rd 09, 05:38 PM posted to rec.photo.digital,uk.rec.photo.misc
Chris H

In message , Floyd L. Davidson
writes
Eric Stevens wrote:
On Tue, 02 Jun 2009 22:00:08 -0800, (Floyd L.
Davidson) wrote:
You are referring to 'firmware' as though it was 'hardware'. Yet Nikon
can program the camera to behave differently so some software/firmware
must be involved.

Perfectly correct.

That does not cause hardware to become firmware.


Then what do you think firmware is? It's a mixture of hardware and
software.


Firmware is computer instructions in a ROM.


EEPROM, Flash, etc., often loaded into RAM at runtime. However, often it is
in ASICs and FPGAs with softcores.

BTW ROMs are rarely used these days. You are 15 years out of date.

Note that the entire data flow
from the sensor to the output of the ADC is hardware
based.


Yes... But this contains firmware. That is what is in these ASICs.

It is only after the ADC that the computer can
even see the data, so nothing before that can be done
"in firmware".


Not true. You seem to be over a decade out of date.

How can you claim to know what all of this means when
you are missing on all cylinders when it comes to the
very basics on which these technologies are built?


He is correct. You are not.

If you would like a 101 on embedded systems I am free tomorrow afternoon
when I finish presenting to the UK Ministry of Defence on this topic in
the morning.

See
http://www.safety-club.org.uk/diary....t=detail&id=80

and scroll down to the speakers; I am the first one after the welcome.

Floyd, what is your expertise in this field?

I am referring to that as hardware because in fact it is
hardware. It is not done with software/firmware.

Yes it is. What is more I can supply the tools to write the firmware.

So you think the analog amplifiers and the ADC are
firmware and can be done with software tools??? That's
a bit of abject ignorance.


And that's a bit of dishonest argument, unless you insist on believing
that the digitisation of the sensel charges is the entirety of
the process.


We were talking about the data flow from the sensor
through the ADC. That is where this silliness was
injected, hence that is what it reasonably is expected
to reference. If he meant that he can provide software
tools to work on the data *after* the area we were
discussing, then he should have indicated that his
discussion had no relationship to our discussion.



Not at all. I can provide the tools for the software (firmware) in the
ASIC that takes the information from the sensor.

It's done with hardware.
(I can't tell you which, but we have supplied software/firmware tools to
more than one OEM digital camera company (P&S variety).)
Not for those functions you haven't.


But he never claimed it was just for those functions.


Then exactly what would be the point of injecting that
statement? He no doubt could have written COBOL
software for Nikon's or Sony's accounting department,


No. We only work in the embedded sector. Mainly high reliability
systems.




--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/



  #67  
Old June 3rd 09, 10:05 PM posted to rec.photo.digital,uk.rec.photo.misc
Eric Stevens

On Wed, 03 Jun 2009 02:38:19 -0800, (Floyd L.
Davidson) wrote:

Eric Stevens wrote:
On Tue, 02 Jun 2009 21:48:52 -0800,
(Floyd L.
Davidson) wrote:

Eric Stevens wrote:
On Wed, 03 Jun 2009 11:37:23 +1000, Bob Larter
wrote:

Eric Stevens wrote:
Are you really saying that a given RAW data file can be created by
more than one image?

Yes. If you think carefully about how an image sensor works, it's obvious.

I may be missing something but it's not obvious to me.

A lens directs light from a scene so as to form an image on the
camera's sensor.

Different parts of the image fall on individual sensels which, in the
time allowed to them, capture photons which generate electrons. The
accumulated electrons form an electrical charge in each sensel.

Is there any reason to believe that the same scene would
necessarily produce the same effect on the sensor every
time? In fact, the light is not represented by a
steady, consistent flow of photons. The photons arrive
at irregular intervals. It's called "photon noise".
The effect is that if the same image is projected onto a
sensor, each time the sensor is read the image will be
recorded with unique data that is not identical to the
other times that image is recorded.


This is certainly a problem at low light levels but, by and large,


No, it is a problem at higher light levels.

this is what I meant by "statistical error limitations". When you get
to this level you are in danger of introducing quantum theory.


Ha ha ha. That is hilarious. What exactly is the danger in the
introduction of quantum theory???

(I really like that, the way you have in just two lines invoked both
quantum theory and statistical error limitation. Two phrases with
big words that mean absolutely nothing in the context in which
you have placed them! That's pretty good!)


For this topic you have to understand the nature of the underlying
quantum mechanics. Quantum mechanics is probabilistic.

According to the type of sensel, the charge is 'read' in one way or
another, and the quantity of charge converted to digital data.

Whoops, you just skipped over an awful lot of very fancy technology.
The data from the sensor is analog data.


The sensel counts photons which it converts to photons. The data is


I assume that is a typo and you meant "converts to electrons". If it
is something else, repeat it and I'll discuss what you did mean.

integer. This is converted to binary digital data.


That is not true (assuming you meant it as above). The data is analog.
It is not "integer", whatever it is you think that means. It is indeed
converted to binary digital data, though the fact that it is binary is
of no significance at all. It could be quaternary and nothing would be
different.


Imagine you are counting bricks and you only have an old fashioned
spring scale. For the sake of simplicity, let's say each brick weighs 1
lb. How do you count integer bricks with your analog scale?

1 lb on the scale means one brick. 12 lbs on the scale means 12
bricks. 12.2 lbs on the scale probably means 12 bricks. So too does
12.45 lbs or 11.6 lbs. You can never be entirely certain but the more
precise the scale, the more confident you can be. For this reason the
accuracy of your brick count is "subject to statistical error
limitations".

The same thing applies when electronics counts electrons. Electrical
measuring devices can have a level of accuracy beyond the
comprehension of people used to the mechanical world. Eight
significant figures is not unusual. I don't know what is employed in
camera sensors but I expect the better ones will be capable of
counting electrons to a high order of precision. Their data going in
is integer. Just like the brick counter, the data coming out will be
integer, even if it is obtained via what you call analog circuitry.
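A sketch of the brick-counting analogy, with hypothetical scale
readings rounded to an integer count:

    def bricks_from_reading(pounds, tolerance=0.45):
        # Round the analog scale reading to the nearest whole brick, but
        # refuse to guess when the reading sits too far from any integer.
        nearest = round(pounds)
        return nearest if abs(pounds - nearest) <= tolerance else None

    for reading in (12.0, 12.2, 12.45, 11.6, 12.5):
        print(reading, "lb ->", bricks_from_reading(reading), "bricks")
    # 12.0, 12.2, 12.45 and 11.6 lb all read as 12 bricks; 12.5 lb sits
    # exactly between counts, so no confident count is possible.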

You might say it is
encoded. I would say it is transformed. It doesn't matter which either
way.


It does matter. You can't seem to get the concept clear
that these terms have specific meanings when applied to
this technology. I am not using them the way I do just
because it is fun, I do it because it makes a difference
which term is used for someone who understands the
technology.


What then is the significant difference in your mind?

The process by which it
is converted to digital data is a one way process that cannot be
reversed with accuracy. I've gone into detail on that in another
article previously and will not repeat it at length here.


You should, if you want me to know what you are talking about.


It has been repeated enough times.

But of
course the 'transformation' can be run backwards, even if you don't
use the same hardware. It's the algorithm you have to reverse.


It cannot be run backwards. You cannot know from the
digital data value which part of the possible range of
analog values it came from, and hence you cannot
specifically reproduce it.


Let's ride with your digital to analog for the moment (although I don't
entirely agree with it). Let's say 1 electron is converted by the
process to a decimal value of 1.2 (it doesn't matter 1.2 what). 2
electrons give 2.4. 3 electrons give 3.5. .... 6 electrons give 7.5
and so on. You can construct a table relating number of electrons in,
and the decimal value out.

Now, say you have an image which is presented as a RAW data file. Say
you also know all the details of the process by means of which RAW
data has been derived from the output of everything after the table
above. You use this to work back to determine that for a particular
sensel the output of the A to D process was 3.6. Using the table you
conclude that that means that the sensel had probably captured 3
electrons. You can do this for every sensel on the sensor and by this
means you can reconstruct the original image which was projected onto
the sensor.
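A sketch of the table-inversion idea described above, using the
illustrative figures from the text (the entries for 4 and 5 electrons
are assumed):

    # Forward table as described above: electrons in -> decimal value out.
    forward = {1: 1.2, 2: 2.4, 3: 3.5, 4: 4.7, 5: 6.1, 6: 7.5}

    def electrons_from_output(value):
        # Invert by picking the electron count whose table entry is nearest.
        return min(forward, key=lambda e: abs(forward[e] - value))

    print(electrons_from_output(3.6))   # -> 3, as in the worked example above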

Get that clear. It is not simply my opinion of how the
technology works. It is a well known *fact* that you can discover
by reading up on it in any good serious text.


Find an example and quote from it.

The digital value of the charge is saved in an array which enables the
value of the charge for each individual sensel to be mapped to the
position of the sensel.

So it is true that the position is relevant.


Whoever argued otherwise?


You did.


I did? Please find where.

That original image on the sensor is characterised by the raw data
array. Any change in the image gives rise to a different data array.

Not necessarily. How significant the change is is what
determines whether it changes the raw data. Some
changes simply are not great enough to cause any
difference in the data set.


Changes are quantized. A different image on the sensor gives a
different number of electrons which are transformed into different
digital data.


Stop being asinine.


What's asinine about that?

First, changes are not what is quantized.


What I am saying is that changes are quantized in magnitude.
Subject to the sensitivity of the sensor and its associated
electronics, you can't have a very small change which simply isn't
large enough to influence the data set. You either have a change or
you don't.

Second, due to
the changing rate of arrival of photons (photon noise),
the number of actual electrons captured might not
change.


That's one of the factors I had in mind when I wrote "subject to
statistical error limitations"

Third, even if the number of electrons changes,
that might not necessarily change the analog current
produced when the sensor is read (for a variety of
reasons, most of which are generally called "read
noise").


That's another of the factors I had in mind when I wrote "subject to
statistical error limitations".

Fourth, if the current is changed it has to be
changed enough to move it from one quantization range to
the next higher or lower (quantization distortion).


Think bricks. I'm glad to see you understand.

Clearly there are at least three ways in which a change
in the light projected onto the sensor may or may not
actually cause a change in the resulting digital data.


But so what? We are considering what light each sensel was actually
exposed to.

Each of those has been explained to you previously.
These things are not simple opinions, they are well
known facts that you can research the details for any
time you wish.

All you have done is reiterate the claim. Can you give a step by step
explanation along the lines of the one I have just given?


You've been given step by step examples several times
now. Don't you think it is time to pay attention?
Here's a bullet list for your google searches:

Photon noise limited
read noise limited
Quantization distortion


"subject to statistical error limitations"



Eric Stevens
  #68  
Old June 3rd 09, 10:05 PM posted to rec.photo.digital,uk.rec.photo.misc
Eric Stevens

On Wed, 03 Jun 2009 07:38:32 -0700, John McWilliams
wrote:

Could the two of you, Messrs. Stevens and Davidson, take this offline?

There is some risk that this thread will become tedious.


A damned good idea. I'm taking it right off line.



Eric Stevens
  #69  
Old June 3rd 09, 11:09 PM posted to rec.photo.digital,uk.rec.photo.misc
Floyd L. Davidson

John McWilliams wrote:
Could the two of you, Messrs. Stevens and Davidson, take this offline?

There is some risk that this thread will become tedious.


You already are tedious, but you aren't offline. Hence there
is no requirement that someone else go offline at your request.

--
Floyd L. Davidson http://www.apaflo.com/floyd_davidson
Ukpeagvik (Barrow, Alaska)
  #70  
Old June 3rd 09, 11:28 PM posted to rec.photo.digital,uk.rec.photo.misc
Floyd L. Davidson

Chris H wrote:
In message , Floyd L. Davidson
writes
Eric Stevens wrote:
On Tue, 02 Jun 2009 22:00:08 -0800, (Floyd L.
Davidson) wrote:
You are referring to 'firmware' as though it was 'hardware'. Yet Nikon
can program the camera to behave differently so some software/firmware
must be involved.

Perfectly correct.

That does not cause hardware to become firmware.

Then what do you think firmware is? It's a mixture of hardware and
software.


Firmware is computer instructions in a ROM.


EEPROM, Flash, etc., often loaded into RAM at runtime. However, often it is
in ASICs and FPGAs with softcores.

BTW ROMs are rarely used these days. You are 15 years out of date.


ASIC's and FPGA's contain Read Only Memory (ROM), and
an EEPROM obviously does too (it *is* a ROM).

Note that the entire data flow
from the sensor to the output of the ADC is hardware
based.


Yes... But this contains firmware. That is what is in these ASICs.


Are you sure of that? It certainly is possible, but I
don't know that any of the current crop of cameras are
using ASIC's that process raw sensor data with
instructions from firmware.

Please be specific, and cite a credible reference to
something that suggests it is commonly true.

It is only after the ADC that the computer can
even see the data, so nothing before that can be done
"in firmware".


Not true. You seem to be over a decade out of date.


So let's see you provide a cite to something which
verifies your claim.

How can you claim to know what all of this means when
you are missing on all cylinders when it comes to the
very basics on which these technologies are built?


He is correct. You are not.

If you would like a 101 on embedded systems I am free tomorrow afternoon
when I finish presenting to the UK Ministry of Defence on this topic in
the morning.

See
http://www.safety-club.org.uk/diary....t=detail&id=80

and scroll down to the speakers I am the first one after the welcome.


Wonderful. But, errr, do you know *anything* about
cameras?

If you don't know what ROM is, how can you?

We were talking about the data flow from the sensor
through the ADC. That is where this silliness was
injected, hence that is what it reasonably is expected
to reference. If he meant that he can provide software
tools to work on the data *after* the area we were
discussing, then he should have indicated that his
discussion had no relationship to our discussion.


Not at all. I can provide the tools for the software (firmware) in the
ASIC that takes the information from the sensor.


Okay. Now, is that firmware just controlling the data
flow or is it manipulating the data? Is there a CPU
in the ASIC?

--
Floyd L. Davidson http://www.apaflo.com/floyd_davidson
Ukpeagvik (Barrow, Alaska)
 



