A Photography forum. PhotoBanter.com

Sigma, tired of LYING about resolution, admit they needed more



 
 
  #1  
Old October 14th 10, 01:52 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
TheRealSteve


On Wed, 13 Oct 2010 23:40:59 -0400, nospam
wrote:

In article , TheRealSteve
wrote:

The terminology used in Bryce Bayer's patent calls the green sensors
luminance-sensitive and the red and blue ones chrominance-sensitive.

true, but the patent is also 30 years old. modern bayer processing is
vastly more sophisticated than that.


The bayer sensor is the same. Only the processing has changed.


actually the sensor has changed in some cases. there have been
variations on the filters, including cmyg, rgbe (emerald) and even rgbw
(white - no filter) patterns and i think one sensor used two shades of
green but i don't remember which one off hand.


Changing the color filters doesn't mean the sensor is now doing
demosaicing. It's still done off the sensor. And if the pattern
stays the same, so can the algorithm. Only the weighting from sensor
color to final image has to change. However, when you start talking
about things like cmyg, rgbe, it's no longer a "bayer" sensor because
you don't have the bayer pattern. There is no doubling of the population
of one color vs. the other two. Those other patterns don't try to
match the physiology of the human eye the way the bayer sensor does.
With the cmyg pattern, all of the colors are on a rectangular grid.
With the bayer pattern, only 2 colors are on a rectangular pattern and the
3rd is on a quincunx pattern. Change the pattern and it's no longer a
bayer sensor.
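In rough Python/numpy terms, the layout looks like this (my own sketch of
the standard RGGB tile, not anything taken from Bayer's patent text):

import numpy as np

# Colour-filter map for an RGGB bayer mosaic: 0 = red, 1 = green, 2 = blue.
def bayer_cfa(height, width):
    cfa = np.empty((height, width), dtype=np.uint8)
    cfa[0::2, 0::2] = 0   # red on even rows / even cols   -> rectangular grid
    cfa[0::2, 1::2] = 1   # green on even rows / odd cols
    cfa[1::2, 0::2] = 1   # green on odd rows / even cols  -> together a quincunx
    cfa[1::2, 1::2] = 2   # blue on odd rows / odd cols    -> rectangular grid
    return cfa

print(bayer_cfa(4, 4))    # half the sites are green, a quarter each red and blue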

BTW, the one that uses two shades of green is a Nikon patent. The
green pixels alternate between darker shade and lighter shade. The
lower light sensitivity of the darker shaded green pixel means it can
capture image highlights better, allowing for greater dynamic range in
the final processed image. I don't think it actually is in a camera
available to the public yet.
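If it helps, here is a rough sketch of the *principle* only -- my own guess
at how two green sensitivities could be merged, not Nikon's actual patented
method:

import numpy as np

# Hypothetical illustration: where the normal green clips, fall back to the
# scaled reading from the darker-filtered (less sensitive) green pixel.
def merge_greens(g_bright, g_dark, gain=4.0, clip_level=1.0):
    g_bright = np.asarray(g_bright, dtype=float)
    g_dark = np.asarray(g_dark, dtype=float)
    return np.where(g_bright >= clip_level, g_dark * gain, g_bright)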

However, how the bayer sensor works is really irrelevant. What
matters is how the results of the bayer sensor are demosaiced.

demosaicing *is* how the sensor works.


No. The sensor doesn't do demosaicing. That's up to post processing
software, either in the camera or on an external computer if you shoot
raw. No demosaicing is done in the camera if you shoot raw, which is
one of the main advantages of raw. You can use more complex
algorithms than a camera can do to demosaic the bayer sensor data into
an image. You can even reprocess the raw data to an image again if
better algorithms are invented in the future. But again, the sensor
does not do the demosaicing.
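As a concrete example of that separation, here is a sketch using the
third-party rawpy/LibRaw bindings (nothing to do with anyone in this
thread) and a made-up file name:

import rawpy

# Demosaic the same raw file twice with different algorithms; the sensor
# data on disk never changes, only the processing applied to it.
def demosaic(path, algorithm):
    with rawpy.imread(path) as raw:
        return raw.postprocess(demosaic_algorithm=algorithm)

fast = demosaic('DSC_0001.NEF', rawpy.DemosaicAlgorithm.LINEAR)   # quick and crude
better = demosaic('DSC_0001.NEF', rawpy.DemosaicAlgorithm.AHD)    # slower, better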


unless you are a sensor designer or a raw software developer, the
sensor and processing can be taken as a unit. *all* bayer sensors
*will* have demosaicing done to provide an image to the user.


It's not true that only a sensor designer or raw software developer
will treat the sensor and processing as a unit. Anyone shooting raw
has already made the choice to treat them separately. And personally,
I do see differences in the results between different software
packages even with no "tweaking". A raw image processed by RawShooter
Essentials 2006 doesn't look as good to me as one processed by Capture
NX 2. There could be all sorts of reasons for that. Even if the same
interpolation algorithm is used, Nikon may know the exact color
of their filters better, so they can have a better set of interpolation
weights (if those are needed by the algorithm) to come up with a better
processed image.
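To make the "weights" idea concrete: after interpolation, the
camera-specific part is largely a 3x3 matrix from the sensor's filter
responses to a standard colour space, and that matrix depends on knowing
the exact filter dyes. The numbers below are invented for illustration,
not Nikon's:

import numpy as np

# Made-up camera-to-sRGB matrix; each row sums to 1 so white stays white.
camera_to_srgb = np.array([
    [ 1.80, -0.60, -0.20],
    [-0.25,  1.45, -0.20],
    [ 0.05, -0.55,  1.50],
])

def apply_colour_matrix(demosaiced_rgb, matrix):
    # demosaiced_rgb: H x W x 3 camera-space values in 0..1
    return np.clip(demosaiced_rgb @ matrix.T, 0.0, 1.0)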

There
are plenty of different algorithms that can be used with the usual
tradeoffs of processing speed vs. visual performance.

absolutely, and not surprisingly, the detractors pick the absolute
worst possible method, so bad it's not ever used.


You probably can't say for sure what methods are used or not used.


the exact algorithm, no, but there is quite a bit of available
information on bayer processing that *is* available to the public.


Obviously. But irrelevant, because the different algorithms perform so
markedly differently.

Steve
  #2  
Old October 14th 10, 08:19 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
nospam

In article , TheRealSteve
wrote:

The bayer sensor is the same. Only the processing has changed.


actually the sensor has changed in some cases. there have been
variations on the filters, including cmyg, rgbe (emerald) and even rgbw
(white - no filter) patterns and i think one sensor used two shades of
green but i don't remember which one off hand.


Changing the color filters doesn't mean the sensor is now doing
demosaicing. It's still done off the sensor.


so what? pointless nitpicking.

And if the pattern
stays the same so can the algorithm.


nope. if you have rgbw, the algorithm is *not* the same. one pixel has
no colour filter. with cmyg, you need to convert to rgb at some point.
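for what it's worth, the textbook relations (assuming idealised
complementary filters c = g+b, m = r+b, y = r+g; a real camera uses a
measured matrix instead) look roughly like this:

# Idealised CMYG -> RGB; real sensors need a measured conversion matrix.
def cmyg_to_rgb(c, m, y, g):
    r = (m + y - c) / 2.0
    b = (c + m - y) / 2.0
    g_derived = (c + y - m) / 2.0
    return r, (g + g_derived) / 2.0, b   # average the direct and derived green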

Only the weighting from sensor
color to final image has to change. However, when you start talking
about things like cmyg, rgbe, it's no longer a "bayer" sensor because
you don't have the bayer pattern.


it's not the same pattern as what bayer originally picked but it's
still considered a bayer sensor.

There is no doubling the population
of one color vs. the other two. Those other patterns don't try to
match the physiology of the human eye the way the bayer sensor does.
With the cmyg pattern, all of the colors are on a rectangular grid.


same grid as rggb, just cmyg.

With the bayer pattern, only 2 colors are on a rectangular pattern and the
3rd is a quincunx pattern. Change the pattern and it's no longer a
bayer sensor.


nope.

BTW, the one that uses two shades of green is a Nikon patent. The
green pixels alternate between darker shade and lighter shade. The
lower light sensitivity of the darker shaded green pixel means it can
capture image highlights better, allowing for greater dynamic range in
the final processed image. I don't think it actually is in a camera
available to the public yet.


it's not nikon and yes it does exist. i don't recall who uses it, i
think maybe sony but i'm not sure.

nikon has a patent on a full colour pixel using dichroic mirrors that
is probably impossible to manufacture at a competitive price.

However, how the bayer sensor works is really irrelevant. What
matters is how the results of the bayer sensor are demosaiced.

demosaicing *is* how the sensor works.

No. The sensor doesn't do demosaicing. That's up to post processing
software, either in the camera or on an external computer if you shoot
raw. No demosaicing is done in the camera if you shoot raw, which is
one of the main advantages of raw. You can use more complex
algorithms than a camera can do to demosaic the bayer sensor data into
an image. You can even reprocess the raw data to an image again if
better algorithms are invented in the future. But again, the sensor
does not do the demosaicing.


unless you are a sensor designer or a raw software developer, the
sensor and processing can be taken as a unit. *all* bayer sensors
*will* have demosaicing done to provide an image to the user.


It's not true that only a sensor designer or raw software developer
will treat the sensor and processing as a unit. Anyone shooting raw
has already made the choice to treat them separately.


however, they still process the raw and look at the final photo, likely
a jpeg.

And personally,
I do see differences in the results between different software
packages even with no "tweaking". A raw image processed by RawShooter
Essentials 2006 doesn't look as good to me as one processed by Capture
NX 2. There could be all sorts of reasons for that. Even if the same
interpolation algorithm is used, Nikon may know better the exact color
of their filter so they can have a better set of interpolation weights
(if those are needed by the algorithm) to come up with a better
processed image.


nikon definitely knows a whole lot more about their sensors and filters
than third parties. adobe does a lot of testing to determine the right
parameters for their raw converter. it's very good but it's not exactly
the same.
  #3  
Old October 16th 10, 04:07 AM posted to rec.photo.digital.slr-systems,rec.photo.digital
TheRealSteve


On Thu, 14 Oct 2010 15:19:02 -0400, nospam
wrote:

In article , TheRealSteve
wrote:

The bayer sensor is the same. Only the processing has changed.

actually the sensor has changed in some cases. there have been
variations on the filters, including cmyg, rgbe (emerald) and even rgbw
(white - no filter) patterns and i think one sensor used two shades of
green but i don't remember which one off hand.


Changing the color filters doesn't mean the sensor is now doing
demosaicing. It's still done off the sensor.


so what? pointless nitpicking.


Maybe to you but not to anyone who shoots raw and processes off
camera. There are a lot of us who do that.

And if the pattern
stays the same so can the algorithm.


nope. if you have rgbw, the algorithm is *not* the same. one pixel has
no colour filter. with cmyg, you need to convert to rgb at some point.


Notice I said "And if the pattern stays the same"... rgbw does not
have the same pattern so it's not the same algorithm.

Only the weighting from sensor
color to final image has to change. However, when you start talking
about things like cmyg, rgbe, it's no longer a "bayer" sensor because
you don't have the bayer pattern.


it's not the same pattern as what bayer originally picked but it's
still considered a bayer sensor.


Maybe by you but it's not a bayer sensor because it doesn't have some
of the most important properties that bayer patented. I.e., taking
the physiology of the eye into consideration when determining the
spacing and density of the various colors.

There is no doubling the population
of one color vs. the other two. Those other patterns don't try to
match the physiology of the human eye the way the bayer sensor does.
With the cmyg pattern, all of the colors are on a rectangular grid.


same grid as rggb, just cmyg.


No, cmyg is nothing like the grid of rggb. You don't have the
doubling of the pixel density of a single color, which gives the
higher definition luminance of the bayer sensor.

If you actually believe a sensor with 4 colors each with a rectangular
pattern is the same sensor pattern as one with 3 colors, 2 of which
are rectangular and 1 is quincunx, then there really is no point
continuing a discussion. You're obviously trolling when you discount
something so innately obvious.


With the bayer pattern, only 2 colors are on a rectangular pattern and the
3rd is a quincunx pattern. Change the pattern and it's no longer a
bayer sensor.


nope.


Yup.

BTW, the one that uses two shades of green is a Nikon patent. The
green pixels alternate between darker shade and lighter shade. The
lower light sensitivity of the darker shaded green pixel means it can
capture image highlights better, allowing for greater dynamic range in
the final processed image. I don't think it actually is in a camera
available to the public yet.


it's not nikon and yes it does exist. i don't recall who uses it, i
think maybe sony but i'm not sure.


Yes, it is Nikon. I dug up a link for you:
http://www.creativemayhem.com.au/?p=236


nikon has a patent on a full colour pixel using dichroic mirrors that
is probably impossible to manufacture at a competitive price.


Yes, they have that also. Here's another link for you on that one:
http://en.wikipedia.org/wiki/File:Ni...roicPatent.png


However, how the bayer sensor works is really irrelevant. What
matters is how the results of the bayer sensor are demosaiced.

demosaicing *is* how the sensor works.

No. The sensor doesn't do demosaicing. That's up to post processing
software, either in the camera or on an external computer if you shoot
raw. No demosaicing is done in the camera if you shoot raw, which is
one of the main advantages of raw. You can use more complex
algorithms than a camera can do to demosaic the bayer sensor data into
an image. You can even reprocess the raw data to an image again if
better algorithms are invented in the future. But again, the sensor
does not do the demosaicing.

unless you are a sensor designer or a raw software developer, the
sensor and processing can be taken as a unit. *all* bayer sensors
*will* have demosaicing done to provide an image to the user.


It's not true that only a sensor designer or raw software developer
will treat the sensor and processing as a unit. Anyone shooting raw
has already made the choice to treat them separately.


however, they still process the raw and look at the final photo, likely
a jpeg.


Of course they process the raw. That's the point I was making. People
who shoot raw process the raw separately, out of the camera, and do
not treat the sensor and the processing as a unit. They treat them as
separately as they can possibly be.


And personally,
I do see differences in the results between different software
packages even with no "tweaking". A raw image processed by RawShooter
Essentials 2006 doesn't look as good to me as one processed by Capture
NX 2. There could be all sorts of reasons for that. Even if the same
interpolation algorithm is used, Nikon may know better the exact color
of their filter so they can have a better set of interpolation weights
(if those are needed by the algorithm) to come up with a better
processed image.


nikon definitely knows a whole lot more about their sensors and filters
than third parties. adobe does a lot of testing to determine the right
parameters for their raw converter. it's very good but it's not exactly
the same.


Which is why I now use Capture NX 2 for my Nikon cameras.
  #4  
Old October 16th 10, 04:14 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
nospam

In article , TheRealSteve
wrote:

Changing the color filters doesn't mean the sensor is now doing
demosaicing. It's still done off the sensor.


so what? pointless nitpicking.


Maybe to you but not to anyone who shoots raw and processes off
camera. There are a lot of us who do that.


processing raw does not mean you know the chromaticity of the filters
on the sensor, nor do you actually examine the raw data directly.

you are looking at the final result and tweaking some of the parameters.

And if the pattern
stays the same so can the algorithm.


nope. if you have rgbw, the algorithm is *not* the same. one pixel has
no colour filter. with cmyg, you need to convert to rgb at some point.


Notice I said "And if the pattern stays the same"... rgbw does not
have the same pattern so it's not the same algorithm.


the algorithm can change even with the same pattern. there is no one
single way to process bayer, which is why different raw converters
produce different results.
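the simplest end of that range is plain bilinear interpolation, roughly as
below -- a sketch for an rggb mosaic, not any converter's actual code; vng,
ahd and the rest exist because this looks poor along edges:

import numpy as np
from scipy.ndimage import convolve

# Bilinear demosaic of an RGGB mosaic: fill each missing colour at each
# site by averaging its nearest same-colour neighbours.
def bilinear_demosaic(mosaic):
    h, w = mosaic.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # quarter-density planes
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # half-density green

    r = convolve(mosaic * r_mask, k_rb, mode='mirror')
    g = convolve(mosaic * g_mask, k_g,  mode='mirror')
    b = convolve(mosaic * b_mask, k_rb, mode='mirror')
    return np.dstack([r, g, b])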

Only the weighting from sensor
color to final image has to change. However, when you start talking
about things like cmyg, rgbe, it's no longer a "bayer" sensor because
you don't have the bayer pattern.


it's not the same pattern as what bayer originally picked but it's
still considered a bayer sensor.


Maybe by you but it's not a bayer sensor because it doesn't have some
of the most important properties that bayer patented. I.e., taking
the physiology of the eye into consideration when determining the
spacing and density of the various colors.


it's still called a bayer sensor. more useless nitpicking.

There is no doubling the population
of one color vs. the other two. Those other patterns don't try to
match the physiology of the human eye the way the bayer sensor does.
With the cmyg pattern, all of the colors are on a rectangular grid.


same grid as rggb, just cmyg.


No, cmyg is nothing like the grid of rggb.


no ****.

You don't have the
doubling of the pixel density of a single color, which gives the
higher definition luminance of the bayer sensor.


doubling green doesn't do what you think it does.

If you actually believe a sensor with 4 colors each with a rectangular
pattern is the same sensor pattern as one with 3 colors, 2 of which
are rectangular and 1 is quincunx, then there really is no point
continuing a discussion. You're obviously trolling when you discount
something so innately obvious.


bayer doesn't work that way.

BTW, the one that uses two shades of green is a Nikon patent. The
green pixels alternate between darker shade and lighter shade. The
lower light sensitivity of the darker shaded green pixel means it can
capture image highlights better, allowing for greater dynamic range in
the final processed image. I don't think it actually is in a camera
available to the public yet.


it's not nikon and yes it does exist. i don't recall who uses it, i
think maybe sony but i'm not sure.


Yes, it is Nikon. I dug up a link for you:
http://www.creativemayhem.com.au/?p=236


thanks for the link. as i said i didn't recall who made it, but
apparently it is nikon.

nikon has a patent on a full colour pixel using dichroic mirrors that
is probably impossible to manufacture at a competitive price.


Yes, they have that also. Here's another link for you on that one:
http://en.wikipedia.org/wiki/File:Ni...roicPatent.png


yep, and that's going to be a royal bitch to manufacture at a
marketable price, let alone deal with noise.

unless you are a sensor designer or a raw software developer, the
sensor and processing can be taken as a unit. *all* bayer sensors
*will* have demosaicing done to provide an image to the user.

It's not true that only a sensor designer or raw software developer
will treat the sensor and processing as a unit. Anyone shooting raw
has already made the choice to treat them separately.


however, they still process the raw and look at the final photo, likely
a jpeg.


Of course they process the raw. That's the point I was making. People
who shoot raw process the raw separately, out of the camera, and do
not treat the sensor and the processing as a unit. They treat them as
separately as they can possibly be.


as i said before they do *not* look at the raw data directly. they look
at the final image and it can be considered a unit, one which can be
adjusted.
  #5  
Old October 16th 10, 09:29 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
TheRealSteve


On Sat, 16 Oct 2010 11:14:45 -0400, nospam
wrote:

In article , TheRealSteve
wrote:

Changing the color filters doesn't mean the sensor is now doing
demosaicing. It's still done off the sensor.

so what? pointless nitpicking.


Maybe to you but not to anyone who shoots raw and processes off
camera. There are a lot of us who do that.


processing raw does not mean you know the chromaticity of the filters
on the sensor, nor do you actually examine the raw data directly.

you are looking at the final result and tweaking some of the parameters.


Exactly, which is why we treat the sensor and the processing
separately. So we can do things like tweaking processing parameters
while working with the raw data as input. You can't do that if the
sensor and processing were a single unit.
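Again with the third-party rawpy bindings and a made-up file name, purely
as a sketch: the parameters change, the sensor data never does.

import rawpy

# Same sensor data, two different processing choices.
with rawpy.imread('DSC_0002.NEF') as raw:
    as_shot = raw.postprocess(use_camera_wb=True)
with rawpy.imread('DSC_0002.NEF') as raw:
    neutral = raw.postprocess(user_wb=[1.0, 1.0, 1.0, 1.0], no_auto_bright=True)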


And if the pattern
stays the same so can the algorithm.

nope. if you have rgbw, the algorithm is *not* the same. one pixel has
no colour filter. with cmyg, you need to convert to rgb at some point.


Notice I said "And if the pattern stays the same"... rgbw does not
have the same pattern so it's not the same algorithm.


the algorithm can change even with the same pattern. there is no one
single way to process bayer, which is why different raw converters
produce different results.


The point is that the algorithm doesn't *have* to change if it's the
same pattern. Of course it *can* change, but it doesn't have to.

Only the weighting from sensor
color to final image has to change. However, when you start talking
about things like cmyg, rgbe, it's no longer a "bayer" sensor because
you don't have the bayer pattern.

it's not the same pattern as what bayer originally picked but it's
still considered a bayer sensor.


Maybe by you but it's not a bayer sensor because it doesn't have some
of the most important properties that bayer patented. I.e., taking
the physiology of the eye into consideration when determining the
spacing and density of the various colors.


it's still called a bayer sensor. more useless nitpicking.


No, it's not called a bayer sensor. If you're going to define a bayer
sensor so loosely that any pattern of colors can be called a bayer
sensor, you might as well call a Foveon sensor a bayer sensor. That's
the problem with too loose a definition... no one knows what anyone is
talking about. We all know what a bayer sensor is, and it's not one
that doesn't have the bayer pattern.

There is no doubling the population
of one color vs. the other two. Those other patterns don't try to
match the physiology of the human eye the way the bayer sensor does.
With the cmyg pattern, all of the colors are on a rectangular grid.

same grid as rggb, just cmyg.


No, cmyg is nothing like the grid of rggb.


no ****.


lol.. First you say, and I quote: "same grid as rggb, just cmyg." and
then when I say cmyg is nothing like the grid of rggb you say "no
****." Well, at least now you're agreeing with me and disagreeing
with yourself. You need to get your story straight.

You don't have the
doubling of the pixel density of a single color, which gives the
higher definition luminance of the bayer sensor.


doubling green doesn't do what you think it does.


I know what it does. So does Mr. Bayer. You apparently don't.

If you actually believe a sensor with 4 colors each with a rectangular
pattern is the same sensor pattern as one with 3 colors, 2 of which
are rectangular and 1 is quincunx, then there really is no point
continuing a discussion. You're obviously trolling when you discount
something so innately obvious.


bayer doesn't work that way.


Wow, so you really don't know what a bayer sensor is if you say that
"bayer doesn't work that way" to the basic definition of what it is.

BTW, the one that uses two shades of green is a Nikon patent. The
green pixels alternate between darker shade and lighter shade. The
lower light sensitivity of the darker shaded green pixel means it can
capture image highlights better, allowing for greater dynamic range in
the final processed image. I don't think it actually is in a camera
available to the public yet.

it's not nikon and yes it does exist. i don't recall who uses it, i
think maybe sony but i'm not sure.


Yes, it is Nikon. I dug up a link for you:
http://www.creativemayhem.com.au/?p=236


thanks for the link. as i said i didn't recall who made it, but
apparently it is nikon.

nikon has a patent on a full colour pixel using dichroic mirrors that
is probably impossible to manufacture at a competitive price.


Yes, they have that also. Here's another link for you on that one:
http://en.wikipedia.org/wiki/File:Ni...roicPatent.png


yep, and that's going to be a royal bitch to manufacture at a
marketable price, let alone deal with noise.


It's probably much easier just to stick with 3 chips, one for each
color, than make that thing.

unless you are a sensor designer or a raw software developer, the
sensor and processing can be taken as a unit. *all* bayer sensors
*will* have demosaicing done to provide an image to the user.

It's not true that only a sensor designer or raw software developer
will treat the sensor and processing as a unit. Anyone shooting raw
has already made the choice to treat them separately.

however, they still process the raw and look at the final photo, likely
a jpeg.


Of course they process the raw. That's the point I was making. People
who shoot raw process the raw separately, out of the camera, and do
not treat the sensor and the processing as a unit. They treat them as
separately as they can possibly be.


as i said before they do *not* look at the raw data directly. they look
at the final image and it can be considered a unit, one which can be
adjusted.


What you actually said is: "unless you are a sensor designer or a raw
software developer, the sensor and processing can be taken as a unit."

That is just plain mistaken as anyone who shoots raw can tell you.
They don't take the sensor and the processing as a unit. Your
argument that you don't actually "look" at raw data directly is
specious. You don't actually "look" at jpeg data directly, or any
other image format either. If you tried, you would just see strings
of bits.

Steve
  #6  
Old October 16th 10, 11:49 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
Wolfgang Weisselberg

nospam wrote:
wrote:


Changing the color filters doesn't mean the sensor is now doing
demosaicing. It's still done off the sensor.


so what? pointless nitpicking.


Maybe to you but not to anyone who shoots raw and processes off
camera. There are a lot of us who do that.


processing raw does not mean you know the chromaticity of the filters
on the sensor,


It does not mean you don't know.

nor do you actually examine the raw data directly.


It does not mean you don't examine. In fact, enough RAW
converters display the raw RAW values under a pixel.
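And in code you can pull out the same numbers -- again the third-party
rawpy bindings and a made-up file name, just to show the undemosaiced
values really are accessible:

import rawpy

with rawpy.imread('DSC_0001.NEF') as raw:
    y, x = 100, 200
    value = raw.raw_image[y, x]                              # the one filtered sample at that site
    colour = raw.color_desc.decode()[raw.raw_colors[y, x]]   # 'R', 'G' or 'B'
    print(value, colour)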

you are looking at the final result and tweaking some of the parameters.


Not in all cases. Not even in most cases.

the algorithm can change even with the same pattern. there is no one
single way to process bayer, which is why different raw converters
produce different results.


That's easily explained by different parameters.

Only the weighting from sensor
color to final image has to change. However, when you start talking
about things like cmyg, rgbe, it's no longer a "bayer" sensor because
you don't have the bayer pattern.


it's not the same pattern as what bayer originally picked but it's
still considered a bayer sensor.


Maybe by you but it's not a bayer sensor because it doesn't have some
of the most important properties that bayer patented. I.e., taking
the physiology of the eye into consideration when determining the
spacing and density of the various colors.


it's still called a bayer sensor. more useless nitpicking.


By you, by maybe 3 more people and by nobody else.
Nitpicking like that is relevant for RAW conversion, else you
decode a cmyg as rggb and get really strange results. Of course
you'd rather not see any difference.

There is no doubling the population
of one color vs. the other two. Those other patterns don't try to
match the physiology of the human eye the way the bayer sensor does.
With the cmyg pattern, all of the colors are on a rectangular grid.


same grid as rggb, just cmyg.


No, cmyg is nothing like the grid of rggb.


no ****.


Nice of you to admit it.

You don't have the
doubling of the pixel density of a single color, which gives the
higher definition luminance of the bayer sensor.


doubling green doesn't do what you think it does.


Prove it.
Show URLs.

If you actually believe a sensor with 4 colors each with a rectangular
pattern is the same sensor pattern as one with 3 colors, 2 of which
are rectangular and 1 is quincunx, then there really is no point
continuing a discussion. You're obviously trolling when you discount
something so innately obvious.


bayer doesn't work that way.


The way Bayer works is described in the patent.

BTW, the one that uses two shades of green is a Nikon patent. The
green pixels alternate between darker shade and lighter shade. The
lower light sensitivity of the darker shaded green pixel means it can
capture image highlights better, allowing for greater dynamic range in
the final processed image. I don't think it actually is in a camera
available to the public yet.


it's not nikon and yes it does exist. i don't recall who uses it, i
think maybe sony but i'm not sure.


Yes, it is Nikon. I dug up a link for you:
http://www.creativemayhem.com.au/?p=236


thanks for the link. as i said i didn't recall who made it, but
apparently it is nikon.


Sheesh, you don't even read your own quoted posts. "i don't recall
who *USES* it" (emphasis mine) is different from "i didn't recall
who made it".

And you DID write "it's not nikon".

You are wrong on trivially checkable facts about your own writing
--- how can we then believe a single word you say about more
complicated, intricate things?

Of course they process the raw. That's the point I was making. People
who shoot raw process the raw separately, out of the camera, and do
not treat the sensor and the processing as a unit. They treat them as
separately as they can possibly be.


as i said before they do *not* look at the raw data directly.


They also do not look at the JPEG directly. Nobody does (except
JPEG developers). Everyone looks at the computer's interpretation
of JPEG.

they look at the final image


In the end they usually do, however they first look at the
computer's interpretation of RAW.

and it can be considered a unit, one which can be
adjusted.


JPEG can also be adjusted. So what?

Fact is that RAW shooting separates the sensor from the processing.
The simple factoid that there are several RAW processors to choose
from breaks any unit of sensor and processing.

-Wolfgang
 



