October 16th 2010, 09:29 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
TheRealSteve
Sigma, tired of LYING about resolution, admit they needed more


On Sat, 16 Oct 2010 11:14:45 -0400, nospam wrote:

> In article , TheRealSteve wrote:
>
>>>> Changing the color filters doesn't mean the sensor is now doing
>>>> demosaicing. It's still done off the sensor.
>>>
>>> so what? pointless nitpicking.
>>
>> Maybe to you but not to anyone who shoots raw and processes off
>> camera. There are a lot of us who do that.
>
> processing raw does not mean you know the chromaticity of the filters
> on the sensor, nor do you actually examine the raw data directly.
>
> you are looking at the final result and tweaking some of the parameters.

Exactly, which is why we treat the sensor and the processing
separately. So we can do things like tweaking processing parameters
while working with the raw data as input. You can't do that if the
sensor and processing were a single unit.
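
To make the separation concrete, here's a rough numpy sketch (a toy
normalized-convolution interpolation, not any camera's actual algorithm;
all data made up). The mosaic array stands in for what the sensor
delivers; the demosaic function is processing you run on it afterward:

import numpy as np
from scipy.ndimage import convolve

def bayer_masks(shape, pattern="RGGB"):
    # which pixel sites carry which color filter, for a 2x2 repeating tile
    masks = {c: np.zeros(shape, dtype=bool) for c in "RGB"}
    for i, c in enumerate(pattern):
        masks[c][i // 2::2, i % 2::2] = True
    return masks

def demosaic(mosaic, pattern="RGGB"):
    # normalized convolution: average the known same-color neighbors at
    # every site; a crude stand-in for real demosaicing algorithms
    masks = bayer_masks(mosaic.shape, pattern)
    kernel = np.ones((3, 3))
    out = np.empty(mosaic.shape + (3,))
    for ch, c in enumerate("RGB"):
        known = np.where(masks[c], mosaic, 0.0)
        out[..., ch] = convolve(known, kernel) / convolve(masks[c].astype(float), kernel)
    return out

# the sensor's job ends at `mosaic`; everything after is processing
mosaic = np.random.default_rng(0).random((8, 8))
rgb = demosaic(mosaic)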


>>>> And if the pattern stays the same, so can the algorithm.
>>>
>>> nope. if you have rgbw, the algorithm is *not* the same. one pixel has
>>> no colour filter. with cmyg, you need to convert to rgb at some point.
>>
>> Notice I said "And if the pattern stays the same"... rgbw does not
>> have the same pattern, so it's not the same algorithm.
>
> the algorithm can change even with the same pattern. there is no one
> single way to process bayer, which is why different raw converters
> produce different results.

The point is that the algorithm doesn't *have* to change if it's the
same pattern. Of course it *can* change, but it doesn't have to.
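
A sketch of what I mean, with made-up numbers: hold the 2x2 pattern
(and therefore the interpolation) fixed, and retuned filters only swap
the 3x3 weighting matrix applied after demosaicing:

import numpy as np

# hypothetical camera-RGB -> output-RGB matrices for two different
# filter chemistries on the *same* RGGB layout; the demosaic step
# that produced cam_rgb is identical in both cases
M_OLD_FILTERS = np.array([[ 1.6, -0.4, -0.2],
                          [-0.3,  1.5, -0.2],
                          [ 0.0, -0.6,  1.6]])
M_NEW_FILTERS = np.array([[ 1.9, -0.7, -0.2],
                          [-0.2,  1.4, -0.2],
                          [ 0.1, -0.8,  1.7]])

def weight_to_output(cam_rgb, M):
    # per-pixel 3-vector times 3x3 matrix; the only stage that changes
    return np.clip(cam_rgb @ M.T, 0.0, 1.0)

cam_rgb = np.random.default_rng(1).random((4, 4, 3))  # stand-in demosaiced data
out_old = weight_to_output(cam_rgb, M_OLD_FILTERS)
out_new = weight_to_output(cam_rgb, M_NEW_FILTERS)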

>>>> Only the weighting from sensor
>>>> color to final image has to change. However, when you start talking
>>>> about things like cmyg, rgbe, it's no longer a "bayer" sensor because
>>>> you don't have the bayer pattern.
>>>
>>> it's not the same pattern as what bayer originally picked but it's
>>> still considered a bayer sensor.
>>
>> Maybe by you, but it's not a Bayer sensor because it doesn't have some
>> of the most important properties that Bayer patented, i.e., taking
>> the physiology of the eye into consideration when determining the
>> spacing and density of the various colors.
>
> it's still called a bayer sensor. more useless nitpicking.

No, it's not called a Bayer sensor. If you're going to define a Bayer
sensor so loosely that any pattern of colors can be called a Bayer
sensor, you might as well call a Foveon sensor a Bayer sensor. That's
the problem with too loose a definition... no one knows what anyone is
talking about. We all know what a Bayer sensor is, and it's not one
that doesn't have the Bayer pattern.

>>>> There is no doubling of the population
>>>> of one color vs. the other two. Those other patterns don't try to
>>>> match the physiology of the human eye the way the Bayer sensor does.
>>>> With the cmyg pattern, all of the colors are on a rectangular grid.
>>>
>>> same grid as rggb, just cmyg.
>>
>> No, cmyg is nothing like the grid of rggb.
>
> no ****.

lol... First you say, and I quote, "same grid as rggb, just cmyg," and
then when I say cmyg is nothing like the grid of rggb you say "no
****." Well, at least now you're agreeing with me and disagreeing
with yourself. You need to get your story straight.
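
The geometry in question is easy to lay side by side. A toy comparison
(the cmyg tile below is one common complementary-filter arrangement;
real sensors vary):

import numpy as np

def tiled(tile_2x2, rows=4, cols=8):
    # repeat a 2x2 color-filter tile so the sampling lattice is visible
    return np.tile(np.array(tile_2x2), (rows // 2, cols // 2))

print(tiled([["R", "G"],
             ["G", "B"]]))   # Bayer: green lands on a quincunx (checkerboard)
print(tiled([["C", "Y"],
             ["G", "M"]]))   # CMYG: every color on its own rectangular grid

Green appears twice per Bayer tile and falls on a checkerboard; in the
cmyg tile every color appears once, on its own rectangular grid.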

>>>> You don't have the
>>>> doubling of the pixel density of a single color, which gives the
>>>> higher-definition luminance of the Bayer sensor.
>>>
>>> doubling green doesn't do what you think it does.
>>
>> I know what it does. So does Mr. Bayer. You apparently don't.
>>
>> If you actually believe a sensor with 4 colors, each on a rectangular
>> grid, is the same sensor pattern as one with 3 colors, 2 of which
>> are rectangular and 1 of which is quincunx, then there really is no
>> point continuing a discussion. You're obviously trolling when you
>> discount something so innately obvious.
>
> bayer doesn't work that way.

Wow, so you really don't know what a Bayer sensor is if you say that
"bayer doesn't work that way" to the basic definition of what it is.

>>>>>> BTW, the one that uses two shades of green is a Nikon patent. The
>>>>>> green pixels alternate between a darker shade and a lighter shade. The
>>>>>> lower light sensitivity of the darker-shaded green pixel means it can
>>>>>> capture image highlights better, allowing for greater dynamic range in
>>>>>> the final processed image. I don't think it is actually in a camera
>>>>>> available to the public yet.
>>>>>
>>>>> it's not nikon and yes it does exist. i don't recall who uses it, i
>>>>> think maybe sony but i'm not sure.
>>>>
>>>> Yes, it is Nikon. I dug up a link for you:
>>>> http://www.creativemayhem.com.au/?p=236
>>>
>>> thanks for the link. as i said i didn't recall who made it, but
>>> apparently it is nikon.
>>>
>>> nikon has a patent on a full colour pixel using dichroic mirrors that
>>> is probably impossible to manufacture at a competitive price.
>>
>> Yes, they have that also. Here's another link for you on that one:
>> http://en.wikipedia.org/wiki/File:Ni...roicPatent.png
>
> yep, and that's going to be a royal bitch to manufacture at a
> marketable price, let alone deal with noise.

It's probably much easier just to stick with 3 chips, one for each
color, than to make that thing.
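
Going back to the dual-green idea a few exchanges up, here's a
back-of-envelope sketch of how two green sensitivities could extend
dynamic range (all numbers invented for illustration, not Nikon's
actual scheme):

import numpy as np

FULL_WELL = 1.0    # normalized clipping point of either green pixel
DARK_GAIN = 0.25   # hypothetical sensitivity of the darker green

def merge_greens(scene):
    # prefer the sensitive green; where it clips, fall back to the
    # darker green, which doesn't clip until 1/DARK_GAIN = 4x brighter
    light = np.clip(scene, 0.0, FULL_WELL)
    dark = np.clip(scene * DARK_GAIN, 0.0, FULL_WELL)
    return np.where(light < FULL_WELL, light, dark / DARK_GAIN)

print(merge_greens(np.array([0.5, 2.0, 3.5])))  # [0.5 2.0 3.5]

In this toy setup the darker green buys about two extra stops of
highlight headroom.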

>>>>> unless you are a sensor designer or a raw software developer, the
>>>>> sensor and processing can be taken as a unit. *all* bayer sensors
>>>>> *will* have demosaicing done to provide an image to the user.
>>>>
>>>> It's not true that only a sensor designer or raw software developer
>>>> treats the sensor and processing separately. Anyone shooting raw
>>>> has already made the choice to treat them that way.
>>>
>>> however, they still process the raw and look at the final photo, likely
>>> a jpeg.
>>
>> Of course they process the raw. That's the point I was making. People
>> who shoot raw process the raw separately, out of the camera, and do
>> not treat the sensor and the processing as a unit. They treat them as
>> separately as they can possibly be.
>
> as i said before they do *not* look at the raw data directly. they look
> at the final image and it can be considered a unit, one which can be
> adjusted.

What you actually said is: "unless you are a sensor designer or a raw
software developer, the sensor and processing can be taken as a unit."

That is just plain mistaken, as anyone who shoots raw can tell you.
They don't take the sensor and the processing as a unit. Your
argument that you don't actually "look" at raw data directly is
specious. You don't actually "look" at jpeg data directly, or any
other image format either. If you tried, you would just see strings
of bits.
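
The same demonstration works for any encoded image; for instance
(file name hypothetical):

# the raw bytes of a jpeg are as opaque as the raw bytes of a NEF/CR2
with open("photo.jpg", "rb") as f:   # hypothetical file
    print(f.read(3))                 # b'\xff\xd8\xff' -- the JPEG SOI marker, not a picture

Either way, you only ever "see" the picture after a decoder has turned
those bits into pixels.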

Steve