Old October 16th 10, 11:49 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
Wolfgang Weisselberg
Default Sigma, tired of LYING about resolution, admit they needed more

nospam wrote:
wrote:


Changing the color filters doesn't mean the sensor is now doing
demosaicing. It's still done off the sensor.


so what? pointless nitpicking.


Maybe to you but not to anyone who shoots raw and processes off
camera. There are a lot of us who do that.


processing raw does not mean you know the chromaticity of the filters
on the sensor,


It does not mean you don't know.

nor do you actually examine the raw data directly.


It does not mean you don't examine it. In fact, plenty of RAW
converters display the raw sensor values under a pixel.

you are looking at the final result and tweaking some of the parameters.


Not in all cases. Not even in most cases.

the algorithm can change even with the same pattern. there is no one
single way to process bayer, which is why different raw converters
produce different results.


That's easily explained by different parameters.

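To make that point concrete, here is a minimal sketch (the matrices and raw values are made up, purely illustrative; no real converter's code): two converters applying different camera-RGB-to-output matrices to the same demosaiced data produce different results.

```python
# Hypothetical illustration: the same raw triple run through two
# different parameter sets (color matrices) yields different output.
raw = (0.40, 0.55, 0.30)  # made-up demosaiced camera-RGB values

def apply_matrix(m, px):
    """Multiply a 3x3 matrix by an RGB triple."""
    return tuple(sum(m[i][j] * px[j] for j in range(3)) for i in range(3))

# Two invented matrices standing in for two converters' parameters:
converter_a = [[1.8, -0.6, -0.2], [-0.3, 1.5, -0.2], [0.0, -0.4, 1.4]]
converter_b = [[1.6, -0.4, -0.2], [-0.2, 1.4, -0.2], [-0.1, -0.3, 1.4]]

print(apply_matrix(converter_a, raw))
print(apply_matrix(converter_b, raw))  # same raw data, different output
```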
Only the weighting from sensor
color to final image has to change. However, when you start talking
about things like cmyg, rgbe, it's no longer a "bayer" sensor because
you don't have the bayer pattern.


it's not the same pattern as what bayer originally picked but it's
still considered a bayer sensor.


Maybe by you, but it's not a bayer sensor because it doesn't have some
of the most important properties that bayer patented, i.e., taking
the physiology of the eye into consideration when determining the
spacing and density of the various colors.


it's still called a bayer sensor. more useless nitpicking.


By you, by maybe 3 more people and by nobody else.
Nitpicking like that matters for RAW conversion: decode a cmyg
sensor as rggb and you get really strange results. Of course
you'd rather not see any difference.

There is no doubling the population
of one color vs. the other two. Those other patterns don't try to
match the physiology of the human eye the way the bayer sensor does.
With the cmyg pattern, all of the colors are on a rectangular grid.


same grid as rggb, just cmyg.


No, cmyg is nothing like the grid of rggb.


no ****.


Nice of you to admit it.

You don't have the
doubling of the pixel density of a single color, which gives the
higher definition luminance of the bayer sensor.


doubling green doesn't do what you think it does.


Prove it.
Show URLs.

If you actually believe a sensor with 4 colors each with a rectangular
pattern is the same sensor pattern as one with 3 colors, 2 of which
are rectangular and 1 is quincunx, then there really is no point
continuing a discussion. You're obviously trolling when you discount
something so innately obvious.
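The density claim above is easy to verify in a few lines (a sketch, not production code): in the rggb tile, green occupies half of all photosites, laid out as a checkerboard, so the green channel is sampled at twice the density of red or blue.

```python
# Count green photosites in a small patch of a bayer rggb mosaic.
def bayer_filter(row, col):
    """Filter color at (row, col) in a repeating rggb tile."""
    return [["R", "G"], ["G", "B"]][row % 2][col % 2]

n = 4
greens = sum(bayer_filter(r, c) == "G"
             for r in range(n) for c in range(n))
print(greens, "of", n * n, "photosites are green")  # 8 of 16
```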


bayer doesn't work that way.


The way Bayer works is described in the patent.

BTW, the one that uses two shades of green is a Nikon patent. The
green pixels alternate between darker shade and lighter shade. The
lower light sensitivity of the darker shaded green pixel means it can
capture image highlights better, allowing for greater dynamic range in
the final processed image. I don't think it actually is in a camera
available to the public yet.


it's not nikon and yes it does exist. i don't recall who uses it, i
think maybe sony but i'm not sure.


Yes, it is Nikon. I dug up a link for you:
http://www.creativemayhem.com.au/?p=236


thanks for the link. as i said i didn't recall who made it, but
apparently it is nikon.


Sheesh, you don't even read your own quoted posts. "i don't recall
who *USES* it" (emphasis mine) is different from "i didn't recall
who made it".

And you DID write "it's not nikon".

You are wrong on trivially checkable facts about your own writing
--- how can we then believe a single word you say about more
complicated, intricate things?

Of course they process the raw. That's the point I was making. People
who shoot raw process the raw separately, out of the camera, and do
not treat the sensor and the processing as a unit. They treat them as
separately as they can possibly be.


as i said before they do *not* look at the raw data directly.


They also do not look at the JPEG directly. Nobody does (except
JPEG developers). Everyone looks at the computer's interpretation
of JPEG.

they look at the final image


In the end they usually do, however they first look at the
computer's interpretation of RAW.

and it can be considered a unit, one which can be
adjusted.


JPEG can also be adjusted. So what?

Fact is that RAW shooting separates the sensor from the processing.
The simple fact that there are several RAW processors to choose
from breaks any notion of sensor and processing as a unit.

-Wolfgang