A Photography forum: PhotoBanter.com » Digital Photography

Bayer Megapixels

#21 - scott - July 13th 04, 11:50 PM



"Brian C. Baird" wrote in message

In article kMUIc.80249$Oq2.41211@attbi_s52, says...
Let's look at this another way. Just like your computer monitor, where it
takes three dots to produce a color - could you imagine how much sharper it
would be, or how many fewer color dots on the screen it would take for the
same res, if each point could produce any color, even white?


Not much. Once you get past a certain point, your eye doesn't care
about the density. That's why you can get very good color from
magazines printed with four inks.


But the question is, what reduction in resolution would appear "similar" if
the printing process could use 16 million different inks?

I have 1024x768 pixels on my monitor, and each pixel can display one of 16
million colours. Another way to look at it is that I have 3072x768 sub-pixels,
where each sub-pixel can do either red, green or blue. Far enough away, your
eye can't tell the difference and these two situations actually look the same.
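
To put numbers on that equivalence, a quick Python sketch (the 1024x768 and
three-sub-pixels-per-pixel figures come straight from the paragraph above):

pixels = 1024 * 768              # 786,432 addressable full-colour pixels
subpixels = pixels * 3           # 2,359,296 single-colour dots on the glass
# the same grid seen as single-colour dots: 3 * 1024 = 3072 across, 768 down
print(subpixels == 3072 * 768)   # True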

So if a camera can see any color from one point, then it would take
fewer pixels compared to a multi-filtered system for a given
sharpness.


Apples and oranges. Detail and color are two separate things.
Your eye and brain don't put a high value on color accuracy. Your
eyes have far more rods than cones, and are better at resolving
detail than color.

So, while I'm sure a not-yet-invented 6 MP, 18 Megasensor chip MIGHT
have better detail and color than a 6 MP Bayer sensor, I wouldn't
bet real money on it. I sure as hell wouldn't put money on a 3.4
MP sensor out-resolving a 6 MP Bayer sensor.


Would you rather have a 3.4 MP LCD monitor where each pixel could do any
colour (i.e. was made up of three sub-pixels), or a 6 mega-subpixel monitor
(effectively 2 megapixels)? I know which I would rather have.

Simple logic.


Flawed, due to a lack of knowledge of the subject, I'd say.



#22 - scott - July 13th 04, 11:53 PM



"Crownfield" wrote in message

Arte Phacting wrote:

Make way - here comes Artie

But first, the database adage: rubbish in, rubbish out

Hokay dokay - how does that affect the topic and this thread in
particular?

I'll try to explain

Suppose pixel count is just a partitioning. A set of horizontal
and vertical markers with no mass and no area. In other words a
notional addressing system just like those graphs peeps do at
school.

The addressing system requires data - usually in the form of RGB
values - the bigger the number the bigger the photon count.

That partitioning system proportions data from photosites (I am
going to use easy numbers for this example coz I can't be assed
with awkward ones)

A 3.4M sensor with 3 photodetectors per site gives 3.4M times 3 =
(erm)
10.2M data values


and a single super duper Foveon prototype detector (1 x 1 x 6,000)
with 1 pixel which samples 6,000 different colors, gives 6 MP?

right.

photograph a newspaper page with that imager,
and let me know if you can read the type.


No, there is an upper limit to the amount of colour data you can sample at
each point. Last I heard, the eye could distinguish between about 16
million colours or so.


#24 - Arte Phacting - July 14th 04, 12:20 AM

ah - I see now

So there will be no large sensors at all?

Artie

"DavidG" wrote in message
m...
"Arte Phacting" wrote in message

...
I am trying to give the view that a sensor has a job to do - it is a

data
accumulater-sensing device That's all, no more & no less


Arte,

Your concept of tying data quantity to image quality is a bit naive.
You are assuming that just because data is output from a sensor (or
from anywhere, for that matter), the data adds to the overall
information. Information in a data-theoretic sense is more than just
the quantity of numbers that you possess. For example, the character
sequence "eeeeeeeeee" contains 10 characters but possesses less
information than "mkthjnbfpo" because the characters are predictable.
You could write that same string in a new notation as 10e, for
example. Similar data correlation exists in computer files. Perform
lossless compression on one file using something like LZW compression
and you might see no change. On another file, you might experience a
2x reduction in file size. Generally, correlation exists when knowing
some of the characters in a sequence allows you to narrow down your
choices for the remaining. Correlation always reduces the total
information content compared with a truly random sequence.
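
The compression point is easy to check. Here is a small Python sketch; it
uses DEFLATE via the standard zlib module rather than LZW (which isn't in
the standard library), but the correlation argument is the same:

import os
import zlib

predictable = b"e" * 10_000             # the "eeeeeeeeee" idea, just longer
unpredictable = os.urandom(10_000)      # stand-in for a truly random sequence

print(len(zlib.compress(predictable)))    # collapses to a few dozen bytes
print(len(zlib.compress(unpredictable)))  # stays near 10,000 bytes - no real saving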

Take your Foveon sensor image, with pixels described initially as RGB
triplets, and recast your representation as LAB (one channel
representing luminance and two channels representing color content).
For the overwhelmingly vast majority of images found in the natural
world, you'd find that the luminance data varied rapidly and with fine
detail but that the color channels varied much more slowly. This means
that the color values associated with adjacent pixels are highly
correlated and the overall information content is much less than
expected from just the total number of initial RGB values.
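
You can measure this on your own pictures. A rough sketch, assuming `rgb`
is any photograph loaded as an H x W x 3 float array in [0, 1]; it uses the
simpler YCbCr split rather than LAB, but it separates luminance from colour
in the same spirit, and on a natural image the luminance figure should come
out well above the two chroma figures:

import numpy as np

def channel_roughness(rgb):
    # mean pixel-to-pixel step in luminance vs. the two chroma channels
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (Rec. 601 weights)
    cb = 0.564 * (b - y)                     # blue-difference chroma
    cr = 0.713 * (r - y)                     # red-difference chroma
    step = lambda ch: np.abs(np.diff(ch, axis=1)).mean()
    return step(y), step(cb), step(cr)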

Bayer demosaicing algorithms take advantage of this property. It is
important to realize that the R, G and B filters over the pixels are
not brick-wall, narrow band filters but actually admit a wide spectrum
(with greatest response in the R, G or B). So, a red-filtered pixel
can actually have a few percent of green information in it, as can a
blue-filtered pixel. Now, as the image is reconstructed, the algorithm
can rely on its assumption of slowly varying color to first guess at
the color of a pixel based on neighboring pixels and then use the
small amount of luminance information in even the non-green pixels to
guess at what the luminance signal would have been even at
non-green-filter locations. This is grossly simplified, but makes the
point that the assumption of slowly varying color is a key input to
the demosaicing process, allowing the inherently mixed color and
luminance information in the raw Bayer data to be picked apart.
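
To see roughly what the reconstruction step looks like in code, here is a
minimal sketch of the crudest possible (bilinear) interpolation. It is not
what any camera maker actually ships, it ignores the colour-ratio tricks
described above, and it assumes an RGGB mosaic stored as a 2-D numpy array:

import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    # raw: H x W Bayer mosaic in [0, 1], RGGB pattern assumed
    h, w = raw.shape
    mask = np.zeros((h, w, 3), bool)
    mask[0::2, 0::2, 0] = True   # red sites
    mask[0::2, 1::2, 1] = True   # green sites on red rows
    mask[1::2, 0::2, 1] = True   # green sites on blue rows
    mask[1::2, 1::2, 2] = True   # blue sites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        known  = np.where(mask[..., c], raw, 0.0)
        weight = mask[..., c].astype(float)
        # each missing sample becomes a weighted average of its nearest
        # same-colour neighbours (a normalised convolution)
        rgb[..., c] = (convolve2d(known, kernel, mode="same")
                       / np.maximum(convolve2d(weight, kernel, mode="same"), 1e-6))
    return rgb

The normalised convolution is just a compact way of averaging whichever
same-colour neighbours exist at each site; the smarter algorithms David
alludes to replace that plain average with guesses steered by the slowly
varying colour assumption.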

David



#26 - scott - July 14th 04, 12:52 AM



"Crownfield" wrote in message

scott wrote:

"Crownfield" wrote in message

Arte Phacting wrote:

Make way - here comes Artie

But first, the database adage: rubbish in, rubbish out

Hokay dokay - how does that affect the topic and this thread in
particular?

I'll try to explain

Suppose pixel count is just a partitioning. A set of horizontal
and vertical markers with no mass and no area. In other words a
notional addressing system just like those graphs peeps do at
school.

The addressing system requires data - usually in the form of RGB
values - the bigger the number the bigger the photon count.

That partitioning system proportions data from photosites (I am
going to use easy numbers for this example coz I can't be assed
with awkward ones)

A 3.4M sensor with 3 photodetectors per site gives 3.4M times 3 =
(erm)
10.2M data values

and a single super duper Foveon prototype detector (1 x 1 x 6,000)
with 1 pixel which samples 6,000 different colors, gives 6 MP?

right.

photograph a newspaper page with that imager,
and let me know if you can read the type.


No, there is an upper limit to the amount of colour data you can
sample at each point. Last I heard, the eye could distinguish
between about 16 million colours or so.


whoooooosssssshhhh
the sound of the point going overhead...


Your point seemed to be that you didn't think the amount of data per pixel
could be used to increase the quality of the image. *Up to a point* it can
be, and that point is certainly greater than 8 or maybe even 24 bits per
point, and certainly less than 6000 :-)
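
For the bit-count arithmetic behind those numbers, a quick Python check:

import math

print(2 ** 8)                       # 256 levels - what 8 bits per point buys you
print(2 ** 24)                      # 16,777,216 - the "16 million colours" figure
print(math.ceil(math.log2(6000)))   # 13 - enough bits to label 6,000 distinct colours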


#28 - fs - July 14th 04, 12:59 AM

Give me the 3.4 MP monitor.
You're not too smart, are you?


"scott" wrote in message
...


"Brian C. Baird" wrote in message

In article kMUIc.80249$Oq2.41211@attbi_s52, says...
Let's look at this another way. Just like your computer monitor, where it
takes three dots to produce a color - could you imagine how much sharper it
would be, or how many fewer color dots on the screen it would take for the
same res, if each point could produce any color, even white?


Not much. Once you get past a certain point, your eye doesn't care
about the density. That's why you can get very good color from
magazines printed with four inks.


But the question is, what reduction in resolution would appear "similar" if
the printing process could use 16 million different inks?

I have 1024x768 pixels on my monitor, and each pixel can display one of 16
million colours. Another way to look at it is that I have 3072x768 sub-pixels,
where each sub-pixel can do either red, green or blue. Far enough away, your
eye can't tell the difference and these two situations actually look the same.

So if a camera can see any color from one point, then it would take
fewer pixels compared to a multi-filtered system for a given
sharpness.


Apples and oranges. Detail and color are two separate things.
Your eye and brain don't put a high value on color accuracy. Your
eyes have far more rods than cones, and are better at resolving
detail than color.

So, while I'm sure a not-yet-invented 6 MP, 18 Megasensor chip MIGHT
have better detail and color than a 6 MP Bayer sensor, I wouldn't
bet real money on it. I sure as hell wouldn't put money on a 3.4
MP sensor out-resolving a 6 MP Bayer sensor.


Would you rather have a 3.4 MP LCD monitor where each pixel could do any
colour (i.e. was made up of three sub-pixels), or a 6 mega-subpixel monitor
(effectively 2 megapixels)? I know which I would rather have.

Simple logic.


Flawed, due to a lack of knowledge of the subject, I'd say.





#29 - Crownfield - July 14th 04, 01:53 AM

scott wrote:

"Crownfield" wrote in message

scott wrote:

"Crownfield" wrote in message

Arte Phacting wrote:

Make way - here comes Artie

But first, the database adage: rubbish in, rubbish out

Hokay dokay - how does that affect the topic and this thread in
particular?

I'll try to explain

Suppose pixel count is just a partitioning. A set of horizontal
and vertical markers with no mass and no area. In other words a
notional addressing system just like those graphs peeps do at
school.

The addressing system requires data - usually in the form of RGB
values - the bigger the number the bigger the photon count.

That partitioning system proportions data from photosites (I am
going to use easy numbers for this example coz I can't be assed
with awkward ones)

A 3.4M sensor with 3 photodetectors per site gives 3.4M times 3 =
(erm)
10.2M data values

and a single super duper Foveon prototype detector (1 x 1 x 6,000)
with 1 pixel which samples 6,000 different colors, gives 6 MP?

right.

photograph a newspaper page with that imager,
and let me know if you can read the type.

No, there is an upper limit to the amount of colour data you can
sample at each point. Last I heard, the eye could distinguish
between about 16 million colours or so.


whoooooosssssshhhh
the sound of the point going overhead...


Your point seemed to be that you didn't think the amount of data per pixel
could be used to increase the quality of the image. *Up to a point* it can
be, and that point is certainly greater than 8 or maybe even 24 bits per
point, and certainly less than 6000 :-)


if depth can increase the resolution of a picture, then

and a single super duper Foveon prototype detector (1 x 1 x 6,000)
with 1 pixel which samples 6,000 different colors, gives 6 MP?


the example and the concept are obvious.

extra co-located sensors for color data
may increase the color accuracy.
they will not increase the image size or resolution.

1 x 1 x 6,000 = 1 pixel.
the image is 1x1 pixel.
it is a very color-accurate pixel,
but it is only one pixel.
#30 - David Dyer-Bennet - July 14th 04, 07:28 AM

(Georgette Preddy) writes:

"Arte Phacting" wrote in message ...

I am trying to figure how a 3.4M sensor can render an image comparable with
a 6M or 8M sensor - it's just gorra be connected with the quantity of data
collected at the sensor.


It's extremely simple:

3.4M full color pixels carry more optical info than 6M monochrome pixels.


Bayer collects more spatial information at the expense of some color
information. Foveon collects fully detailed color information at the
expense of some spatial information.

The human eye is more sensitive to spatial information than to color
(which is best demonstrated by appropriate visual acuity tests, but is
also suggested by the count of rods vs. cones in the retina).

So a Foveon of x sites (3x separate sensors stacked in triplets) is
not as much better than a Bayer of x sites (x separate sensors, each
filtered for one color) as you might expect.

This is only true if the standard of quality is "makes pictures that
look good to human beings", of course. Perhaps some other race with
different visual equipment would find things very different.
Currently, even though I'm a long-time science fiction fan and
SETI@Home participant, I'm not yet worrying about optimizing my
pictures for these hypothetical other creatures. Until we meet them
and know how *their* eyes work, there's really not much point to it.
--
David Dyer-Bennet, , http://www.dd-b.net/dd-b/
RKBA: http://noguns-nomoney.com/ http://www.dd-b.net/carry/
Pics: http://dd-b.lighthunters.net/ http://www.dd-b.net/dd-b/SnapshotAlbum/
Dragaera/Steven Brust: http://dragaera.info/
 



