#21
Bayer Megapixels
"Brian C. Baird" wrote in message
In article kMUIc.80249$Oq2.41211@attbi_s52, says...

Let's look at this another way. Just like your computer monitor, it takes three dots to produce a colour. Could you imagine how much sharper it would be, or how many fewer colour dots the screen would need for the same resolution, if each point could produce any colour, even white?

Not much. Past a certain point, your eye doesn't care about the density. That's why you can get very good color from magazines printed with four inks.

But the question is, what reduction in resolution would appear "similar" if the printing process could use 16 million different inks? I have 1024x768 pixels on my monitor, and each pixel can display one of 16 million colours. Another way to look at it is that I have 3072x768 pixels, where each pixel can do either red, green or blue. Far enough away, your eye can't tell the difference, and these two situations actually look the same.

So if a camera can see any colour from one point, then it would take fewer pixels than a multi-filtered system for a given sharpness.

Apples and oranges. Detail and color are two separate things. Your eye and brain don't put a high value on color accuracy. Your eyes have far more rods than cones, and better power to resolve detail than color. So, while I'm sure a not-yet-invented 6 MP, 18-megasensor chip MIGHT have better detail and color than a 6 MP Bayer sensor, I wouldn't bet real money on it. I sure as hell wouldn't put money on a 3.4 MP sensor out-resolving a 6 MP Bayer sensor.

Would you rather have a 3.4 MP LCD monitor where each pixel could do any colour (i.e. was made up of three sub-pixels), or a 6 mega-subpixel monitor (an effective 2 megapixels)? I know which I would rather have. Simple logic.

Flawed, due to a lack of knowledge of the subject, I'd say.
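The monitor arithmetic in this exchange is easy to verify; a quick sketch (figures from the post, with 3072x768 as the single-colour subpixel grid of a 1024x768 RGB panel):

```python
# Each full-colour pixel on a 1024x768 monitor is three single-colour subpixels.
full_colour = (1024, 768)
subpixels_per_pixel = 3

# Viewed as a grid of single-colour dots, the same panel is 3072x768.
single_colour = (full_colour[0] * subpixels_per_pixel, full_colour[1])
print(single_colour)  # (3072, 768)

# The total number of light-emitting elements is identical either way.
total_subpixels = full_colour[0] * full_colour[1] * subpixels_per_pixel
assert total_subpixels == single_colour[0] * single_colour[1]
print(total_subpixels)  # 2359296
```

Which description "counts" is exactly the point in dispute: same hardware, two pixel counts.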
#22
"Crownfield" wrote in message
Arte Phacting wrote:

Make way - here comes Artie. But first, the database adage: rubbish in, rubbish out. Hokay dokay - how does that affect the topic and this thread in particular? I'll try to explain. Suppose pixel count is just a partitioning: a set of horizontal and vertical markers with no mass and no area. In other words, a notional addressing system, just like those graphs peeps do at school. The addressing system requires data - usually in the form of RGB values - and the bigger the number, the bigger the photon count. That partitioning system proportions data from photosites. (I am going to use easy numbers for this example coz I can't be assed with awkward ones.) A 3.4M sensor with 3 photodetectors per site gives 3.4M times 3 = (erm) 10.2M data values.

And a single super duper Foveon prototype detector (1x1, 6,000), with 1 pixel which samples 6,000 different colors, gives 6 MP? Right. Photograph a newspaper page with that imager, and let me know if you can read the type.

No, there is an upper limit to the amount of colour data you can sample at each point. Last I heard, the eye could distinguish about 16 million colours or so.
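The sample-count arithmetic here can be written out explicitly; a sketch with the post's round numbers (raw sample totals only - by themselves they say nothing about resolved detail):

```python
# Foveon-style: every photosite samples all three colours.
foveon_sites = 3_400_000
foveon_samples = foveon_sites * 3  # 10,200,000 colour samples

# Bayer-style: every photosite samples exactly one colour.
bayer_sites = 6_000_000
bayer_samples = bayer_sites * 1    # 6,000,000 colour samples

print(foveon_samples, bayer_samples)  # 10200000 6000000
# More raw samples, but from fewer distinct spatial locations:
print(foveon_sites < bayer_sites)     # True
```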
#24
ah - I see now
So there will be no large sensors at all?

Artie

"DavidG" wrote in message m...
"Arte Phacting" wrote in message ...

I am trying to give the view that a sensor has a job to do - it is a data accumulator-sensing device. That's all, no more and no less.

Arte,

Your concept of tying data quantity to image quality is a bit naive. You are assuming that just because data is output from a sensor (or from anywhere, for that matter), the data adds to the overall information. Information, in the data-theoretic sense, is more than just the quantity of numbers you possess. For example, the character sequence "eeeeeeeeee" contains 10 characters but possesses less information than "mkthjnbfpo", because its characters are predictable: you could write the same string in a new notation as "10e". Similar data correlation exists in computer files. Perform lossless compression on one file using something like LZW compression and you might see no change; on another file, you might see a 2x reduction in file size. Generally, correlation exists when knowing some of the characters in a sequence allows you to narrow down your choices for the remaining ones. Correlation always reduces the total information content compared with a truly random sequence.

Take your Foveon sensor image, with pixels described initially as RGB triplets, and recast the representation as LAB (one channel representing luminance and two channels representing colour content). For the overwhelming majority of images found in the natural world, you'd find that the luminance data varies rapidly and with fine detail, but that the colour channels vary much more slowly. This means that the colour values of adjacent pixels are highly correlated, and the overall information content is much less than the total number of initial RGB values would suggest. Bayer demosaicing algorithms take advantage of this property.
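The redundancy argument is easy to demonstrate with any lossless compressor. A sketch using Python's standard zlib module (DEFLATE rather than the LZW David mentions, but the principle is the same):

```python
import os
import zlib

# A fully predictable sequence, like "eeeeeeeeee", compresses almost to nothing...
predictable = b"e" * 10_000
# ...while an unpredictable one barely compresses at all.
unpredictable = os.urandom(10_000)

small = len(zlib.compress(predictable))
large = len(zlib.compress(unpredictable))
print(small, large)  # roughly a few dozen bytes vs ~10,000 bytes

# Same quantity of data in, very different information content.
assert small < 100 < large
```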
It is important to realize that the R, G and B filters over the pixels are not brick-wall, narrow-band filters but actually admit a wide spectrum (with greatest response in the red, green or blue region). So a red-filtered pixel can actually carry a few percent of green information, as can a blue-filtered pixel. Now, as the image is reconstructed, the algorithm can rely on its assumption of slowly varying colour to first guess at the colour of a pixel from its neighbours, and then use the small amount of luminance information in even the non-green pixels to estimate what the luminance signal would have been at non-green-filter locations. This is grossly simplified, but it makes the point that the assumption of slowly varying colour is a key input to the demosaicing process, allowing the inherently mixed colour and luminance information in the raw Bayer data to be picked apart.

David
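The neighbour-averaging idea David describes can be sketched in a few lines. This is a toy bilinear interpolation of the green channel only, under an assumed RGGB layout; real demosaicing algorithms are far more sophisticated, and the function name here is illustrative, not from any library:

```python
def interpolate_green(mosaic):
    """Estimate the missing green values of an RGGB Bayer mosaic by
    averaging the green neighbours - the 'slowly varying colour'
    assumption in its crudest (bilinear) form. `mosaic` is a list of
    rows of raw sensor values."""
    h, w = len(mosaic), len(mosaic[0])
    green = [row[:] for row in mosaic]
    for y in range(h):
        for x in range(w):
            # In an RGGB layout, green filters sit where (x + y) is odd;
            # red and blue sites, where (x + y) is even, lack green.
            if (x + y) % 2 == 0:
                neighbours = [mosaic[ny][nx]
                              for ny, nx in ((y - 1, x), (y + 1, x),
                                             (y, x - 1), (y, x + 1))
                              if 0 <= ny < h and 0 <= nx < w]
                green[y][x] = sum(neighbours) / len(neighbours)
    return green

# A flat grey patch: interpolation reproduces the constant value exactly.
flat = [[100] * 4 for _ in range(4)]
out = interpolate_green(flat)
assert all(abs(v - 100) < 1e-9 for row in out for v in row)
```

The averaging step is where the "slowly varying colour" assumption does its work: it is exact on flat regions and only approximate across sharp colour edges.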
#26
"Crownfield" wrote in message
scott wrote:

[snip]

No, there is an upper limit to the amount of colour data you can sample at each point. Last I heard, the eye could distinguish about 16 million colours or so.

whoooooosssssshhhh - the sound of the point going overhead...

Your point seemed to be that you didn't think the amount of data per pixel could be used to increase the quality of the image. *Up to a point* it can be, and that point is certainly greater than 8, or maybe even 24, bits per point, and certainly less than 6,000 :-)
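The "8 or maybe even 24 bits per point" figure lines up with the 16-million-colour estimate; a quick arithmetic check:

```python
import math

# 8 bits per channel across three channels gives the familiar figure:
colours_24bit = 2 ** 24
print(colours_24bit)  # 16777216 - the "16 million colours"

# Bits needed to index roughly 16 million distinguishable colours:
bits_needed = math.ceil(math.log2(16_000_000))
print(bits_needed)  # 24
```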
#28
Give me the 3.4 MP monitor
You're not too smart, are you?

"scott" wrote in message ...

[snip]

Would you rather have a 3.4 MP LCD monitor where each pixel could do any colour (i.e. was made up of three sub-pixels), or a 6 mega-subpixel monitor (an effective 2 megapixels)? I know which I would rather have. Simple logic.

Flawed, due to a lack of knowledge of the subject, I'd say.
#29
scott wrote:
"Crownfield" wrote in message

[snip]

Your point seemed to be that you didn't think the amount of data per pixel could be used to increase the quality of the image. *Up to a point* it can be, and that point is certainly greater than 8, or maybe even 24, bits per point, and certainly less than 6,000 :-)

If depth could increase the resolution of a picture, then a single super duper Foveon prototype detector (1x1, 6,000), with 1 pixel which samples 6,000 different colors, would give 6 MP? The example and the concept are obvious: extra colocated sensors for colour data may increase the colour accuracy, but they will not increase the image size or resolution. 1 x 1 x 6,000 = 1 pixel; the image is 1x1 pixel. It is a very colour-accurate pixel, but it is only one pixel.
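Crownfield's point, that per-pixel depth and spatial resolution are separate axes, can be put in terms of array shapes; a sketch using the thread's numbers (the shapes and helper function are illustrative):

```python
# Shape convention: (height, width, samples_per_site). Depth is the third axis.
deep_single_pixel = (1, 1, 6000)    # one photosite, 6,000 colour samples
bayer_six_mp = (2000, 3000, 1)      # six million photosites, one sample each

def spatial_pixels(shape):
    """Image size is height x width - the per-site sample depth never enters."""
    h, w, _depth = shape
    return h * w

print(spatial_pixels(deep_single_pixel))  # 1: a very colour-accurate single pixel
print(spatial_pixels(bayer_six_mp))       # 6000000
```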