PhotoBanter.com forum » Digital Photography

Bayer Megapixels

#1 - July 13th 04, 05:06 PM - Dave Martindale

"Arte Phacting" writes:

A 3.4M sensor with 3 photodetectors per site gives 3.4M times 3 = (erm)
10.2M data values


A 6M with 1 photodetector per site gives (I can do this!) 6M times 1 = 6M
data values


I am trying to figure how a 3.4M sensor can render an image comparable with
a 6M or 8M sensor - its just gorra be connected with the quantity of data
collected at the sensor.


What say y'all?


It has been explained repeatedly. Are you not reading?

By objective measurements, the 3.4 MP sensor does *not* render an image
comparable to a 6 MP or 8 MP sensor - the actual resolution of the 3.4 MP
sensor is on par with other 3 MP cameras, and clearly inferior to that
of 6 MP and higher cameras.

The 3.4 MP Sigma cameras only *appear* to have image sharpness
comparable to a 6 MP camera because the anti-aliasing filter was omitted
from the Sigma camera. This gives images that look like they have more
fine detail, but that extra detail is not correctly reproducing
information from the original scene. It has nothing to do with making 3
measurements per pixel. Take any 6 MP camera and remove the AA filter,
and you'll get extra false detail.

Measuring 3 colours at each location instead of 1 *does* improve chroma
resolution, but with normal image sizes for screen display or printing
the human eye can't see the difference. It also avoids demosaicing
mistakes that Bayer images occasionally suffer from. But it doesn't
help resolve fine detail at all (unless the fine detail is
carefully-constructed colour-only worst-case resolution test charts).

Dave
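
A minimal sketch of the demosaicing Dave mentions, in Python, assuming a
toy RGGB-style mosaic (illustrative only, not any camera's actual
pipeline): half the sites measure green, and the rest must be estimated
from neighbours, which is exactly where false detail and demosaicing
mistakes can creep in.

    import numpy as np

    def interpolate_green(mosaic):
        """Estimate a full green channel from a Bayer-style mosaic.

        mosaic: 2-D array of raw sensor values; green is measured
        where (row + col) is odd, as on an RGGB grid (toy assumption).
        """
        h, w = mosaic.shape
        green = np.zeros((h, w), dtype=float)
        for y in range(h):
            for x in range(w):
                if (y + x) % 2 == 1:           # measured green site
                    green[y, x] = mosaic[y, x]
                else:                          # estimate from neighbours
                    vals = [mosaic[ny, nx]
                            for ny, nx in ((y - 1, x), (y + 1, x),
                                           (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w]
                    green[y, x] = sum(vals) / len(vals)
        return green

Red and blue are sparser still (one site in four each), which is why
chroma resolution is lower than luma resolution on a Bayer sensor, as
Dave says.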
#3 - July 13th 04, 05:41 PM - Arte Phacting

Make way - here comes Artie

But first, the database adage: rubbish in, rubbish out

Hokay dokay - how does that affect the topic and this thread in particular?

I'll try to explain

Suppose pixel count is just a partitioning: a set of horizontal and
vertical markers with no mass and no area. In other words, a notional
addressing system, just like those graphs peeps do at school.

The addressing system requires data - usually in the form of RGB values -
the bigger the number, the bigger the photon count.

That partitioning system proportions data from photosites. (I am going to
use easy numbers for this example coz I can't be assed with awkward ones.)

A 3.4M sensor with 3 photodetectors per site gives 3.4M times 3 = (erm)
10.2M data values

A 6M with 1 photodetector per site gives (I can do this!) 6M times 1 = 6M
data values

I am trying to figure how a 3.4M sensor can render an image comparable with
a 6M or 8M sensor - it's just gorra be connected with the quantity of data
collected at the sensor.

What say y'all?

Artie
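
Artie's arithmetic, written out as a quick Python check (the figures are
just the round numbers from the post):

    # Raw sample counts from the example above (round figures only).
    foveon_sites = 3.4e6       # stacked sensor, 3 samples per site
    bayer_sites = 6.0e6        # mosaic sensor, 1 sample per site

    print(foveon_sites * 3)    # 10200000.0 raw values
    print(bayer_sites * 1)     # 6000000.0 raw values

    # The site counts (3.4M vs 6M) are what set spatial resolution;
    # the extra per-site samples add colour data at the same
    # locations, not at new locations.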

"Georgette Preddy" wrote in message
om...
wrote in message

. ..
In message ,
(Georgette Preddy) wrote:

wrote in message

. ..
In message ,
(Georgette Preddy) wrote:

http://www.pbase.com/image/23420444

http://www.pbase.com/image/31205398

So according to you, a Red sensor can "spatially witness" Blue or
Green optical features.

How interesting. Not.


That's not what I wrote at all.


Yes it is, unfortunately. You think a Bayer sensor can "spatially
witness" red features with green sensors, and blue features with red
sensors, and green features with blur I mean blue sensors.

Obviously that's bunk, do I really need to explain why?

You learn more about the image by having more sensors in unique places
in the focal plane, even if they each record only one color, because
their main job is to perceive luminance at a high resolution, and color
is of a secondary resolution priority.


A green color filter doesn't pass a blue feature's luminance. That's
the whole point.

Jeeze.

Pity that your mental facilities are too weak to visualize how full-RGB
samples are totally unnecessary for photography.


Right.



#4 - July 13th 04, 06:14 PM - fs

Let's look at this another way. Just like your computer monitor, it takes
three dots to produce a color. Imagine how much sharper it would be, or
how many fewer color dots the screen would need for the same resolution,
if each point could produce any color, even white.
So if a camera can see any color from one point, then it would take fewer
pixels than a multi-filtered system for a given sharpness.
Simple logic.
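
The monitor analogy in rough numbers (a hypothetical 1024x768 display,
purely for illustration):

    # An RGB-triad display needs three single-colour dots per
    # full-colour point (display size is a made-up example).
    points = 1024 * 768             # full-colour points wanted
    triad_dots = points * 3         # dots a conventional display needs
    print(points)                   # 786432
    print(triad_dots)               # 2359296
    # A display whose every dot could show any colour would need a
    # third as many dots for the same resolution - fs's point.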





"Brian C. Baird" wrote in message
.. .
In article ,
says...
A 3.4M sensor with 3 photodetectors per site gives 3.4M times 3 = (erm)
10.2M data values


No. The 10.2M values are interpolated to form 3.4M usable values.

A 6M with 1 photodetector per site gives (I can do this!) 6M times 1 =

6M
data values


Not quite. Due to the mosaic process, you lose a few pixels to give you
a clean edge. That's why you read about "effective megapixels" being
slightly smaller than the total number of photodetectors.

I am trying to figure how a 3.4M sensor can render an image comparable

with
a 6M or 8M sensor - its just gorra be connected with the quantity of

data
collected at the sensor.


Quick answer: it can't.

What say y'all?

Artie


Go read a book, Artie.
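
Brian's "effective megapixels" point, sketched with made-up numbers (the
border trimmed by demosaicing varies by camera; two sites is just an
assumption here):

    # Effective vs total pixels (all figures hypothetical).
    total_w, total_h = 3000, 2000       # 6,000,000 photodetectors
    border = 2                          # edge sites lost to demosaicing
    effective = (total_w - 2 * border) * (total_h - 2 * border)
    print(total_w * total_h)            # 6000000 total
    print(effective)                    # 5980016 effective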



#5 - July 13th 04, 06:22 PM - David J. Littleboy


"Arte Phacting" wrote:

I beg to differ Dave :-)


Then you are wrong.

rendering an image is based on data - the data picked up by a sensor


Only when that data is from unique points in space.

Consider a 3.4 MP camera that divides the spectrum into 9 narrow bands by
taking 9 readings at each point. Would such a camera have three times the
resolution of the SD9? Of course not. It would have completely meaningless
spectral reproduction that no human could see as being in any way different.
It would be able to do a few cutesy tricks, like emulating B&W film response
curves better.

> The more data, the greater integrity and fidelity of the data, the more
> output devices and DSP have to work on.


Data costs. At the pixel, you have to store three charges instead of one.
That means higher noise/lower dynamic range. You need three times as many
A/D conversions. More battery drain. You need 3 times as much storage for
RAW images.

For the same amount of data, Bayer cameras provide three times the image
quality. The Foveon concept is incredibly bad engineering.

David J. Littleboy
Tokyo, Japan
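
David's data-cost point in back-of-envelope form, assuming 12-bit raw
samples (a common bit depth, not a specific camera's):

    # Raw payload per frame at an assumed 12 bits per sample.
    BITS = 12
    foveon_bytes = 3.4e6 * 3 * BITS / 8    # 3 samples per site
    bayer_bytes = 6.0e6 * 1 * BITS / 8     # 1 sample per site
    print(foveon_bytes / 2**20)            # ~14.6 MiB per frame
    print(bayer_bytes / 2**20)             # ~8.6 MiB per frame
    # Three samples per site also means three times the A/D
    # conversions and per-site charge storage.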



#8 - July 13th 04, 07:08 PM - Arte Phacting

I beg to differ Dave :-)

rendering an image is based on data - the data picked up by a sensor

The more data, and the greater the integrity and fidelity of the data, the
more output devices and DSP have to work on.

Imagine a 2-sensel sensor. Only so much can be done with the data whether
it is a stacked or one-layer sensor.

Quantity and quality count.

The more of a beasting the data experiences, the more it's gonna shift in
terms of integrity and fidelity to the original image.

Artie
"Dave Martindale" wrote in message
...
"Arte Phacting" writes:

A 3.4M sensor with 3 photodetectors per site gives 3.4M times 3 = (erm)
10.2M data values


A 6M with 1 photodetector per site gives (I can do this!) 6M times 1 = 6M
data values


I am trying to figure how a 3.4M sensor can render an image comparable

with
a 6M or 8M sensor - its just gorra be connected with the quantity of data
collected at the sensor.


What say y'all?


It has been explained repeatedly. Are you not reading?

By objective measurements, the 3.4 MP sensor does *not* render an image
comparable to a 6 MP or 8 MP sensor - the actual resolution of the 3.4 M
sensor is on par with other 3 MP cameras, and clearly inferior to that
of 6 MP and higher cameras.

The 3.4 MP Sigma cameras only *appear* to have image sharpness
comparable to a 6 MP camera because the anti-aliasing filter was omitted
from the Sigma camera. This gives images that look like they have more
fine detail, but that extra detail is not correctly reproducing
information from the original scene. It has nothing to do with making 3
measurements per pixel. Take any 6 MP camera and remove the AA filter,
and you'll get extra false detail.

Measuring 3 colours at each location instead of 1 *does* improve chroma
resolution, but with normal image sizes for screen display or printing
the human eye can't see the difference. It also avoids demosaicing
mistakes that Bayer images occasionally suffer from. But it doesn't
help resolve fine detail at all (unless the fine detail is
carefully-constructed colour-only worst-case resolution test charts).

Dave



#9 - July 13th 04, 07:15 PM - Arte Phacting

cheers Brian - I love u 2 blush

Nah then, I think the diary read for this week will be: rubbish in, rubbish
out

I think different issues are being melded together without clarity of
vision.

I am trying to give the view that a sensor has a job to do - it is a data
accumulator, a sensing device. That's all, no more & no less.

Image processing and image rendering are separate parts of the process.

To make up for the shortfall in data some fantastic DSP happens - and very
good it is too.

But all it does is just dress the data prior to processing on or in output
devices. Don't you agree?

This is getting way back to the original point in a thread somewhere that a
sensor outputs an image.

It doesn't - a sensor needs a heck of a lot of supporting kit before an
image may be observed.

Artie
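
A minimal Python sketch of that "supporting kit", assuming the raw data
has already been demosaiced to linear RGB (real pipelines have many more
stages, and the white-balance gains and gamma here are invented):

    import numpy as np

    def render(rgb_linear, wb=(2.0, 1.0, 1.5), gamma=2.2):
        """Turn linear demosaiced RGB (floats 0..1) into a display image.

        wb gains and gamma are placeholder values, not any camera's.
        """
        out = rgb_linear * np.array(wb)      # white balance
        out = np.clip(out, 0.0, 1.0)         # clip highlights
        out = out ** (1.0 / gamma)           # gamma for display
        return (out * 255).astype(np.uint8)  # 8-bit output image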
"Brian C. Baird" wrote in message
.. .
In article ,
says...
A 3.4M sensor with 3 photodetectors per site gives 3.4M times 3 = (erm)
10.2M data values


No. The 10.2M values are interpolated to form 3.4M usable values.

A 6M with 1 photodetector per site gives (I can do this!) 6M times 1 =

6M
data values


Not quite. Due to the mosaic process, you lose a few pixels to give you
a clean edge. That's why you read about "effective megapixels" being
slightly smaller than the total number of photodetectors.

I am trying to figure how a 3.4M sensor can render an image comparable

with
a 6M or 8M sensor - its just gorra be connected with the quantity of

data
collected at the sensor.


Quick answer: it can't.

What say y'all?

Artie


Go read a book, Artie.



 



