Bayer sensor and MX



 
 
#1 - Sosumi - February 8th 08, 02:16 PM - rec.photo.digital.slr-systems

The way the Bayer sensor works, it "makes up" color information from the
surrounding pixels. But if you do multiple exposures of exactly the same
scene, the picture gets sharper in proportion to the number of exposures.
Much of that is because noise is distributed randomly, so a spot that held
a "piece" of noise in one exposure is overwritten with real data in the next.
What I was wondering: does the pixel interpolation always work the same way,
or is it also slightly random (shifting)? In other words, do you actually get
more information, or only less noise? It seems hard to believe that every
take of a picture would give exactly the same result down at the
pixel-peeping level.

And also: do all Bayer sensors work exactly the same, or is there a
difference between Nikon, Canon, older and newer models?


--
Sosumi


#2 - acl - February 8th 08, 02:51 PM - rec.photo.digital.slr-systems

On Feb 8, 5:16 pm, "Sosumi" wrote:
The way the Bayer sensor works, it "makes up" color information from the
surrounding pixels. But if you do multiple exposures of exactly the same
scene, the picture gets sharper in proportion to the number of exposures.
Much of that is because noise is distributed randomly, so a spot that held
a "piece" of noise in one exposure is overwritten with real data in the next.



What I was wondering: does the pixel interpolation always work the same way,
or is it also slightly random (shifting)?


I haven't heard of algorithms using such an approach. Here's a survey
of the field:
http://citeseer.ist.psu.edu/cache/pa...mosaicking.pdf
or if you prefer
http://preview.tinyurl.com/2qo63e
(you can just skip to the photographs if you want, there's a good
range of the various errors that can be introduced by demosaicing
algorithms).

In other words, do you actually get more information, or only less noise?
It seems hard to believe that every take of a picture would give exactly
the same result down at the pixel-peeping level.


Aside from various kinds of noise, why is it surprising?


#3 - Fred - February 8th 08, 06:10 PM - rec.photo.digital.slr-systems

Sosumi wrote:
What I was wondering: does the pixel interpolation always work the same way?


Ask Mr. G about the Drizzle algorithm, for example:
http://www.astrosurf.com/buil/iris/tutorial2/doc12_us.htm
Don't forget that drizzle applies only to undersampled images.



--
Frédéric
#4 - Ben Brugman - February 8th 08, 07:24 PM - rec.photo.digital.slr-systems


"Sosumi" schreef in bericht
...
The way the Bayer sensor works, it "makes up" color information from the
surrounding pixels. But if you do multiple exposures of exactly the same
scene, the picture gets sharper in proportion to the number of exposures.


No, the picture does not get sharper. If a single picture contains a lot of
noise, its definition gets a little bit better, but for most normal pictures
(in normal light) taking more pictures of the same scene does not make it
sharper.

In theory, small movements of the camera, compensated for when the pictures
are added together, will give a better picture and less risk of moire. In
theory this method can even make pictures sharper, but then you need a very
large number of the 'same' pictures to make them noticeably sharper.


Much of that is because noise is distributed randomly, so a spot that held
a "piece" of noise in one exposure is overwritten with real data in the next.

Multiple pictures will reduce the noise, but a correctly exposed picture does
not contain much noise to begin with. (Long-exposure pictures benefit from
multiple exposures, but normally exposed pictures hardly do.)




What I was wondering: does the pixel interpolation always work the same way,
or is it also slightly random (shifting)? In other words, do you actually get
more information, or only less noise? It seems hard to believe that every
take of a picture would give exactly the same result down at the
pixel-peeping level.

And also: do all Bayer sensors work exactly the same, or is there a
difference between Nikon, Canon, older and newer models?


There are some variations.
Most sensors have 2 greens, 1 red and 1 blue in every square of four cells.
There are sensors where the second green is replaced by another color.
There are also sensors that are CMY rather than RGB, but I haven't seen
those in cameras lately.

A normal Bayer sensor looks like this:

GRGRGRGR
BGBGBGBG
GRGRGRGR
BGBGBGBG

The simplest interpolation is, at each pixel:

Green pixel (position):
Green = G (the pixel itself)
Red = (Rl + Rr)/2, from the red neighbours to the left and right
Blue = (Bu + Bd)/2, from the blue neighbours above and below
(for a green pixel in a BG row it is the other way round: blue comes from
left and right, red from above and below)

Red pixel (position):
Green = (Gl + Gr + Gu + Gd)/4
Red = R (the pixel itself)
Blue = (B + B + B + B)/4, from the four diagonal corners around R

Blue pixel (position):
Green = (Gl + Gr + Gu + Gd)/4
Red = (R + R + R + R)/4, from the four diagonal corners around B
Blue = B (the pixel itself)

This is without sharpening or any other fancy stuff.
(In general, sharpening subtracts a little of the pixels lying further away.)

When using multiple exposures with shifted pixels, something similar is done
with far smaller output pixels, where the 'large' sensor pixels are shifted
over the small ones. With sharpening this gives a slightly sharper picture.

As you can see, the red and blue pixel positions get the information for the
blue and red colors from quite some distance, so the resolution for red
alone or blue alone is significantly lower than for black and white.
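
Just to make that concrete, here is a minimal sketch of this bilinear
interpolation in Python/NumPy. It is only an illustration: it assumes the
GRGR/BGBG layout above, the function name is made up, and real raw
converters use far more sophisticated, edge-aware algorithms.

import numpy as np

def bilinear_demosaic(raw):
    # Naive bilinear demosaic of a GRGR/BGBG Bayer mosaic (illustration only).
    # raw: 2-D array; even rows are G R G R ..., odd rows are B G B G ...
    # Returns an (H, W, 3) array with channels ordered R, G, B.
    h, w = raw.shape
    p = np.pad(raw, 1, mode="edge")   # replicate edges so every pixel has 8 neighbours
    rgb = np.zeros((h, w, 3), dtype=float)

    for y in range(h):
        for x in range(w):
            py, px = y + 1, x + 1     # coordinates in the padded array
            up, down = p[py - 1, px], p[py + 1, px]
            left, right = p[py, px - 1], p[py, px + 1]
            cross = (up + down + left + right) / 4
            corners = (p[py - 1, px - 1] + p[py - 1, px + 1] +
                       p[py + 1, px - 1] + p[py + 1, px + 1]) / 4

            if y % 2 == 0 and x % 2 == 0:   # green pixel in a GR row
                rgb[y, x] = [(left + right) / 2, p[py, px], (up + down) / 2]
            elif y % 2 == 0:                # red pixel
                rgb[y, x] = [p[py, px], cross, corners]
            elif x % 2 == 1:                # green pixel in a BG row
                rgb[y, x] = [(up + down) / 2, p[py, px], (left + right) / 2]
            else:                           # blue pixel
                rgb[y, x] = [corners, cross, p[py, px]]
    return rgb

The per-pixel loop is deliberately left unvectorised so the four cases map
one-to-one onto the rules written out above.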

ben







#5 - acl - February 8th 08, 08:02 PM - rec.photo.digital.slr-systems

On Feb 8, 9:10 pm, Fred wrote:
Sosumi wrote:
What I was wondering: does the pixel interpolation always work the same way?


Ask Mr. G about the Drizzle algorithm, for example:
http://www.astrosurf.com/buil/iris/tutorial2/doc12_us.htm
Don't forget that drizzle applies only to undersampled images.



But that is quite different: it exploits shifts in the input images to
increase the spatial sampling rate, as opposed to using an
interpolation that has a stochastic element. No?
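
To make the difference concrete, here is a very crude shift-and-add sketch
in Python/NumPy, in the drizzle spirit but not the real drizzle algorithm
(real drizzle spreads each input pixel's flux over all the fine cells it
overlaps; this toy version just drops it onto the nearest cell). The
function name and the nearest-cell shortcut are my own simplifications.

import numpy as np

def shift_and_add(frames, shifts, scale=2):
    # Very simplified drizzle-style stacking (illustration only).
    # frames: list of 2-D arrays of identical shape (the undersampled inputs).
    # shifts: list of (dy, dx) sub-pixel offsets of each frame, in input pixels.
    # scale:  upsampling factor of the finer output grid.
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)

    for frame, (dy, dx) in zip(frames, shifts):
        # Map each input pixel centre, displaced by the frame's shift,
        # onto the nearest cell of the finer output grid.
        ys = (np.arange(h)[:, None] + dy) * scale
        xs = (np.arange(w)[None, :] + dx) * scale
        iy = np.clip(np.rint(ys).astype(int), 0, h * scale - 1)
        ix = np.clip(np.rint(xs).astype(int), 0, w * scale - 1)
        iy = np.broadcast_to(iy, (h, w))
        ix = np.broadcast_to(ix, (h, w))
        np.add.at(acc, (iy, ix), frame)
        np.add.at(weight, (iy, ix), 1.0)

    # Cells never hit by any frame stay zero (with few frames the fine grid
    # has holes); everything else is the average of the drops that landed there.
    return np.divide(acc, weight, out=np.zeros_like(acc), where=weight > 0)

The point is simply that the extra resolution comes from the known shifts
between frames, not from any randomness in the interpolation itself.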
#6 - nospam - February 8th 08, 08:21 PM - rec.photo.digital.slr-systems

In article , Sosumi
wrote:

The way the Bayer sensor works, it "makes up" color information from the
surrounding pixels.


it doesn't 'make up' colour information, it uses multiple pixels to
calculate it.

And also: do all Bayer sensors work exactly the same, or is there a
difference between Nikon, Canon, older and newer models?


the specific demosaic algorithm used will be different, just as there
are differences among different raw converters. also, newer cameras
may have better algorithms than older ones.
#7 - Sosumi - February 8th 08, 10:56 PM - rec.photo.digital.slr-systems


"ben brugman" wrote in message
bel.net...

"Sosumi" schreef in bericht
...
The way the Bayer sensor works, it "makes up" color information from the
surrounding pixels. But if you do multiple exposures of exactly the same
scene, the picture gets sharper in proportion to the number of exposures.


No, the picture does not get sharper. If a single picture contains a lot of
noise, its definition gets a little bit better, but for most normal pictures
(in normal light) taking more pictures of the same scene does not make it
sharper.


You're wrong. I took several pictures, and the more exposures in the MX, the
sharper the picture. Less noise is sharper. If you blow the pictures up to
100% you clearly see noise, grain or pixels in the normal shots, while in the
MX pictures the pixels are hardly visible: it's much smoother.

In theory, small movements of the camera, compensated for when the pictures
are added together, will give a better picture and less risk of moire. In
theory this method can even make pictures sharper, but then you need a very
large number of the 'same' pictures to make them noticeably sharper.


With 5-10 exposures on one picture the difference is already very
noticeable.

Much of that is because noise is distributed randomly, so a spot that held
a "piece" of noise in one exposure is overwritten with real data in the next.

Multiple pictures will reduce the noise, but a correctly exposed picture does
not contain much noise to begin with. (Long-exposure pictures benefit from
multiple exposures, but normally exposed pictures hardly do.)


No, you're wrong. See above.


[rest of quoted post snipped]

--
Sosumi


#8 - Jeremy Nixon - February 9th 08, 12:07 AM - rec.photo.digital.slr-systems

Sosumi wrote:

You're wrong. I took several pictures, and the more exposures in the MX, the
sharper the picture. Less noise is sharper. If you blow the pictures up to
100% you clearly see noise, grain or pixels in the normal shots, while in the
MX pictures the pixels are hardly visible: it's much smoother.


No... you're wrong. It doesn't get sharper, it simply reduces the noise.
Less noise isn't "sharper", it's "less noise". You are increasing the
signal to noise ratio.

With 5-10 exposures on one picture the difference is already very
noticeable.


Multiple exposure blending reduces noise by a known and mathematically
demonstrable amount. Three exposures doubles the signal-to-noise ratio.
Nine exposures triples it.

--
Jeremy Nixon | address in header is valid
(formerly )
http://www.flickr.com/photos/100mph/
#9 - acl - February 9th 08, 01:33 AM - rec.photo.digital.slr-systems

On Feb 9, 3:07 am, Jeremy Nixon wrote:
Sosumi wrote:
You're wrong. I took several pictures, and the more exposures in the MX, the
sharper the picture. Less noise is sharper. If you blow the pictures up to
100% you clearly see noise, grain or pixels in the normal shots, while in the
MX pictures the pixels are hardly visible: it's much smoother.


No... you're wrong. It doesn't get sharper, it simply reduces the noise.
Less noise isn't "sharper", it's "less noise". You are increasing the
signal to noise ratio.

With 5-10 exposures on one picture the difference is already very
noticeable.


Multiple exposure blending reduces noise by a known and mathematically
demonstrable amount. Three exposures doubles the signal-to-noise ratio.
Nine exposures triples it.


I'd say that four exposures double the S/N and nine triple it (noise
averages down as the square root of the number of frames). No?
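
A quick toy check of that, assuming nothing more than independent Gaussian
noise added to a fixed synthetic scene (the array sizes, noise level and
names here are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(0, 255, size=(100, 100))   # any fixed "scene"

def noise_std_after_stacking(n_frames, read_noise=10.0):
    # Std of the residual noise after averaging n_frames noisy copies.
    frames = signal + rng.normal(0, read_noise, size=(n_frames, 100, 100))
    stacked = frames.mean(axis=0)
    return (stacked - signal).std()

for n in (1, 4, 9, 16):
    print(n, round(noise_std_after_stacking(n), 2))
# Roughly 10.0, 5.0, 3.3, 2.5: the noise falls as 1/sqrt(N),
# so the signal-to-noise ratio rises as sqrt(N).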

Anyway I think we're all wasting our time (unless someone reading this
thread later finds some of the replies interesting). It looks like the
OP is just stupidly attacking (and not for the first time).
#10 - Paul Furman - February 9th 08, 06:14 AM - rec.photo.digital.slr-systems

Thanks, Ben, for a nice clear explanation. This can be really effective for
astronomy, but that is pushing the limits far beyond normal photography. For
astro work they make amazing cameras at astounding prices, with extremely
high performance, but I have *never* heard of anyone using those cameras for
conventional photography, because it just doesn't matter. If it mattered, you
would see someone using those $13,000 cooled high-performance scientific
cameras to shoot diamond ads or sports or Hollywood movies or something, but
nobody does.

ben brugman wrote:

"Sosumi" schreef in bericht
...
The way the Bayer sensor works, it "makes up" color information
depending on the surrounding pixels. But if you do multiple exposures
with exactly the same scene, the picture gets more sharp according to
the amount of exposures.


No the picture does not get more sharp. If there is a lot of noise in one normal
picture, the definition of the picture get's a littlebit better, but for most normal
pictures (in normal light) taking more pictures of the same scene does not make
it sharper.

In theory small movements of the camera and adjusting for these movements
when adding the pictures, will give a better picture and less risc of moire.
In theory this method can even make pictures sharper, but then you need
a very large amount of the 'same' pictures to get the pictures noticeble sharper.

Much is due to the fact that noise is distributed randomly and hence,
the same place where a "piece" of noise was before, it's over written
with data the next exposure.


Multiple pictures will reduce the noice, but with correctly exposed pictures
there is not a lot of noise. (Long exposure pictures will benefit from multiple
exposure, but not normally exposed pictures).

What I was wondering: do the pixel interpolation always work the same
way? Or is this also slightly random (shifting)? So do you, in fact,
get more information or only less noise? It just seems hard to
believe, that every take of a picture would give exactly (I mean pixel
deep peeping) the same result.

And also: do all Bayer sensor work exactly the same, or is there a
difference in Nikon, Canon, older, newer models?


There are some variations.
Most sensors have 2 greens, 1 red and 1 blue for every four cells in a square.
There are sensors where the second green is replace by another color.
There are sensors which are not RGB, but YMC, but I haven's seen them them lately
in camera's.

Normal Bayer sensors

GRGRGRGR
BGBGBGBG
GRGRGRGR
BGBGBGBG

Interpolation which is most simple is that, on each pixel

Green pixel (position).
Green = G (from the pixel)
Red= (Rl+Rr)/2 from the left and the right
Bleu = (Bu+Bd)/2 from the up and down bleu.

Red pixel (position).
Green = (Gl+Gr+Gu+Gd)/4
Red = R (from the pixel)
Bleu = (B+B+B+B)/4 from the 4 corners of R

Bleu pixel (position).
Green = (Gl+Gr+Gu+Gd)/4
Red = (R+R+R+R) /4 from the 4 corners of B
Bleu = B (from the pixel)

This is without sharpening or any other fancy stuf.
(In general sharpening subtracts a little bit of pixels laying further away).

When using multiple exposure with shifted pixels, something similar is done,
with far smaller pixels, where the 'large' pixels are shifted over the small pixels.
Using sharpening this will result in a slightly sharper picture.

As you can see the Red and Bleu pixels (positions) get the information of the
Bleu and Red colors from quite some distance.
The resolution for red only or bleu only is significant less than for black and white.

 



