Why don't Sony and Pentax have this problem? Dead pixels, defective pixels



 
 
#11  April 12th 11, 09:54 AM, posted to rec.photo.digital.slr-systems, rec.photo.digital
Me
Posts: 796
Nikon sensor sizes

On 12/04/2011 8:45 p.m., me wrote:
On Mon, 11 Apr 2011 21:16:42 -0700, nospam wrote:

In article , Neil Harrington wrote:

I'm pretty sure that the size stated is for the effective pixels /
imaging area.

You may be right. On the other hand, if that's so then why do they give the
total Mpixels too?


it's a bigger number, so why not use it?

I have never really understood the reason for that. Do
the other 0.6 Mpixels not do anything *at all*?


it needs pixels around the periphery for black level, among other
things.


Through the years (D70/D200/D300) I've seen different raw converters
also come up with different image sizes for a given camera.

Likewise, but I suspect that's just where they cut off the edges of the
RGBG matrix in demosaicing.
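
(Regarding the 0.6 Mpixel question above: for a sense of scale, here's a quick back-of-the-envelope sketch in Python. The sensor dimensions and the gap are illustrative numbers only, not taken from any Nikon spec sheet; the point is that a masked border a few dozen photosites wide is enough to account for a difference of that size.)

    # Rough arithmetic only: how wide a masked border accounts for a given
    # gap between "total" and "effective" pixel counts?  All numbers made up.
    effective_w, effective_h = 4288, 2848        # hypothetical imaging area
    gap = 0.6e6                                  # the ~0.6 Mpixel difference

    # A uniform border b photosites wide adds 2*b*(W + H) + 4*b^2 photosites;
    # ignoring the tiny 4*b^2 term and solving for b:
    border = gap / (2 * (effective_w + effective_h))
    print(f"~{border:.0f} photosites of border all round")   # ~42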
#12  April 12th 11, 12:47 PM, posted to rec.photo.digital.slr-systems, rec.photo.digital
Better Info[_6_]
Posts: 242
Nikon sensor sizes

On Tue, 12 Apr 2011 04:45:30 -0400, me wrote:

On Mon, 11 Apr 2011 21:16:42 -0700, nospam
wrote:

In article , Neil
Harrington wrote:

I'm pretty sure that the size stated is for the effective pixels /
imaging area.

You may be right. On the other hand, if that's so then why do they give the
total Mpixels too?


it's a bigger number, so why not use it?

I have never really understood the reason for that. Do
the other 0.6 Mpixels not do anything *at all*?


it needs pixels around the periphery for black level, among other
things.


Through the years (D70/D200/D300) I've seen different raw converters
also come up with different image sizes for a given camera.


You have 3 different issues involved here.

The actual number of photosites includes every one on the sensor, including
those outside of the imaging area (the difference between the "total" and
"effective" pixel counts). These non-imaging areas are used for setting black
levels, reading thermal noise, etc., and as control groups of photosites that
all the others are tested against. These large borders of black and white
rectangular blocks (and in one case I recall seeing a purple region), never
seen in any of your images, can be viewed by converting RAW files with DCRAW's
command-line options.
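
(For anyone who wants to try that today, here is a rough sketch. The filename is a placeholder, and it assumes a dcraw build whose -E switch is documented as "-D, but masked pixels are not cropped"; older builds may differ.)

    # Sketch: dump the raw photosite data twice with dcraw, once keeping the
    # masked border (-E) and once cropped to the imaging area (-D), so the
    # two PGM files can be compared.  Assumes dcraw supports -E; the input
    # filename is hypothetical.
    import shutil
    import subprocess

    raw_file = "DSC_0001.NEF"

    subprocess.run(["dcraw", "-E", "-4", raw_file], check=True)  # every photosite
    shutil.move("DSC_0001.pgm", "full_sensor.pgm")

    subprocess.run(["dcraw", "-D", "-4", raw_file], check=True)  # imaging area only
    shutil.move("DSC_0001.pgm", "effective_area.pgm")

Comparing the dimensions of the two PGMs shows how many rows and columns of masked photosites surround the imaging area.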

If I recall, this whole-sensor image is only available in the PGM format
output. I did it years ago using the 100% hardware RAW data from CHDK cameras
just to see what it looked like, so don't ask me today which command-line
switches allowed this. It might have just been the -D switch, for "document
mode"; I don't really recall now. Other cameras may automatically truncate
these out-of-bounds regions in the RAW files they spit out for you. This is
not the case with CHDK RAW, where every photosite on the sensor is recorded in
the hardware RAW file (though not if using CHDK's DNG file-format option).
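
(As a rough illustration of what those masked photosites get used for, here is a sketch with entirely made-up numbers and a made-up border layout; a real converter reads the layout and black level from the file's metadata.)

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up whole-sensor frame: a 42-column masked strip on the left that
    # never sees light (just a ~200 ADU black pedestal plus noise), then the
    # imaging area, whose signal sits on top of the same pedestal.
    masked  = rng.normal(200, 3, size=(480, 42))
    imaging = rng.normal(1400, 30, size=(480, 640))
    full    = np.hstack([masked, imaging])

    # One job of the border photosites: estimate the black level frame by
    # frame and subtract it from the imaging area.
    black_level = np.median(full[:, :42])
    calibrated  = full[:, 42:] - black_level
    print(round(black_level), round(calibrated.mean()))   # ~200, ~1200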

The size of any final JPG, TIF, or other image produced from the smaller total
of "effective" pixels (photosites) depends on the interpolation algorithm
being used.

The one in the camera is generally fast and discards sizeable areas of imaging
photosites along the borders. The reason is that each RGGB group of photosites
is interpolated into the adjoining RGB image pixels, which requires every
photosite to be surrounded by a certain number of neighbors. Edge and corner
photosites lack these supporting photosites on all sides, so their intended
colors in the resulting RGB file cannot be judged as reliably. To simplify and
speed up the conversion they are often discarded from the final image. But not
completely: their values are still used to create the colors of pixels further
from the edges and corners, being interpolated into pixels up to four or more
photosites away, depending on the interpolation method.
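
(A toy demosaic in Python/NumPy makes that border loss concrete. This is only a sketch of plain bilinear interpolation on an RGGB mosaic, not what any particular camera does; real pipelines use larger neighborhoods, so they drop more than the single row and column trimmed here.)

    import numpy as np

    def bilinear_demosaic_interior(raw):
        # Toy bilinear demosaic of an RGGB mosaic (2-D array).  Each output
        # pixel averages the like-colored photosites in its 3x3 neighborhood,
        # so the outermost photosites, which lack a full set of neighbors,
        # are simply dropped from the output.
        h, w = raw.shape
        r = np.zeros((h, w), bool); r[0::2, 0::2] = True   # red photosites
        b = np.zeros((h, w), bool); b[1::2, 1::2] = True   # blue photosites
        g = ~(r | b)                                       # green photosites

        out = np.zeros((h - 2, w - 2, 3))
        for c, mask in enumerate((r, g, b)):
            vals = np.where(mask, raw, 0.0)
            # 3x3 sums of like-colored values and counts at interior positions.
            tot = sum(vals[y:y + h - 2, x:x + w - 2] for y in range(3) for x in range(3))
            cnt = sum(mask[y:y + h - 2, x:x + w - 2] for y in range(3) for x in range(3))
            out[..., c] = tot / cnt
        return out

    mosaic = np.random.rand(6, 8)                # stand-in for raw photosite data
    rgb = bilinear_demosaic_interior(mosaic)
    print(mosaic.shape, "->", rgb.shape[:2])     # (6, 8) -> (4, 6): border dropped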

Now we get to interpolation methods. Better interpolation algorithms can deal
with these edge and corner pixels and will include them in the resulting JPG
or TIF file. Their colors may not be as precise, because they lack surrounding
photosites on all sides to determine their intended colors, but some people
find the extra resolution and FOV of the photographic detail contained in
these edges to be worth more than the color problems, especially for B&W
images, where luminance detail is what matters most.
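
(One common trick for keeping those edge photosites, sketched here with NumPy rather than taken from any particular converter: mirror the mosaic outward by an even number of photosites, so the Bayer pattern keeps its phase and every real photosite gains a full, if partly invented, neighborhood.)

    import numpy as np

    mosaic = np.random.rand(6, 8)                 # stand-in RGGB mosaic
    padded = np.pad(mosaic, 2, mode="reflect")    # even-width pad keeps RGGB phase

    # The real data is untouched; the mirrored ring just supplies the missing
    # neighbors for the edge and corner photosites.
    print(padded.shape)                                  # (10, 12)
    print(np.array_equal(padded[2:-2, 2:-2], mosaic))    # True

Run the toy demosaicer from the previous sketch on the padded mosaic and crop one pixel of mirrored output all round, and you get an RGB image covering every real photosite, edges and corners included; their colors lean on mirrored (that is, invented) neighbors, which is exactly the precision trade-off described above.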

 



