A Photography forum. PhotoBanter.com



Digital quality (vs 35mm): Any real answers?



 
 
  #71  
Old July 23rd 04, 08:08 PM
Roland Karlsson

Toralf wrote:

Hi.


Hi yourself - now - it is obvious from your post that you
have not been hanging around here very long. This is the
favourite topic that pops up now and then. From that
perspective, your questions are more than valid.

I'm still wondering about how good the image quality of modern digital
cameras (especially SLRs) really is, in particular how it compares
with 35mm film. I've seen many articles on the subject on the Net, but
few of them seem to give you a lot of tangible information (I want to
see the numbers, please), and I can't help feeling that tests they
refer to are usually done to prove a point, i.e. that digital cameras
are as good as 35mm, which is not the way you do proper research.


Proper research? Hmmm ... I don't think it is reasonable to expect
that anyone will start some kind of proper research on this topic. Film
is the old medium - digital is the new - gradually one will diminish
and the other take over. Whether any comparison is biased or not does
not matter in the slightest. Oh - I understand that you want to know, and I
want to know, and lots of people want to know. But proper research is
not done just because somebody wants to know.

On the other hand I think you have missed that some really nice
comparisons have been made. Lots of references can be found in old
topics in this forum.

1. What is the resolution of a 35mm film anyway? I think I read
somewhere that a colour negative is at least 3000dpi. Is that
correct? How about black&white? (Yeah I know, a film doesn't have
pixels in exactly the same sense as a digital image, but it *is* made
up of discrete elements after all.)


I think it is rather safe to say that 35 mm film is at most 20 Mpixels
with regard to resolution. But ... before you get there you start to see grain.
If you like the grainy look, that's just perfect. If you don't, then maybe
6 Mpixels or so gives you grain-free pictures from the very best 35 mm
films.

I also think it is rather safe to say that at higher ISO, film simply
has to give way to DSLR cameras.
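
To put rough numbers on the figures above, here is a minimal sketch (Python) of the usual scan-resolution arithmetic. The 3000 dpi value comes from the question, the 24 x 36 mm frame is the standard 35 mm format, and the 4000 dpi line is added only to show roughly where the 20 Mpixel ceiling comes from.

MM_PER_INCH = 25.4

def scan_megapixels(dpi, width_mm=36.0, height_mm=24.0):
    """Pixel count (in millions) of a frame scanned at `dpi`."""
    px_w = dpi * width_mm / MM_PER_INCH
    px_h = dpi * height_mm / MM_PER_INCH
    return px_w * px_h / 1e6

print(f"3000 dpi: {scan_megapixels(3000):.1f} MP")   # ~12 MP
print(f"4000 dpi: {scan_megapixels(4000):.1f} MP")   # ~21 MP, roughly the 20 Mpixel ceiling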

2. What about the print? 300dpi?


Hmm ... yes what about it?

It all depends on the viewing distance and your demands.
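
As a concrete illustration of that dependence, here is a minimal sketch (Python) of the usual print-size arithmetic; the 3000 x 2000 pixel file is a hypothetical stand-in for a 6 Mpixel camera.

def max_print_size(px_w, px_h, dpi=300):
    """Largest print, in inches, at the given output resolution."""
    return px_w / dpi, px_h / dpi

print(max_print_size(3000, 2000))        # (10.0, ~6.7) inches at 300 dpi
print(max_print_size(3000, 2000, 200))   # (15.0, 10.0) inches at 200 dpi, viewed from further away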

3. I know that the most common sensors are made up of individual
elements for the red, green and blue channels, arranged in a special
pattern, whose data is somehow interpolated into RGB pixels. But what
exactly does e.g. 6 megapixels mean in that context? Does it mean that
the sensor has (just) 6 million elements, or that data from a higher
number (like 18 or 24 million) is combined into 6 million RGB pixels?


A 6 Mpixel camera (using a Bayer mosaic filter) has 6 million sensels:
3 million green, 1.5 million red and 1.5 million blue. Advanced
algorithms make a picture with a resolution of (nearly) 6 Mpixels.
The color resolution is lower though - just as for your TV. And the
same kind of problems exist if you have equal-luminosity color patterns.
Fortunately, your eye cannot resolve such patterns either.
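
A minimal sketch (Python with NumPy) that makes those sensel counts concrete; the 3000 x 2000 grid and the RGGB layout are assumptions for illustration, not any particular camera's sensor.

import numpy as np

h, w = 2000, 3000                 # hypothetical 6 Mpixel sensor
bayer = np.empty((h, w), dtype="<U1")
bayer[0::2, 0::2] = "R"           # red on even rows, even columns
bayer[0::2, 1::2] = "G"           # green fills the two remaining positions...
bayer[1::2, 0::2] = "G"           # ...of every 2x2 block
bayer[1::2, 1::2] = "B"           # blue on odd rows, odd columns

for colour in "RGB":
    print(colour, (bayer == colour).sum())
# R 1500000, G 3000000, B 1500000 -- half green, a quarter red, a quarter blue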

The same question more bluntly put: When Canon/Nikon/Pentax is talking
about 6MP, is that just as big a lie as the one about 10MP on Sigma
cameras? (I'm hoping not, as I think the Sigma/Foveon way of counting
really takes the cake.)


The Sigma counting is a lie IMHO. Most here agree - some don't. Lots
of fun reading here in old posts

The Canon/Nikon/Pentax/all_other_except_sigma Bayer sensor counting
is not a lie IMHO. Or at least it is a white lie. A Bayer sensor with
6 million sensels is capable of resolving 6 Mpixels (except for strangely
colored patterns that the eye cannot resolve either).

4. Can the inaccuracy associated with the above mentioned
interpolation be quantified and/or measured against e.g. the error
introduced by scanning a negative with a film-scanner? And how does it
compare with pixel interpolation in the scanning sense?


No comparison is possible IMHO. The Bayer computation is not the same
as plain interpolation. Interpolation is only used to extract the color
information. The direct values are used for luminosity - no
interpolation is made for luminosity.
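
To make the split between measured and interpolated values concrete, here is a minimal bilinear demosaic sketch (Python with NumPy and SciPy). It illustrates the general idea only - it is not the proprietary algorithm any camera maker actually uses - and the RGGB layout is the same assumption as above.

import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):
    """Fill in the two missing colours at each sensel from its neighbours."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    out = np.zeros((h, w, 3))
    window = np.ones((3, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        summed = convolve(raw * mask, window, mode="mirror")          # sum of known neighbours
        counts = convolve(mask.astype(float), window, mode="mirror")  # how many were known
        out[..., ch] = np.where(mask, raw, summed / counts)           # keep measured values as-is
    return out

Note how the directly measured value at each sensel is kept untouched; only the two missing colour components are estimated from the neighbourhood.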

5. And how about those other parameters I mentioned briefly above -
like different kinds of geometric distortions, noise, flat field bias
etc.? Can those be compared with the ones of plain old film?


Yes - but are those not lens properties?

6. And the chromatic aberration effects? How serious are they these
days? And are the full-frame sensors that are actually found in some
high-end cameras now, in any way comparable to film in that respect?


Same here - those are lens properties.

Well, maybe some people will say I have a somewhat critical or
conservative attitude towards digital cameras, but I actually think
you ought to be a bit sceptical when something "new and wonderful"
comes along; new technology is too often introduced for technology's
own sake, IMO.


Ahhh ... you also missed that digital is not something new and
wonderful any more. It has been here for a while. And - it is rather wonderful.
It has improved life for photographers a lot (IMHO).



/Roland
  #72  
Old July 23rd 04, 08:16 PM
Gordon Moat

Toralf wrote:

[ ... ]


3. I know that the most common sensors are made up of individual
elements for the red, green and blue channels, arranged in a special
pattern, whose data is somehow interpolated into RGB pixels. But what
exactly does e.g. 6 megapixels mean in that context? Does it mean that
the sensor has (just) 6 million elements, or that data from a higher
number (like 18 or 24 million) is combined into 6 million RGB pixels?

The same question more bluntly put: When Canon/Nikon/Pentax is talking
about 6MP, is that just as big a lie as the one about 10MP on Sigma
cameras? (I'm hoping not, as I think the Sigma/Foveon way of counting
really takes the cake.)

OK. I've been thinking a bit about the "luminance" argument that has
popped up a number of times (but not a lot, so I may have overlooked
something), and I'm not sure I'm convinced - although it depends a bit
on how you see it.


I am getting an error for the r.p.d. news group from my news server, so you
might not be seeing my posts. Your luminance question is somewhat addressed
in the article link I posted in my other message, the specific link to the
Adam Wilt web site. Basically, if you look at CCD film scanners, they
usually have a trilinear array, and an extra white array. That extra white
array is specifically to address the problem you are finding.

If someone can get r.p.d. and wants to try to cross post from r.p.e.35mm to
r.p.d., please do so. It seems that the original poster might only be using
the r.p.d. news group, and thus missing some of my posts.



It seems to me that one of the key weaknesses of the standard "pattern"
sensor is actually that it cannot capture luminance.


Not entirely true, since the Green channel, and Green filtered areas, are
used somewhat for that purpose. Of course, those can be fooled in some
lighting situations.

When you're looking
through a filter, there is simply no way you can tell white light apart
from light of the colour associated with that filter! Furthermore, the
real problem seems not to be photographing a large area in one of the
primary colours like someone suggested, but rather capturing a tiny speck
of a distinct colour - especially if that colour is such that one or two
of the components are different from the surrounding data, and the
other(s) the same. For instance, imagine taking a picture of a small
red dot against a completely white background. Now, assume that the
size of that dot's projection onto the sensor matrix is one pixel, and
that it falls on a red pixel sensor. Surely, this will mean that the dot
won't show up at all in the picture - as there is no way the red sensor
can distinguish it from all the white points surrounding it, and the
adjacent green and blue sensors won't "see" it at all. If, on the other
hand, it falls on e.g. a green sensor, it should indeed show up, as that
sensor won't register any light at all, but there is no way of telling
its exact colour - all you know is that it's not green or a colour that
has a green component. (But it may be black or any shade of blue, red
or purple.)


There is an article at the Fill Factory web site on interpolation of Bayer
patterns. The article deals with various types of arrays that have been
used, and advantages and disadvantages of each. There have been
quasi-random Bayer patterns, and even variations on the usual, though each
method requires a different interpolation computation to avoid errors. Some
types work better for certain scenes with larger areas of colour, and some
types work better for edge definition. The main Bayer pattern in use by
most manufacturers is the best compromise, and gives the highest accuracy
interpolation.



For an exact representation of a dot to be guaranteed, the dot would
have to be of a size equivalent to at least 4 sensor elements (when the
matrix is made up of sets of one red, one blue and two green) - or more
than 3, anyway. By this argument, you could say that the real resolution
is actually the equivalent of 1/4 of the number of pixel sensors, or
1.5MP for a "6MP sensor".


An easy explanation of interpolation is that all the adjacent pixels are
examined in order to determine the value for one pixel. While this
oversimplification doesn't really explain the process, I think you can
see that the computation draws on an area larger than a 4 pixel spot. If
your theoretical red spot fell mostly across red-filtered pixels in the
Bayer pattern, then it would look more true to the real red than if it
fell mostly across the other colours. However, that happens at a scale
near the limits of resolution for the system, so it might not be noticeable.
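
A tiny sketch (Python, purely hypothetical values) of the quoted one-pixel red dot scenario, showing how what the sensor can tell depends entirely on which colour filter the dot happens to land on:

white = (1.0, 1.0, 1.0)
red_dot = (1.0, 0.0, 0.0)

def sensel_reading(light_rgb, filter_colour):
    """A sensel only records the component its colour filter passes."""
    return light_rgb["RGB".index(filter_colour)]

for filt in "RGB":
    print(f"{filt}-filtered sensel: white={sensel_reading(white, filt)}, "
          f"red dot={sensel_reading(red_dot, filt)}")
# R: 1.0 vs 1.0 -- the dot is indistinguishable from the white around it
# G: 1.0 vs 0.0 -- something dark is there, but it could be red, blue or black
# B: 1.0 vs 0.0 -- same ambiguity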



However, I admit that if the projection "touches" two elements instead
of just one - and it has to be larger than one pixel to do that - the
chance of getting the value right is much greater, and if it touches at
least three, chances are that there is at least one of each colour
channel, in which case a correct colour should be guaranteed. In other
words, it's fair to assume that a size of "two point something" pixels
is sufficient for a correct representation of our little speck.


I think the University of Pennsylvania also had an article from their
electrical engineering department that discussed interpolation issues. I
have a copy of that article in PDF, though I would need to search a bit to
pull that one up. Some of these issues you are bringing up have been
discussed on rec.photo.equipment.medium-format recently between David
Littleboy, myself, and a few other posters. The discussion was about the
medium format Phase One digital back, though it is relevant to your
questions.


So, I'd say that for purposes of quality comparisons, a 6MP array has
neither 6 million pixels nor the very conservative 1.5 million, but something in
between. It's actually tempting to guess that the "effective" number of
pixels is somewhere near the 3M the foveon has if you count the way
they'll teach you at most schools except the Sigma marketing academy.
(Which would mean that the sensors are equally good, but that Sigma are
the greatest liars...)

Furthermore, the luminance phenomenon is indeed useful in one sense,
namely that it allows for a good interpolation algorithm - and it does
so *because it isn't registered*. It didn't have to be the luminance, of
course, but I think an essential requirement for reasonable
interpolation is being able to make good assumptions about some
parameter that isn't actually captured.


An argument could be made that replacing one of the Green pixels on the
Bayer pattern with a white pixel might improve luminance accuracy.
Unfortunately, I think that might actually reduce colour accuracy, and with
interpolation still needed for adjacent pixels, it might reduce the overall
saturation of colour, or complicate the interpolation computations.



So, if the 6MP data is generated based on the assumption that the
luminance always makes up most of the light, and that turns out to be
true in a certain case, then maybe you can say that you *really* have
(close to) 6MP *in that particular situation.* Also, the general quality
of data produced by the camera is to a large extent determined by the
probability that the assumption will hold.

By the same token, you may make some kind of assumption about the data
from a foveon sensor or a colour scan that enables you to interpolate
extra pixels. For instance, you can make the assumption that all pixels
have a value "right between" the adjacent pixels, and get a type of
linear interpolation algorithm. That way you can at least double the
number of pixels and get decent results in most cases, I think.


A CCD film scanner is actually scanning three or four times in one pass. If
you think of how a trilinear array works, each colour is captured across
the height of the film. While there have been scanning digital cameras, and
digital backs, that work that way, they are unfortunately slow in usage.



Based on that, the comparison between the foveon and the other sensors
really boils down to comparing the quality of the interpolation for the
6MP data with the (potential) quality of the interpolating necessary to
get the same number of pixels based on the foveon output - or if you
like, the merits of the assumptions that lie behind the different
interpolations.


Fair assessment of the technologies. I think that the Foveon might improve
in the future, though only if they continue the R&D efforts. It might be
the situation that some other competing technology comes along soon that
changes the direction of digital imaging. Currently, the CCD chips are
quite good in comparison to CMOS, though the CMOS are much easier to
produce at larger sizes, and somewhat more economical.

Ciao!

Gordon Moat
A G Studio
http://www.allgstudio.com Updated!

  #75  
Old July 23rd 04, 08:44 PM
Roland Karlsson

Toralf wrote in news:cdrkv0$545:

OK. I've been thinking a bit about the "luminance" argument that has
popped up a number of times (but not a lot, so I may have overlooked
something), and I'm not sure I'm convinced - although it depends a bit
on how you see it.
... snipped away the rest ...


Your analysis is correct. You cannot detect the color of a small
dot that is just one pixel large with a Bayer sensor. If you have
a Foveon sensor you can. The hue resolution is much higher for
a Foveon (or any other sensor that detects all colors at each point)
than for a Bayer sensor.

But ... that is not as important as it first sounds. To understand
why not, there are two things you have to consider:

1. Sampling theory
2. Human vision

Sampling theory states that you must filter away all frequencies
at half the sampling frequency and higher to be able to make an
accurate reproduction of the incoming signal. This filter is
called an anti-alias filter, and it smooths the incoming signal over
nearby detectors, thus removing the problem with single-pixel input.
You simply don't have any single-pixel input to detect in the first
place. Some choose to call this a blur filter. And - in some sense
it is - but it is necessary to avoid strange artefacts in the picture.
If you have a sharp lens that is well focussed, you see lots of strange
things in a picture taken without an anti-alias filter, e.g. with a Sigma
camera.
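
A minimal one-dimensional sketch (Python with NumPy and SciPy) of that sampling argument; the frequencies and the simple box blur standing in for the anti-alias filter are assumptions chosen only to make the effect obvious.

import numpy as np
from scipy.ndimage import uniform_filter1d

x = np.linspace(0, 1, 2000, endpoint=False)
fine_detail = np.sin(2 * np.pi * 80 * x)       # 80 cycles across the frame

step = 20                                      # 100 samples per frame -> Nyquist limit is 50 cycles
aliased = fine_detail[::step]                  # unfiltered: shows up as a false 20-cycle pattern

blurred = uniform_filter1d(fine_detail, size=25, mode="wrap")   # crude stand-in for the AA filter
clean = blurred[::step]                        # the unresolvable detail is simply gone

print("unfiltered sample swing:", round(np.ptp(aliased), 2))    # ~1.9: strong false pattern
print("filtered sample swing:  ", round(np.ptp(clean), 2))      # ~0.0: no artefact left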

Human vision has very poor color resolution. So - your example with
single dots of another color is not relevant for photographs.
If you want to make abstract pictures, where hue is translated into
luminosity - then it matters though.


/Roland
  #77  
Old July 23rd 04, 09:28 PM
Stephen H. Westin

Toralf writes:

Stephen H. Westin wrote:
"William Graham" writes:
snip

That means a 24 x 36 mm sensing plane would need about 12 megapixels to have
the same resolution as film. Digital cameras are not too far from that
now....Perhaps in another couple of years..........

Huh? The Kodak almost-14MP DCS Pro 14n shipped over a year ago. And
the DCS Pro SLR/n has replaced it, using an improved sensor.

I think he meant *affordable* cameras with that many pixels.


That's a problem. If you want a full 24x36mm sensor, it will be
expensive. Producing that size of chip is just plain expensive. Not
only do you not get many from each silicon wafer, but yield is low. A
Kodak guy said a few years back that the yield is on the order of one
over some power of the area, and the exponent was greater than 2. So a
chip twice as big will probably have a yield of less than a quarter
that of the smaller chip.
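
Back-of-the-envelope version of that yield remark (Python); the exponent values are assumptions bracketing the "greater than 2" figure attributed to the Kodak engineer.

def relative_yield(area_ratio, k):
    """Yield of a chip `area_ratio` times larger, if yield ~ 1 / area**k."""
    return 1.0 / area_ratio ** k

for k in (2.0, 2.5, 3.0):
    print(f"k = {k}: doubling the area leaves {relative_yield(2, k):.0%} of the yield")
# k = 2.0: 25%, k = 2.5: ~18%, k = 3.0: ~12% -- and each (larger) die also eats more
# wafer area, so the cost per good chip rises even faster.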

I've actually been thinking that it's when we get there that
I'll buy a digital SLR.


Yeah, but for pros, the savings on film and processing can pay for a
good digital camera pretty quickly.

Also, *maybe* somewhere around that range the
"megapixel" race will slow down a bit, and perhaps then a new camera
won't be obsolete after about two months...

BTW. Do you know more about this sensor? It is full-frame, right? I'm
really interested in knowing if they have resolved the problems that
have led to the use of smaller sensors so far.


The basic problem is cost. Kodak makes a 22MP CCD that's 38x50mm. It
just costs a lot, and requires a lot of power.

The Kodak full-frame cameras have been plagued with a sensitivity to
incident angle, but that seems to be corrected in firmware pretty well
these days. They explicitly do *not* use microlenses; the thought is
that microlenses will aggravate any angle-of-incidence sensitivity.

These cameras actually don't use a Kodak sensor; instead, they buy
from a Belgian company called FillFactory. It's a CMOS sensor, but one
with tricks to increase the effective fill factor for less aliasing,
better sensitivity, etc. See http://www.fillfactory.com/ for more
about the company. They don't fabricate the chips. The first
generation was made in Israel (!), but now they come from the U.K.

It uses Nikon-mount lenses, and there is a Canon-mount sibling, the
SLR/c. Several people are using these in lieu of medium-format film
equipment, as they feel the image quality is better. And medium-format
backs reached 16MP some years ago; the best current single-shot backs
have 22MP.

Oh, and I'm waiting for that, too, on a 35mm-format camera (as I've
mentioned already), or at least dreaming about it. A replaceable back,
that is. Not necessarily a system that would allow you to switch between
digital and film, but something that would give you more flexibility in
the sensor department somehow.


Ah, but the sensor needs electronics to keep up with it (data paths,
DSP, etc.) So your replaceable back now costs far more than the body
it goes on. Not to mention the packaging and reliability challenges.

--
-Stephen H. Westin
Any information or opinions in this message are mine: they do not
represent the position of Cornell University or any of its sponsors.
  #80  
Old July 23rd 04, 11:42 PM
David Dyer-Bennet

(Stephen H. Westin) writes:

Toralf writes:

Stephen H. Westin wrote:
"William Graham" writes:
snip

That means a 24 x 36 mm sensing plane would need about 12 megapixels to have
the same resolution as film. Digital cameras are not too far from that
now....Perhaps in another couple of years..........
Huh? The Kodak almost-14MP DCS Pro 14n shipped over a year ago. And
the DCS Pro SLR/n has replaced it, using an improved sensor.

I think he meant *affordable* cameras with that many pixels.


That's a problem. If you want a full 23x36mm sensor, it will be
expensive. Producing that size of chip is just plain expensive. Not
only do you not get many from each silicon wafer, but yield is low. A
Kodak guy said a few years back that the yield is on the order of one
over some power of the area, and the exponent was greater than 2. So a
chip twice as big will probably have a yield of less than a quarter
that of the smaller chip.


Yep, the bigger sensor gets *lots* more expensive fast.

Looking at the Kodak DCS Pro 14n, I seem to remember it costing $4k a
year or so ago. That's pretty affordable. Let's see, I'd call it
equivalent in price to 200 rolls of film in professional use (i.e. not
1-hour lab develop-only processing). Which is to say that, for
professional use, it's *free*.
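
The break-even arithmetic behind that quip, as a minimal sketch (Python); the $20 per-roll figure for film plus professional processing is an assumed illustrative value, not from the post.

body_cost = 4000.0            # remembered price of the DCS Pro 14n, from the post
cost_per_roll = 20.0          # assumed: film stock plus pro lab processing

print(body_cost / cost_per_roll)   # 200.0 rolls to break even, matching the estimate above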
--
David Dyer-Bennet, , http://www.dd-b.net/dd-b/
RKBA: http://noguns-nomoney.com/ http://www.dd-b.net/carry/
Pics: http://dd-b.lighthunters.net/ http://www.dd-b.net/dd-b/SnapshotAlbum/
Dragaera/Steven Brust: http://dragaera.info/
 






