difficulty drum scanning negatives



 
 
#21
April 4th 04, 07:25 PM
Neil Gould

Hi,

Recently, Kennedy McEwen posted:
Neil Gould writes
Recently, Kennedy McEwen posted:
That is simply untrue although it is a very popular misconception -
*NO* reconstruction has taken place at the point that sampling
occurs.

Oh? Then, are you under the impression that the sample data and the
subject are in identity?


No, however the sampled data is in identity with the subject *after*
it has been correctly filtered at the input stage.

In which case, I disagree with your usage of the term "identity".

This principle is
the entire foundation of the sampling process. No information can
get past the correct input filter which cannot be accurately and
unambiguously captured by the sampling system.

"Accurately and unambiguously" = "No distortion".

The principle is not where the problem lies. It is in the implementation.

From your own response to an earlier post:
"With a drum scanner the spot size (and it's shape) is the anti-alias
filter, and the only one that is needed. One of the most useful features
of most drum scanners is that the spot size can be adjusted independently
of the sampling density to obtain the optimum trade-off between resolution
and aliasing..."
^^^^^^^^^^^^^^^^^^^^^^^^^
In another post, you reported:
"then the photomultiplier in the scanner produces a signal which is
proportional to the average illumination over
the area of the spot."

Sounds (and looks) like distortion to me, given that the "area of the
spot" may have more than one illumination level, and the recorded value is
averaged. ;-)
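
As a minimal sketch of that averaging (Python, with made-up transmittance
values and an arbitrary spot width, not figures from any real scanner): a
grain narrower than the spot survives only as a shift in the spot's mean.

# One scan line of hypothetical film transmittance; the single 0.0 is a "grain".
film = [1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
spot = 4  # the spot spans four of these cells

# Each sample is the average illumination over the spot area.
samples = [sum(film[i:i + spot]) / spot for i in range(0, len(film), spot)]
print(samples)  # [0.75, 1.0] -- the grain is reduced to a lowered average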

If properly filtered prior to sampling then the sampled data is a
*perfect* representation of the filtered subject. In short, there may be
*less* information in the properly sampled and reconstructed subject
than in the original, but there can never be more.

Which only further reinforces my disagreement with your usage of
"identitiy". I've not heard the term used in such a way that it includes a
"less than" clause. ;-)

However imperfect
reconstruction will result in artefacts and distortion which are not
present in the original subject - false additional information, and
jaggies fall into this category, they are not aliasing artefacts.

I didn't suggest that jaggies are aliasing artifacts. They are clearly
output representation artifacts, as are "lumpies" or other kinds of
distortions dependent on the representation of the pixels identified in
the numeric data resulting from sampling. My claim is that the numeric
data contains various distortions of the subject, and while some may be
assignable to the input filtering (including those you mentioned), others
are assignable to the practical limitations of math operations, and
that these errors are inextricable.

Each sample
represents a measure of the subject at an infinitesimally small point
in space (or an infinitesimally small point in time).

As you present in another post, the issue relevant to the topic appears to
be:
"However, since the grain is random and smaller than the spot size, each
aliased grain only extends over a single pixel in the image - but this can
be many times larger than the actual grain on the original. "

IOW, the measure of the subject is not "infinitesimally small", and by
your own admission, some aspects of the subject (e.g. minimum grain sizes)
can be smaller than the sample size.

However, more to the point, distortion is inextricably
inherent in the sampled data, and thus relevant to the "difficulty
drum scanning negatives".


Sorry Neil, but that is completely wrong.

Not according to your own posts (as excerpted, above).

I agree with those statements in your posts, even if you don't! ;-)

That, most certainly, is *NOT* a fact! Whilst I am referring to an
interpretation of the sampled data, the correct interpretation does
*not* introduce distortion. You appear to be hung up on the false
notion that every step introduces distortion - it does not.

I see. And, just what kind of system are you using that avoids such
artifacts as rounding errors, for example?

An excellent example of this occurs in the development of the audio
CD. The original specification defined two channels sampled at
44.1kHz with 16-bit precision and this is indeed how standard CDs
are recorded.

No, that's how CDs are duplicated or replicated.


No, that is the Red Book specification - I suggest you look it up -
how you get to that sampled data is irrelevant to the discussion on the
reconstruction filter.

Our disagreement boils down to whether artifacts are introduced by
real-world recording processes. The reason that I stressed how _audio_ is
recorded -- as opposed to the burning of the end result onto a CD
master -- is that the first stages of the recording process are somewhat
more analogous to scanning than "recording a CD".

MANY artifacts are introduced because of the lack of, as you have put it,
an adequate input filter. There is not a microphone made that will capture
actual acoustic events due to many factors, not the least of which is that
those events are typically not two dimensional in nature, but the
processes of the capturing devices (microphones) are. The rest of the
recording process is one of manipulation and error correction to create an
acceptable representation of the original acoustic events. I've not run
into anyone "in the biz" that would claim that these two are "in
identity", or that it would be possible to reconstruct the original
acoustic events from the sampled data (recording).

Finally, the process of reducing the recorded data to the 44.1/16 standard
introduces MORE errors by virtue of whether dithering is used, and if so,
which dithering algorithms one chooses. By the time a CD is ready for
purchase, it's much more akin to a painting than a scanned photograph,
which is why I think it was a poor choice as an example for this topic.
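
As a rough sketch of that last point (Python, a made-up test tone and
ordinary TPDF dither; nothing here describes any particular mastering
chain), the residual error changes with whether and how you dither before
truncating to 16 bits:

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(44100) / 44100.0
signal = 0.3 * np.sin(2 * np.pi * 997 * t)          # one second of a hypothetical tone

def quantise_16bit(x, dither):
    if dither:  # triangular (TPDF) dither of about +/- 1 LSB added before rounding
        x = x + (rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)) / 32767.0
    return np.round(x * 32767.0) / 32767.0           # 16-bit quantisation, scaled back

for dither in (False, True):
    err = quantise_16bit(signal, dither) - signal
    print(dither, float(err.std()))                  # the error profile depends on the choice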

Of course, this approach assumes that the entire image can be
adequately represented in 3000 or 2000ppi, which may not be the case,
just as many audiophiles clamour for HD-CD media to meet their higher
representation requirements.

And, is in fact, one of the issues at the root of my perspective. ;-)

Your assertion that the sampled data is inherently distorted and that
this inevitably passes into the reproduction is in complete
disagreement with Claude Shannon's 1949 proof. I suggest that you
will need much more backup than a simple statement of disagreement
before many people will take much notice of such an unfounded
allegation.

The crux of the matter is that I'm only interested in what happens in real
world implementations, as film in hand represents just that. I don't have
a problem with the theory, and not only understand it, but agree that *in
theory* the math behind sampling can lack distortion. However, I don't
live in theory, and have little real-world use for theoretical "solutions"
that can't be (or at least, aren't) realized. ;-)

To that end, I think I'll just rely on the results I've been able to
obtain. I, as I presume the OP, am interested in understanding the
limitations of the process. Your own posts have provided excellent bases
for the understanding of such limitations. What puzzles me is that you
don't see the "trade offs" that you spoke of as distortions of the
original subject. What, exactly, are you "trading off" that doesn't result
in a reduction of the available data in the subject?

As has already been pointed out, the smallest spot size available to
"commonplace" drum scanners is still larger than the smallest grains
in "commonplace" films. Other consequences of "real world" dot
shapes were discussed, as well. How can those *not* result in
distortions of the original subject? (The quotes are to suggest that
one may not consider a US$100k device to be "commonplace", yet it
will have these limitations).


Good God, I think he's finally got it, Watson! The spot is part of
the input filter of the sampling system, just as the MTF of the
imaging optics are!

I "had it" long before your first posts on the subject. However, I see
every stage of the real-world process as introducing errors, and thus
distortions of the subject.

Indeed these components (optics, spot etc.) can be used without
sampling in the signal path at all, as in conventional analogue TV,
and will result in exactly the same distortions that you are
referring to. If this is not proof that sampling itself does not
introduce an inherent distortion then I do not know what is!

As, to my knowledge, there is no system available that implements perfect
input filtering and flawlessly applies sampling algorithms, all that is
left is to expand my knowledge by being presented with such a system. ;-)

Just in case you haven't noticed, you have in the above statement
made a complete "about-face" from your previous statements - you are
now ascribing the distortions, correctly, to the input filter not the
sampling process itself, which introduces *no* distortion, or the
reconstruction filter which can introduce distortion (e.g. jaggies) if
improperly designed.

I'm not terribly concerned about sampling (e.g. the math) without input
filters (e.g. the implementation). I'm only concerned about systems. So
there's no "about face" involved, we're just interested in different
things, it seems. ;-)

Regards,

--
Neil Gould
--------------------------------------
Terra Tu AV - www.terratu.com
Technical Graphics & Media


#22
April 5th 04, 01:44 AM
Kennedy McEwen

In article .net, Neil
Gould writes
Hi,


From your own response to an earlier post:
"With a drum scanner the spot size (and it's shape) is the anti-alias
filter, and the only one that is needed. One of the most useful features
of most drum scanners is that the spot size can be adjusted independently
of the sampling density to obtain the optimum trade-off between resolution
and aliasing..."
^^^^^^^^^^^^^^^^^^^^^^^^^
In another post, you reported:
"then the photomultiplier in the scanner produces a signal which is
proportional to the average illumination over
the area of the spot."

Sounds (and looks) like distortion to me, given that the "area of the
spot" may have more than one illumination level, and the recorded value is
averaged. ;-)

Ok - I give up! I thought I was discussing the subject with someone who
understood a little of what they were talking about and merely required
some additional explanatory information. That comment indicates that
you simply do not have a clue what you are talking about at all, since
you are clearly incapable of understanding either the action of a
spatial filter or the difference between the action of the filter and
the action of sampling.

Please learn the basics of the topic before wasting people's time with
such drivel.

If properly filtered prior to sampling then the sampled data is a
*perfect* representation of the filtered subject. In short, there may be
*less* information in the properly sampled and reconstructed subject
than in the original, but there can never be more.

Which only further reinforces my disagreement with your usage of
"identitiy". I've not heard the term used in such a way that it includes a
"less than" clause. ;-)

Try *reading*! The identity is with the filtered subject which, having
been filtered, is less than the subject!

More obfuscation and/or deliberate misrepresentation!

However imperfect
reconstruction will result in artefacts and distortion which are not
present in the original subject - false additional information, and
jaggies fall into this category, they are not aliasing artefacts.

I didn't suggest that jaggies are aliasing artifacts.


No? I didn't suggest you did, however you did defend the suggestion
made by a third party that they were. Try reading your opening input
into this thread again and stop the obfuscation.

My claim is that the numeric
data contains various distortions of the subject, and while some may be
assignable to the input filtering (including those you mentioned), others
are assignable to the practical limitations of math operations, and
that these errors are inextricable.

And this is precisely where you part company with the very principles
of the Sampling Theorem, which is hardly surprising given your previous
statements indicating your total confusion of the topic!

Let me explain it one more time, finally. There are two filters, an
input (antialiasing) filter and an output (reconstruction) filter
between which is placed a sampling system. The performance of the
system is totally independent of whether the sampling system is actually
present or not providing that the filters are matched to the dimensions
of the sampling system. In short, it is impossible to determine whether
the information has been sampled or not simply by examining the output
of the reconstruction filter, because the sampling process itself does
not introduce any distortion or limitation of the signal at all.
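
As an illustration of that last sentence (a Python sketch with an arbitrary
band-limited test signal, using ideal band-limited interpolation as the
reconstruction filter; the numbers mean nothing beyond the demonstration):
the signal rebuilt from the samples matches the original to floating-point
precision, so the sampling step itself has added nothing.

import numpy as np

n = 64                                     # samples per period; Nyquist index is 32
t = np.arange(n) / n
samples = np.sin(2 * np.pi * 5 * t) + 0.4 * np.cos(2 * np.pi * 11 * t)  # band-limited: 11 < 32

# "Reconstruction filter": ideal band-limited interpolation via the zero-padded spectrum.
up = 8
spectrum = np.fft.rfft(samples)
rebuilt = np.fft.irfft(spectrum, n * up) * up

t_fine = np.arange(n * up) / (n * up)
truth = np.sin(2 * np.pi * 5 * t_fine) + 0.4 * np.cos(2 * np.pi * 11 * t_fine)
print(float(np.abs(rebuilt - truth).max()))  # ~1e-15: no distortion added by sampling itself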

Since you clearly do not understand this fundamental concept on which
the entire science of information technology is based, I suggest you
acquaint yourself in detail with its scientific proof, presented clearly
in Claude Shannon's 1948 paper "A Mathematical Theory of Communication"
and desist from arguing the case against something which is a proven
mathematical fact, as relevant to audio communication as it is to
scanning images.

Each sample
represents a measure of the subject at an infinitesimally small point
in space (or an infinitesimally small point in time).

As you present in another post, the issue relevant to the topic appears to
be:
"However, since the grain is random and smaller than the spot size, each
aliased grain only extends over a single pixel in the image - but this can
be many times larger than the actual grain on the original. "

IOW, the measure of the subject is not "infinitesimally small", and by
your own admission, some aspects of the subject (e.g. minimum grain sizes)
can be smaller than the sample size.

Indeed - and they would only reach the sampling system if the input
filter, in this case the optic MTF and the spot size and shape, permit
them to. With an adequate input filter, the grain is not sampled and
grain aliasing does not occur.

Snipped the rest of this tripe, you really haven't a clue what you are
talking about. Before posting anything else, read up on the topic -
specifically the texts I have suggested. They may not be the most
comprehensive, but they are the most readable explanations of the topic
for even a layman to understand.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
#23
April 5th 04, 05:05 AM
Neil Gould

Recently, Kennedy McEwen posted:

In article .net,
Neil Gould writes

From your own response to an earlier post:
"With a drum scanner the spot size (and it's shape) is the anti-alias
filter, and the only one that is needed. One of the most useful
features of most drum scanners is that the spot size can be adjusted
independently of the sampling density to obtain the optimum
trade-off between resolution and aliasing..."
^^^^^^^^^^^^^^^^^^^^^^^^^
In another post, you reported:
"then the photomultiplier in the scanner produces a signal which is
proportional to the average illumination over
the area of the spot."

Sounds (and looks) like distortion to me, given that the "area of the
spot" may have more than one illumination level, and the recorded
value is averaged. ;-)

Ok - I give up! I thought I was discussing the subject with someone
who understood a little of what they were talking about and merely
required some additional explanatory information. That comment
indicates that you simply do not have a clue what you are talking
about at all, since you are clearly incapable of understanding either
the action of a spatial filter or the difference between the action
of the filter and the action of sampling.

What *should* be clear to you is that I have repeatedly stated that I am
referring to real-world implementations, and not simply sampling theory. I
have repeatedly asked you to suggest a system (to make it clearer that it is
HARDWARE I'm talking about) capable of performing near the levels of
accuracy that sampling theories imply. Your response is to point once
again at -- usually the same -- theoretical sources, and you've NOT ONCE
indicated the existence of such hardware. If you think that such exists,
that is where we part in our perspectives.

Please learn the basics of the topic before wasting people's time with
such drivel.

In short, this has nothing to do with my capability to understand sampling
theory, and everything to do with what one can actually purchase and/or
use. I tried to emphasize my point by excerpting your own posts,
indicating the limitations typical of such systems. So, if it's drivel,
I'm afraid it didn't originate with me, sir.



If properly filtered prior to sampling then the sampled data is a
*perfect* representation of the filtered subject. In short, there may be
*less* information in the properly sampled and reconstructed subject
than in the original, but there can never be more.

Which only further reinforces my disagreement with your usage of
"identitiy". I've not heard the term used in such a way that it
includes a "less than" clause. ;-)

Try *reading*! The identity is with the filtered subject which,
having been filtered, is less than the subject!

Your statement, that the sampled data is a perfect representation of the
filtered subject, is essentially stating that the sampling algorithm has
not altered the post-filter data. On a theoretical level, we are in
agreement about this point; the input filter has presumably restricted the
information to fall within the capabilities of the sampling algorithm to
represent it accurately. More to the point, the only way that I dispute
this is in real-world implementations, e.g. math coprocessor variances
such as the rounding errors I wrote of. Surely, you don't insist that such
impacts are non-existent in real-world systems?
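
For what it's worth, the kind of rounding error meant here is easy to show
(a toy Python example; it says nothing about any particular coprocessor,
only that repeated finite-precision arithmetic does not land exactly on the
ideal value):

import numpy as np

x = np.float32(0.1)          # the closest 32-bit float to 0.1 is already slightly off
acc = np.float32(0.0)
for _ in range(10):
    acc += x                 # each addition is rounded to the nearest representable value
print(acc == np.float32(1.0))    # False: the accumulated total is not exactly 1.0
print(0.1 + 0.2 == 0.3)          # False even with 64-bit doubles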

More obfuscation and/or deliberate misrepresentation!

On whose part? Here is the exchange in question:

" Recently, Kennedy McEwen posted:
That is simply untrue although it is a very popular misconception -
NO reconstruction has taken place at the point that sampling
occurs.

Oh? Then, are you under the impression that the sample data and the
subject are in identity?"


" No, however the sampled data is in identity with the subject after
it has been correctly filtered at the input stage. "


It is clear in this exchange that you have relocated "the subject" from
being the pre-filter object I inquired about to a post-filtered
representation of that object. I am not now, nor have I ever been,
referring to "the subject" as a post-filtered representation of the
object. The distortion I spoke of is the difference between the subject
and the post-filter representation, and in other parts of the exchange,
included the possible accumulation of errors due to hardware
computational limitations. I've never claimed differently. So, where is
the "obfuscation and/or deliberate misrepresentation", beyond your claim
that it exists in this material?

However imperfect
reconstruction will result in artefacts and distortion which are not
present in the original subject - false additional information, and
jaggies fall into this category, they are not aliasing artefacts.

I didn't suggest that jaggies are aliasing artifacts.


No? I didn't suggest you did, however you did defend the suggestion
made by a third party that they were. Try reading your opening input
into this thread again and stop the obfuscation.

Perhaps you should re-read that opening input again, and stop trying to
misrepresent what I stated. Here it is, for your convenience:

Don wrote:
" It is
the source of the "jaggies" you see on straight edges in improperly
digitized imagery as well as other problems."


Your reply:
" No it isn't!

" Jaggies occur because of inadequately filtered reconstruction systems.
Not because of inadequate sampling! A jagged edge occurs because the
reconstruction of each sample introduces higher spatial frequencies
than the sampled image contains, for example by the use of sharp
square pixels to represent each sample in the image."

My reply:
"While I understand your complaint, I think it is too literal to be useful
in this context. Once a subject has been sampled, the "reconstruction" has
already taken place, and a distortion will be the inevitable result of any
further representation of those samples. This is true for either digital
or analog sampling, btw."

My opening statement, "...I understand your complaint...", indicates that I
am agreeing with you, but questioning the value of the distinction you are
making. Put plainly, you are referring strictly to the algorithm applied
to post-filtered data. To clarify my response, it is that by the time the
subject (not post-filtered representation of the subject) is sampled, it
is already distorted (by the input filter), and will only be further
distorted by the time of output in a real-world system.

And, directly to the issue of jaggies:
You stated:
" Aliasing only occurs on the
input to the sampling system - jaggies occur at the output."

My reply was:
"Whether one has "jaggies" or "lumpies" on output will depend on how
pixels
are represented, e.g. as squares or some other shape. However, that really
misses the relevance, doesn't it? That there is a distortion as a result
of sampling, and said distortion will have aliasing which exemplifies
the difficulty of drum scanning negatives, and that appears to be the
point of Don's original assertion. Our elaborations haven't disputed this
basic fact."

Clearly, I am agreeing with YOU that jaggies are output artifacts. My
response elaborates on some possible artifacts that _output devices_ may
introduce. There is NOTHING in my statement that merits your claim that
"...however you did defend the suggestion made by a third party that..."
(jaggies are aliasing artifacts). The remaining content merely questions
whether the points you are making address the OP's question at hand.

At best, my entry recognized the idea that a real-world system, e.g. a
scanner as a piece of hardware, not simply the sampling-stage mathematical
operation on post-filtered data, can present the end user with a file that
contains aliasing, and possibly to that end, Don was responding to the OP.
I was not then, and am not now, arguing about any aspect of sampling theory
independent of a real-world implementation through existing hardware. Make
no mistake that my choice is not because I don't understand, or have not
read the material.

Your insults aside, the fact is that we're talking apples and oranges. The
problem is, you fail to acknowledge this. If you wish to criticise the
accuracy or relevance of my comments, you'll do so not by pointing at
various sources of sampling theory, but by pointing at the hardware that
performs to the degree of accuracy that such theories imply. To distill
the point of my input to a single sentence: If such hardware existed, the
"trade offs" you spoke of would, in all likelihood, be unnecessary.

Regards,

--
Neil Gould
--------------------------------------
Terra Tu AV - www.terratu.com
Technical Graphics & Media


#24
April 5th 04, 07:50 AM
Kennedy McEwen

In article .net, Neil
Gould writes
Recently, Kennedy McEwen posted:

In article .net,
Neil Gould writes

From your own response to an earlier post:
"With a drum scanner the spot size (and it's shape) is the anti-alias
filter, and the only one that is needed. One of the most useful
features of most drum scanners is that the spot size can be adjusted
independently of the sampling density to obtain the optimum
trade-off between resolution and aliasing..."
^^^^^^^^^^^^^^^^^^^^^^^^^
In another post, you reported:
"then the photomultiplier in the scanner produces a signal which is
proportional to the average illumination over
the area of the spot."

Sounds (and looks) like distortion to me, given that the "area of the
spot" may have more than one illumination level, and the recorded
value is averaged. ;-)

Ok - I give up! I thought I was discussing the subject with someone
who understood a little of what they were talking about and merely
required some additional explanatory information. That comment
indicates that you simply do not have a clue what you are talking
about at all, since you are clearly incapable of understanding either
the action of a spatial filter or the difference between the action
of the filter and the action of sampling.

What *should* be clear to you is that I have repeatedly stated that I am
referring to real-world implementations, and not simply sampling theory.


Really - the issues raised in this and other posts do not relate to
specific hardware implementations, but to generic steps in the process.
In particular your insistence that sampling itself, not the filters,
introduces distortions which you have never specified. I have already
mentioned the practical limitations of positional accuracy in real
sampling systems, which are insignificant in modern systems, but you have
yet to divulge what these imaginary distortions are that you think exist in
real practical hardware at the sampling stage.

I
have repeatedly asked you to suggest a system (to make it clearer that is
HARDWARE I'm talking about) capable of performing near the levels of
accuracy that sampling theories implied. Your response is to point once
again at -- usually the same -- theoretical sources, and you've NOT ONCE
indicated the existence of such hardware.


I did, but you were clearly too lost in your own flawed mental model of
the process to notice that I had. I suggest you back up a few posts and
find it.

--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
#25
April 5th 04, 01:08 PM
Neil Gould

Recently, Kennedy McEwen posted:

Neil Gould writes
I
have repeatedly asked you to suggest a system (to make it clearer
that it is HARDWARE I'm talking about) capable of performing near the
levels of accuracy that sampling theories imply. Your response is
to point once again at -- usually the same -- theoretical sources,
and you've NOT ONCE indicated the existence of such hardware.


I did, but you were clearly too lost in your own flawed mental model
of the process to notice that I had. I suggest you back up a few
posts and find it.

While I did respond to the various analogies that others presented, I
don't recall presenting a mental model of the process. However, perhaps
you did answer the question, and it's possible that the post you are
referencing above is not available on my news server. The closest response
that I can locate is from our exchange on 4/5:

You wrote:
"The point is that he has already done this - most drum scanner
manufacturers produce equipment capable of the task, unfortunately many
operators are not up to driving them close to perfection - often because
they erroneously believe that such perfection is unobtainable in sampled
data, so why bother at all."

Is your intent to suggest that the only source of grain aliasing in the
resultant file is operator error? If so, the difficulty that I have is in
reconciling such a notion against your own excellent description on 4/1:

There, you wrote in part:
"Part of the skill of the drum scan operator is adjusting the spot or
aperture size to optimally discriminate between the grain and the image
detail for particular film types, however some film types are difficult,
if not impossible to achieve satisfactory discrimination."

It appears to imply that, regardless of operator skill, there will be
cases in which some artifacts are unavoidable. This explanation is one
that I understood to be the case, and directly experienced, at least
decades before this thread began. Perhaps you'll indulge me by clarifying
this, as it is the primary source of any "confusion" that I may have?

Regards,

--
Neil Gould
--------------------------------------
Terra Tu AV - www.terratu.com
Technical Graphics & Media


#26
April 5th 04, 07:08 PM
Kennedy McEwen

In article .net, Neil
Gould writes

While I did respond to the various analogies that others presented, I
don't recall presenting a mental model of the process.


Your repeated statements that sampling itself introduces distortion are
evidence of a flawed mental model of the process, one which is at direct
odds with the underlying principles of sampling in general.

You wrote:
"The point is that he has already done this - most drum scanner
manufacturers produce equipment capable of the task, unfortunately many
operators are not up to driving them close to perfection - often because
they erroneously believe that such perfection is unobtainable in sampled
data, so why bother at all."

Is your intent to suggest that the only source of grain aliasing in the
resultant file is operator error?


Not at all, many systems are designed in such a way that grain aliasing
cannot be avoided. For example, until recently, this was impossible to
avoid in almost all desktop scanners, and still is in many. Some drum
scanners apparently suffer from a similar problem, specifically that the
aperture shape and size and/or the sampling density cannot be increased
to a sufficient degree to prevent aliasing.

If so, the difficulty that I have is in
reconciling such a notion against your own excellent description on 4/1:

There, you wrote in part:
"Part of the skill of the drum scan operator is adjusting the spot or
aperture size to optimally discriminate between the grain and the image
detail for particular film types, however some film types are difficult,
if not impossible to achieve satisfactory discrimination."

It appears to imply that, regardless of operator skill, there will be
cases in which some artifacts are unavoidable. This explanation is one
that I understood to be the case, and directly experienced, at least
decades before this thread began. Perhaps you'll indulge me by clarifying
this, as it is the primary source of any "confusion" that I may have?

As mentioned above, there are cases where this cannot be avoided,
irrespective of the operator skill, simply due to hardware design
limitations. Also, as previously mentioned there are some films, almost
exclusively monochrome, high contrast, ultrathin emulsions, which are
capable of resolving image detail right up to the spatial frequencies at
which grain structure exists. Had you looked up some of the references
I cited you would have found that this type of case is specifically
addressed, where the image is effectively randomly sampled by the film
grain which is in turn regularly sampled by the scanner system. If
neither loss of image content nor grain aliasing are acceptable then
these films require sampling and input filtering beyond the resolution
of the film itself. The aperture, together with normal optical
diffraction limits, still acts as an input filter to the sampling
process, reducing the contrast of the grain to a minimum above the
Nyquist of the sampling density; however, the sampling density can easily
reach 12,000ppi or more (true, not interpolated). Few scanners are
capable of this; however, given that the film MTF has fallen
significantly before grain contrast becomes significant, it is still
perfectly feasible to identify an optimum, if less than perfect,
differentiation point in lesser scanners.
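
For a sense of scale, the arithmetic behind those sampling densities (a
trivial Python sketch; the ppi figures are the ones discussed here, the
rest is unit conversion, and it implies nothing about any particular
scanner):

for ppi in (2000, 4000, 12000, 16000):
    pitch_um = 25.4e3 / ppi            # distance between samples, in micrometres
    nyquist_cyc_mm = ppi / (2 * 25.4)  # highest spatial frequency the sampling can carry
    print(f"{ppi:>6} ppi: pitch {pitch_um:5.2f} um, Nyquist {nyquist_cyc_mm:6.1f} cycles/mm")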

Such issues are rarely a problem with the much thicker and multilayer
colour emulsions where resolution generally falls off well before grain
contrast becomes significant. Just as importantly the grain itself is
indistinct, having been bleached from the emulsion to leave soft edged
dye clouds, resulting in a slow rise in granular noise as a function of
spatial frequency. Thus the ability to differentiate between resolved
image content and grain is much enhanced and the failure to do so with
adequate equipment is invariably due to operator skill (or interest or
both) limitations.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
#27
April 5th 04, 11:46 PM
Neil Gould

Recently, Kennedy McEwen posted:

In article .net,
Neil Gould writes

While I did respond to the various analogies that others presented, I
don't recall presenting a mental model of the process.


Your repeated statements that sampling itself introduces distortion are
evidence of a flawed mental model of the process, one which is at
direct odds with the underlying principles of sampling in general.

I'm afraid that you are mistaken about my comments on sampling errors.
Rather than put full quotes here, I'll follow your lead and invite you to
read them again. I've never questioned the integrity of the theoretical
functions involved in sampling, and wrote so more than once. However, I
did state that any real-world implementation of sampling algorithms by
hardware will introduce at least rounding errors due to hardware
limitations. I would not call that a "mental model of the process", in
that it explicitly describes hardware functioning.

All of my other comments regarding distortions (errors, if you prefer)
involved the state of the information about the subject post-input filter,
the issue being GIGO at the sampling stage. Again, this is simply a
description of hardware functioning, and not a mental model of the
process. If you disagree with any of this, please let me know how and why.

You wrote:
"The point is that he has already done this - most drum scanner
manufacturers produce equipment capable of the task, unfortunately
many operators are not up to driving them close to perfection -
often because they erroneously believe that such perfection is
unobtainable in sampled data, so why bother at all."

Is your intent to suggest that the only source of grain aliasing
in the resultant file is operator error?


Not at all, many systems are designed in such a way that grain
aliasing cannot be avoided. For example, until recently, this was
impossible to avoid in almost all desktop scanners, and still is in
many. Some drum scanners apparently suffer from a similar problem,
specifically that the aperture shape and size and/or the sampling
density cannot be increased to a sufficient degree to prevent
aliasing.

Now, we're getting somewhere. My repeated request was for a reference to a
commonly available machine which has sufficiently high performance
capabilities to reliably avoid grain aliasing with all commonly available
films (obviously, for all subjects and without sacrificing detail or
introducing other artifacts). I am unaware of the existence of such a
scanner, and would appreciate make and model, or a pointer to the site. If
you've already done so, it isn't on my news service.

But, I suspect that we actually agree about this, as you have responded
with:

As mentioned above, there are cases where this cannot be avoided,
irrespective of the operator skill, simply due to hardware design
limitations. Also, as previously mentioned there are some films,
almost exclusively monochrome, high contrast, ultrathin emulsions,
which are capable of resolving image detail right up to the spatial
frequencies at which grain structure exists.


Which is the crux of the problem, is it not? And, it's not news to me.
;-)

Regards,

--
Neil Gould
--------------------------------------
Terra Tu AV - www.terratu.com
Technical Graphics & Media


#28
April 6th 04, 12:42 AM
Kennedy McEwen

In article .net, Neil
Gould writes

I'm afraid that you are mistaken about my comments on sampling errors.
Rather than put full quotes here, I'll follow your lead and invite you to
read them again. I've never questioned the integrity of the theoretical
functions involved in sampling, and wrote so more than once.


You wrote:
"Whether one has "jaggies" or "lumpies" on output will depend on how
pixels are represented, e.g. as squares or some other shape. However,
that really misses the relevance, doesn't it? That there is a distortion
as a result of sampling"

In your next post you then wrote:
"However, more to the point, distortion is inextricably inherent in the
sampled data"

And then wrote:
"My claim is that the numeric data contains various distortions of the
subject, and while some may be assignable to the input filtering
(including those you mentioned), others are assignable to the
practical limitations of math operations, and that these errors are
inextricable."

All of these statements, especially the last one, refer quite
specifically to the sampling process, not to the limitations of the
input filter which you specifically address separately in the latter
statement.


Now, we're getting somewhere. My repeated request was for a reference to a
commonly available machine which has sufficiently high performance
capabilities to reliably avoid grain aliasing with all commonly available
films (obviously, for all subjects and without sacrificing detail or
introducing other artifacts). I am unaware of the existence of such a
scanner, and would appreciate make and model, or a pointer to the site. If
you've already done so, it isn't on my news service.

Pick any of the currently available film/flatbed scanners and you will
have in your hands a scanner which does not alias grain.

Look at the Minolta 5400 for a higher resolution scanner which, with
the grain dissolver activated, does not alias grain.

Although not technically a drum scanner, the Imacon 848 provides most of
the related features and will cope with most photographic film without
grain aliasing or resolution loss.

Finally, it's expensive, but the Aztek Premier will do 16000ppi optical
sampling with independent aperture control to get everything off the
highest resolution monochrome film without introducing grain aliasing at
all.

But, I suspect that we actually agree about this, as you have responded
with:

As mentioned above, there are cases where this cannot be avoided,
irrespective of the operator skill, simply due to hardware design
limitations. Also, as previously mentioned there are some films,
almost exclusively monochrome, high contrast, ultrathin emulsions,
which are capable of resolving image detail right up to the spatial
frequencies at which grain structure exists.


Which is the crux of the problem, is it not?


Not really. Most, if not all, of the people on this forum are
interested in scanning images from colour film where such high
resolution requirements just don't exist.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
#29
April 6th 04, 02:22 PM

Paul Schmidt wrote:

What are the best films for scanning say one or two brands/types
in each of these categories:

B&W (what's best old tech, new tech, chromogenic)


I'm partial to the Fuji line. I've settled mostly on Neopan 400 and Neopan
1600. Some Acros 100. (Don't ask me why it's not Neopan 100. I have no
idea.)

Acros 100:

http://canid.com/sioux_falls/falls_park1.html


Neopan 400:

http://canid.com/johanna/butterfly1.html


Neopan 1600:

http://canid.com/johanna/balancing_act.html


--
Eric
http://canid.com/

#30
April 7th 04, 12:15 PM
Neil Gould

Recently, Kennedy McEwen posted:

In article .net,
Neil Gould writes

I'm afraid that you are mistaken about my comments on sampling
errors. Rather than put full quotes here, I'll follow your lead and
invite you to read them again. I've never questioned the integrity
of the theoretical functions involved in sampling, and wrote so more
than once.


You wrote:
"Whether one has "jaggies" or "lumpies" on output will depend on how
pixels are represented, e.g. as squares or some other shape. However,
that really misses the relevance, doesn't it? That there is a
distortion as a result of sampling"

"Jaggies or lumpies" clearly refers to the result post output-filter, as
identified in the first part of the first sentence by the words "on
output". The latter reference of distortion has to do with GIGO, and I
didn't go into detail at that point. I did make it plainly clear in
subsequent posts that I am referring to real-world implementations in
hardware.

In your next post you then wrote:
"However, more to the point, distortion is inextricably inherent in
the sampled data"

It should be obvious that this refers to the state of the information post
input-filter, as that comprises the content of "the sampled data". GIGO,
once again.

And then wrote:
"My claim is that the numeric data contains various distortions of the
subject, and while some may be assignable to the input filtering
(including those you mentioned), others are assignable to the
practical limitations of math operations, and that these errors are
inextricable."

All of these statements, especially the last one, refer quite
specifically to the sampling process, not to the limitations of the
input filter which you specifically address separately in the latter
statement.

Not really. That "the numeric data contains various distortions of the
subject" directly addresses the end result of all stages up to the point
where that data can be examined -- e.g. post sampling, and post storage.
It in no way isolates the sampling stage, as exemplified by "...some may
be assignable to the input filtering...", while the last portion refers to
the *implementation*, e.g. "practical limitations of math operations", or
put another way, real-world execution of those functions. Unless you have
access to some device the rest of the world has yet to see, this is an
accurate statement.

Now, we're getting somewhere. My repeated request was for a
reference to a commonly available machine which has sufficiently
high performance capabilities to reliably avoid grain aliasing with
all commonly available films (obviously, for all subjects and
without sacrificing detail or introducing other artifacts). I am
unaware of the existence of such a scanner, and would appreciate
make and model, or a pointer to the site. If you've already done so,
it isn't on my news service.

Pick any of the currently available film/flatbed scanners and you will
have in your hands a scanner which does not alias grain.

However, in the process, they compromise the image in other ways, and as
such do not meet the criteria that I've spelled out above, in "...for all
subjects and without sacrificing detail or introducing other artifacts".

Look at the Minolta 5400 for a higher resolution scanner which, with
the grain dissolver activated, does not alias grain.

Ditto.

Although not technically a drum scanner, the Imacon 848 provides most
of the related features and will cope with most photographic film
without grain aliasing or resolution loss.

"Most photographic film" is not "all commonly available film", which is
another of the criteria from above.

Finally, it's expensive, but the Aztek Premier will do 16000ppi optical
sampling with independent aperture control to get everything off the
highest resolution monochrome film without introducing grain aliasing
at all.

I'll look into this model. Thank you for the reference, even if I remain
skeptical that 16000 ppi is sufficiently high frequency to "get everything
off the highest resolution monochrome film" without any artifacts; at
least it's not flatbed territory or CCD-based.

Which is the crux of the problem, is it not?


Not really. Most, if not all, of the people on this forum are
interested in scanning images from colour film where such high
resolution requirements just don't exist.

Definitely not "all of the people on this forum", based on the number of
inquiries related to scanning monochrome negatives. You shouldn't have to
search very deeply to find a significant number of such requests.

Furthermore, there are color films that are also challenging to scan, such
as the Kodachromes. I've gotten much better results from optical
enlargements of those slides. I haven't used NPS 160, the film the OP is
working with, but allow for the possibility that this might be another such
film. Do you know for certain that it isn't?

Regards,

--
Neil Gould
--------------------------------------
Terra Tu AV - www.terratu.com
Technical Graphics & Media



 



