A Photography forum. PhotoBanter.com

18 megapixels on a 1.6x crop camera - Has Canon gone too far?



 
 
  #61  
Old September 7th 09, 03:01 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
mcdonaldREMOVE TO ACTUALLY REACH [email protected]

"mcdonaldREMOVE TO ACTUALLY REACH wrote:
I have an 8 MP Canon 30D. I can tell the difference between
the sharpness and general quality of a 4x6 print made from it,
using a wide angle lens (24 mm), and a print (same size) made
from a panorama assembly of images made with a 50 mm lens,
i.e. roughly 20 frames (one loses some coverage due to overlap).


By that I mean two prints showing an identical scene
(in this case, Yosemite Valley from Glacier Point.)


In both cases
the lens was used at f/9, its optimal aperture, and lateral
chromatic aberration correction was used in Canon DPP,
with identical amounts of sharpening applied.

The panorama is clearly better. The difference is even more
obvious as an 8x10.
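
(A back-of-envelope check on the pixel arithmetic here, as a Python
sketch; the 30% overlap figure is an assumption, not Doug's number:)

# Effective pixel count of the stitched panorama vs the single frame.
# Illustrative only; the overlap fraction is assumed.
frames = 20          # approx. number of 50 mm frames in the panorama
mp_per_frame = 8.2   # Canon 30D resolution in megapixels
overlap = 0.30       # assumed fraction of each frame lost to overlap

effective_mp = frames * mp_per_frame * (1 - overlap)
print(f"~{effective_mp:.0f} MP effective")   # ~115 MP vs 8 MP single frame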

Doug McDonald

  #62  
Old September 7th 09, 03:11 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
David Kilpatrick

Kennedy McEwen wrote:

Your problem is being compounded by your misuse, and possibly
misunderstanding, of the terms "resolve" and "resolution". These have
nothing whatsoever to do with the detection of single objects, as in
your examples so far, but in the visible separation of two or more
objects as being distinct from a single larger object.

In other words, being able to distinguish one piece of wire from another
that is 25um thicker has nothing whatsoever to do with resolution and,
as others have pointed out, there are many means of achieving that at
sub-resolution scales. However, being able to tell that two pieces of
0.25mm wire viewed end on and separated by only 25um are not a single
oval profiled piece of 0.525mm wire is resolution - you are visually
"resolving" the two separate pieces of 0.25mm wire.


But in this case I am not talking about resolution of spatial frequency.
The eye can officially resolve something like 150-200 cycles at the
typical (a bit close, I think) 12 inch viewing distance cited for
viewing 10 x 8 prints. I've got a steel rule engraved for one section in
100ths of an inch (a certified one we used to have for calibrating and
checking our imagesetters) and I can resolve that - 200 cycles, each
100th being represented by a black line and a metal space - clearly at
18 inches. Increase the viewing distance to 20 inches, and the line
pattern (as expected) looks like a grey tint.

20/20 vision acuity is 175 cycles at 12 inches, so 200 cycles at 18
inches is good acuity.

What interests me here is that although the eye can only resolve a
certain spatial frequency limit (also, I would guess, contrast
dependent) it can provide enough information for the brain to see
differences in the size of objects - hair, wire, carpet fibres, whatever
- which are less than 100th of an inch, at an even greater distance.

One of the concerns with digital photography is that this type of
difference is lost. Some fine detail falls beyond Nyquist in frequency
terms, or (moderated by low-pass filtration and the Bayer pattern) may
be recorded as a single pixel width. Therefore close values of fine detail,
which appear different to the naked eye, may be 'quantised' in a digital
image. This in turn removes dimensional clues to the 3D depth of the
2D image.
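
(A minimal sketch of that quantisation in Python: two dark lines of
slightly different sub-pixel width both come out one pixel wide after a
one-pixel blur standing in for the AA filter; the width difference
survives only as a difference in grey level:)

import numpy as np

# Toy 1-D sensor: a dark line on a white field, blurred by a 1-pixel
# box (crude stand-in for the low-pass filter), sampled once per pixel.
# The line is centred on a pixel for simplicity.
def sample_line(width_px, oversample=100):
    x = np.arange(-5, 5, 1 / oversample)                   # fine grid, pixel units
    scene = np.where(np.abs(x) < width_px / 2, 0.0, 1.0)   # dark line on white
    kernel = np.ones(oversample) / oversample              # 1-pixel box blur
    return np.convolve(scene, kernel, mode="same")[::oversample]

print(sample_line(0.6)[3:8])   # roughly [1, 1, 0.4, 1, 1] -- one grey pixel
print(sample_line(0.9)[3:8])   # roughly [1, 1, 0.1, 1, 1] -- same width, darker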

Greatly increased pixel counts may produce more realistic looking images
because they more accurately depict tiny differences - beyond the
spatial frequency limit - that the eye and brain can see.

David
  #63  
Old September 7th 09, 03:21 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
David Kilpatrick

David Kilpatrick wrote:

I can resolve that - 200 cycles, each
100th being represented by a black line and a metal space - clearly at
18 inches. Increase the viewing distance to 20 inches, and the line
pattern (as expected) looks like a grey tint.

20/20 vision acuity is 175 cycles at 12 inches, so 200 cycles at 18
inches is good acuity.



I meant 100 cycles, not 200! That's 200 lines. 20/20 vision is 350 lines.
Back to the old lines per mm versus line pairs (= cycles) confusion.

I started in lens testing when most results were still expressed in
lines per millimetre. Magazines changed, generally, to lppm or cycles in
the late 1970s (the optical industry had been there for two decades).

David
  #64  
Old September 7th 09, 03:25 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
Alan Browne

John A. wrote:
On Sun, 06 Sep 2009 17:35:59 -0400, Alan Browne
wrote:

Alfred Molon wrote:
In article , Alan Browne
says...

You claim it is right, you provide proof. You've only referenced an
opinion to date, for which the author expresses more doubt than you do!
I'm not claiming this is right, but you are claiming this is false.
Provide proof then.

I did provide an example by backing into the numbers: even a 600 dpi
print (dots of real recorded info) at 12x8" is a pretty fat 36 Mpix or
so, and if you believe anyone can see more detail than that, even up
close with sharp eyes, you are deluded. I doubt most people could see
differences in a monochrome print at 400 dpi, never mind colour.
Do you have any reference for that? Not that I'm questioning what you
write (I hear it all the time), I'm just asking if anybody has actually
tested that.

It is very pertinent to add that resolving line pairs on a chart is not
directly comparable to looking at an actual photograph.

DOF markings on lenses are typically set for an 8x10" print. This in
turn uses a CoC of a mere 6 or 8 lp/mm based on viewing at 10".

600 dpi is 11.8 lp/mm.

http://www.nikonlinks.com/unklbil/dof.htm
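
(The unit conversions behind those figures, as a Python sketch; the
12x8 inch size is from the 600 dpi example quoted above:)

MM_PER_INCH = 25.4

# Two dots (one dark, one light) make one line pair.
def dpi_to_lp_per_mm(dpi):
    return dpi / 2 / MM_PER_INCH

def print_megapixels(width_in, height_in, dpi):
    return width_in * height_in * dpi ** 2 / 1e6

print(dpi_to_lp_per_mm(600))         # 11.81 lp/mm, the figure above
print(print_megapixels(12, 8, 600))  # 34.56 -> the "fat 36 Mpix or so"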

The author stated that 30lp/mm prints are perceived as sharper than
15lp/mm by most people - this must be based on some empirical evidence,
which the author is not providing.

"which the author is not providing". Ah. You're beginning to get it.

Subjective balderdash is a better description.

In any case, if the human ear can only hear at most up to 20kHz, why do
you get better results when sampling at 176kHz than at 44kHz?

Why do you make the mistake of using an analogy of an even more
subjective (audio) experience when your basic premise needs so much work?

(MonsterCable is a rich company due to subjective doubt).


It has been noted elsewhere in the thread that people can detect
information from an image beyond that they can directly resolve. (Ref:
guitar strings.) As also stated in that branch, by the same person
IIRC, it's likely due to brightness. Astronomers have used that to
estimate the diameter of exoplanets they are unable to come anywhere
near resolving directly, by measuring the difference in light
intensity as the planets cross between us and their stars.
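
(That trick rests on the standard transit-depth relation: the fractional
dip in the star's brightness is roughly the square of the planet-to-star
radius ratio. A Python sketch with illustrative numbers:)

import math

# Transit photometry: dip depth ~ (Rp / Rs)^2, so the planet's radius
# falls out of a brightness measurement far below angular resolution.
def planet_radius_km(transit_depth, star_radius_km):
    return star_radius_km * math.sqrt(transit_depth)

R_SUN_KM = 696_000
print(planet_radius_km(0.0001, R_SUN_KM))   # 0.01% dip -> ~6960 km,
                                            # roughly Earth-sized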

My point is that perhaps the human eye can gather some information
about an image beyond that it can physically resolve in a photo as
well. Thus printing detail smaller than the eye can resolve may in
fact result in a better-looking, more subjectively natural-looking,
image.


I pointed that out earlier, e.g. the bend, colour, texture, etc. of a
string reveals a lot about it. But then the same applies in a print. You
don't need detail to suggest it.


Or perhaps not. But it does seem like the human visual system is
capable of more than just directly resolving edges and widths. Maybe
we don't give it enough credit.

And then, of course, there's the whole cropping and poster print thing
that always seems to get lost in these debates. People do more with
the pictures they take than just print the whole thing at 8x10.


I don't crop very often, actually.
  #65  
Old September 7th 09, 03:27 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
Alan Browne

Elliott Roper wrote:
In article , Alan Browne
wrote:

Elliott Roper wrote:
In article , Alan Browne
wrote:

Elliott Roper wrote:
Who gives a flying f&%$ what relationship that has to some
archeological artefact?
My archaeological artifacts include some very high end lenses that cover
a 44mm image circle, so the current "full frame" at 36x24 is just about
right for the long haul with these lenses. I'll put them up for sale in
2030 or so and in that time maybe 2 or 3 more cameras, I would expect.

A digital back for my Hassy would be nice too...
I don't seem to be making myself clear.

You can't make a blivet clear.


Thank you for that gratuitous insult.


Why is it an insult? Nobody can make a blivet clear and your proposal
resembles a blivet. Also see other definitions for blivet with exactly
the same result.


I guess it is an insult. I can attach no meaning to "making a blivet
clear". Do you mean my inability to filter out the solid matter from an
overflowing five pound bag of ****? If so, what relevance does it have?
I must have got under your skin with the Hassy willie waving dig.

However, it seems my problem is making something clear just to you.
It could be you have less than average reading skill. Do try harder.

I'm arguing that "35mm" is a backward looking nomenclature for digital
camera sensors.
I'm arguing it is made worse by describing cameras as fractions of 35mm.


Since you're so deep in the minority, it really doesn't matter.

EOD for me.



I'd like a straightforward naming scheme based on sensor dimensions.

Starting from an obsolete recording medium is helpful only to those
with experience of those old systems.

Asserting that you own a large number of lenses suitable for that
obsolete system is irrelevant.
If anything, the assertion shows you are uncomfortable with thinking
about sensor sizes in a rational way.
Oh wait! You *do* understand. Later you call them 44mm lenses. That is
a better name than 35mm. It means they create a 44mm diameter focused
image. So why persist in calling the lenses 35mm? Is it a comfort
blanket?

A 44mm lens is just the right size for a 24*36mm sensor. That is
obvious to anyone with a calculator and a bit of elementary maths.
It is also obvious that it would be OK for a smaller sensor. A bit
wasteful and subtending a narrower angle of view across it of course.
With that naming convention anyone can do the maths to see how wasteful
and what angle of view. But where does 35mm fit? In the past. That's
where. Who cares how wide the sprocket holes were? Who knows which
systems went landscape across the film and which along it?
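
(The "elementary maths" is just the sensor diagonal; a Python sketch,
with a function name of my own choosing:)

import math

# Minimum image-circle diameter a lens must project to cover a sensor.
def image_circle_mm(width_mm, height_mm):
    return math.hypot(width_mm, height_mm)

print(image_circle_mm(36, 24))    # ~43.3 mm: why a "44mm lens" suits 24*36
print(image_circle_mm(22.5, 15))  # ~27.0 mm: the 27mm figure mentioned below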

Using the camcorder sensor size naming mess is completely relevant to
this discussion. They too fell into the trap of describing their
sensors relative to obsolete hardware. Asserting that there are a lot
of 44mm lenses out there is no refutation of that relevance.

Finally, who is this "we" that does not care about small sensors in
fixed lens cameras? A decent sensor and lens naming scheme should take
account of them. Making a snobby remark about their quality is not
advancing your argument at all. As far as I know there are no cameras
sold with 1.5 or 1.6 sensors. I think you mean 1/1.5 and 1/1.6. If you
used a proper name like 15*22.5mm you would have less chance of getting
your fractions upside down. And you would know that a 27mm lens would
be adequate.

Yet you say that old 35mm naming convention isn't broken.

  #66  
Old September 7th 09, 03:41 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
Alan Browne

Kennedy McEwen wrote:
In article , David Kilpatrick
writes

I'm still trying to work this one out. If we are told the human eye
can only resolve X, and any of our cats' whiskers (not a clump, single
ones) are clearly visible from 12 feet away - and should in theory not
be resolved - something else must be happening.

And indeed it is. In width, the cat's whiskers are sub-resolution, but
in length they are not. Each point on the cat's whisker is detected due
to the contrast difference between it and the background and your brain
is then analysing these adjacent *unresolved* blur spots and identifying
them as a single object, a whisker.

Your problem is being compounded by your misuse, and possibly
misunderstanding, of the terms "resolve" and "resolution". These have
nothing whatsoever to do with the detection of single objects, as in
your examples so far, but in the visible separation of two or more
objects as being distinct from a single larger object.


Excellent description. This is like a radar with a 1.5 deg beam width
detecting a 10 cm diameter pole half a km away when the beam is 13
meters wide by that point. The pole appears to be nearly 26 m wide on
the polar plot as it "paints" from the leading edge of the beam until
the trailing edge... (unless specific "beam sharpening" algorithms are
used, which sharpen the plot). But it certainly is detected (perceived).
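
(The beam geometry in Python; reading the "26 m" as a display that
paints the whole footprint whenever any part of the beam returns an
echo -- an assumption about the plot, not a stated fact:)

import math

# Cross-range footprint of a 1.5 degree beam at 500 m.
beamwidth_deg = 1.5
range_m = 500.0

footprint_m = 2 * range_m * math.tan(math.radians(beamwidth_deg) / 2)
print(footprint_m)       # ~13.1 m -- the "13 meters wide" figure
print(2 * footprint_m)   # ~26 m -- smear if the display paints the full
                         # footprint for every echo from the pole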
  #67  
Old September 7th 09, 05:43 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
David Kilpatrick

Alan Browne wrote:

Excellent description. This is like a radar with a 1.5 deg beam width
detecting a 10 cm diameter pole half a km away when the beam is 13
meters wide by that point. The pole appears to be nearly 26 m wide on
the polar plot as it "paints" from the leading edge of the beam until
the trailing edge... (unless specific "beam sharpening" algorithms are
used which sharpens the plot). But it certainly is detected (perceived).


Alan, you've maybe provided the answer - in a different way.

The human eye doesn't have either sensor or lens image stabilisation!

I know my left eye flickers a bit now as I get older. Sometimes I can
notice it when tired. But our whole body/head as well as eyes are in
constant motion.

So, it's a bit like the sweep of the radar beam - the image on the
retina will NEVER be falling in a static position on those cones. It
will be travelling across/around just like the image you see through a
camera's live view with an unstabilised tele lens when you magnify it to focus.

We don't see that effect unless there is something wrong (drunk, ear
problems etc) but even when we look closely at one point our eyes are
dancing around it. Plus, there's a pair of them, effectively increasing
the theoretical resolution worked out from the cone density.

Vision is effectively not focusing a static image, like a camera on a
tripod. It is constantly scanning and rescanning across the detail,
measuring depth with stereoscopic vision. We appear to see a static
image but on the retina it is anything but static.

David
  #68  
Old September 7th 09, 05:55 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
David Kilpatrick

John A. wrote:


On the other hand, hexagonal sensors would have room for additional
circuitry in the corners. I wonder if hexagonal pixels could work too.
You could have RGB triads instead of the Bayer pattern.


Hexagonal doesn't have the spare room. Octagonal does, which is why Fuji
used it in the original SuperCCD design. The little 'square tiles' are
left in the corners, just like an octagonally tiled floor. Then later
they added mini pixels in the same space.

David
  #69  
Old September 7th 09, 08:05 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
Kennedy McEwen

In article , David Kilpatrick
writes
Kennedy McEwen wrote:

Your problem is being compounded by your misuse, and possibly
misunderstanding, of the terms "resolve" and "resolution". These have
nothing whatsoever to do with the detection of single objects, as in
your examples so far, but in the visible separation of two or more
objects as being distinct from a single larger object.
In other words, being able to distinguish one piece of wire from
another that is 25um thicker has nothing whatsoever to do with
resolution and, as others have pointed out, there are many means of
achieving that at sub-resolution scales. However, being able to tell
that two pieces of 0.25mm wire viewed end on and separated by only
25um are not a single oval profiled piece of 0.525mm wire is
resolution - you are visually "resolving" the two separate pieces of 0.25mm wire.


But in this case I am not talking about resolution of spatial
frequency.


There is no such thing as "resolution of spatial frequency" - that's
like talking about the "mass of a kilogram". Spatial frequency is a
unit of measure. Resolution is a binary function - yes or no.

The eye can officially resolve something like 150-200 cycles at the
typical (a bit close, I think) 12 inch viewing distance cited for
viewing 10 x 8 prints.


OK - accepting your caveat in the adjacent post that you meant lines,
not cycles.

I've got a steel rule engraved for one section in 100ths of an inch (a
certified one we used to have for calibrating and checking our
imagesetters) and I can resolve that - 200 cycles, each 100th being
represented by a black line and a metal space - clearly at 18 inches.


No, the 1/100th of an inch markings have a fundamental spatial frequency
of 100cycles per inch, not 200. So resolving them at 18inch distance
corresponds to a resolution of 1.8cy/mrad. [range / period / 1000].

Increase the viewing distance to 20 inches, and the line pattern (as
expected) looks like a grey tint.

So you can resolve 1.8cy/mrad but not 2cy/mrad under those conditions.
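
(Kennedy's range / period / 1000 rule, spelled out in Python; any
consistent length unit works:)

def cycles_per_mrad(range_len, period_len):
    # One milliradian subtends range/1000 at the target, so the cycles
    # packed into that subtense number (range / 1000) / period.
    return range_len / period_len / 1000

print(cycles_per_mrad(18, 0.01))   # 1.8 cy/mrad: 100/inch rule read at 18 in
print(cycles_per_mrad(20, 0.01))   # 2.0 cy/mrad: same rule greyed out at 20 in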

You will find that this changes under different conditions, as you note
later.

20/20 vision acuity is 175 cycles at 12 inches, so 200 cycles at 18
inches is good acuity.

Actually, 20/20 (2cy/mrad) is a bit better than you are achieving, which
is only 100cycles at 18inches, or 1.8cy/mrad. Nevertheless, 20/20 is
only an acceptable standard of acuity for normal function without the
need for corrective vision. It isn't a measure of good or excellent
acuity, it is just saying that you can resolve at 20ft what the average
person can resolve at 20ft. Some people do a lot better, especially in
their teens or 20's.

Typical literature figures for good eye resolution are around 1 cycle per
arc-minute and, since an arc-minute is 0.3mrad, that corresponds to
around 3.3cy/mrad - significantly better than 20/20 or the slightly
lower resolution that you are achieving.

What interests me here is that although the eye can only resolve a
certain spatial frequency limit (also, I would guess, contrast
dependent) it can provide enough information for the brain to see
differences in the size of objects - hair, wire, carpet fibres,
whatever - which are less than 100th of an inch, at an even greater
distance.

But the eye is using more than resolution to do this, some of which you
have already acknowledged such as different reflected intensity of the
larger object - in the same way that brighter stars look as if they are
larger, even though they are all sub-resolution.

One of the concerns with digital photography is that this type of
difference is lost. Some fine detail falls beyond Nyquist in frequency
terms, or (moderated by low-pass filtration and the Bayer pattern) may
be recorded as a single pixel width. Therefore close values of fine
detail, which appear different to the naked eye, may be 'quantised' in
a digital image. This in turn removes dimensional clues to the 3D depth
of the 2D image.


No. If the camera has the same limiting resolution as your eye then the
additional information which you are using to discriminate apparent size
by eye is also captured by the camera and, on reproduction of the
appropriate image, will be similarly interpreted by your eye/brain as a
difference in size.

Greatly increased pixel counts may produce more realistic looking images
because they more accurately depict tiny differences - beyond the
spatial frequency limit - that the eye and brain can see.

It might, but you will have to come up with something better than your
flawed arithmetic to prove that the accepted physics of the situation
has been wrong all along.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
  #70  
Old September 7th 09, 08:17 PM posted to rec.photo.digital.slr-systems,rec.photo.digital
Kennedy McEwen

In article , David Kilpatrick
writes

The human eye doesn't have either sensor or lens image stabilisation!

I know my left eye flickers a bit now as I get older. Sometimes I can
notice it when tired. But our whole body/head as well as eyes are in
constant motion.

So, it's a bit like the sweep of the radar beam - the image on the
retina will NEVER be falling in a static position on those cones. It
will be travelling across/around just like the image you see through
live view camera with an unstabilised tele lens if you magnify it to
focus.

That doesn't help the situation, in fact it degrades it - the motion
itself introduces a spatial frequency filter. Camera shake doesn't
increase the resolution of the sensor, it degrades it.
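
(A sketch of that filtering using the standard uniform-linear-motion
MTF, a sinc roll-off; the smear distances are illustrative assumptions:)

import numpy as np

# MTF of uniform linear motion blur: |sinc(f * d)|, d = smear distance.
# Motion only attenuates spatial frequencies; it never adds resolution.
def motion_mtf(freq_cy_per_mm, smear_mm):
    return np.abs(np.sinc(freq_cy_per_mm * smear_mm))   # np.sinc(x) = sin(pi*x)/(pi*x)

freqs = np.array([2.0, 6.0, 10.0])   # cycles/mm, illustrative
print(motion_mtf(freqs, 0.05))       # small smear: contrast barely touched
print(motion_mtf(freqs, 0.20))       # larger smear: fine detail collapses to ~0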

We don't see that effect unless there is something wrong (drunk, ear
problems etc) but even when we look closely at one point our eyes are
dancing around it. Plus, there's a pair of them, effectively increasing
the theoretical resolution worked out from the cone density.

I used to have a colleague, unfortunately now deceased, who had some
medical procedure which effectively froze the eye movement muscles for
about 3 or 4 hours during which, he assured me, he was completely blind.
Apparently the photoreceptors in the eye require a change in incident
light to work at all - hence the need for eye motion. Remember the line
in "Jurassic Park" about staying still so the T-rex couldn't see them -
not entirely true, but based on some fact.

Vision is effectively not focusing a static image, like a camera on a
tripod. It is constantly scanning and rescanning across the detail,
measuring depth with stereoscopic vision. We appear to see a static
image but on the retina it is anything but static.

Again, true, but it doesn't mean you can resolve more than the retina or
eye lens are capable of.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
 



