#71
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
John A. wrote:
> On Mon, 07 Sep 2009 08:22:35 -0500, "mcdonaldREMOVE TO ACTUALLY REACH wrote:
>> As far as round (circular) sensors, one answer there is obvious, in
>> addition to the fact that they are traditional: You can't tile a plane
>> with circles ... of useful shapes, only rectangles and hexagons need
>> apply.
>
> I believe he's talking about covering a round area with photosensors,
> not covering an area with round photosensors.

That's what I was referring to. Draw a bunch of circles, say 40 mm in diameter, on an 8 inch diameter silicon wafer. Fill those 40 mm circles - as close to the edge as possible - with little bitty photosensors, each say 5 microns across, in a standard Bayer array. The problem is that you are wasting expensive Si area between the circles. With standard rectangles there is no such waste.

Doug McDonald
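A rough back-of-the-envelope sketch, in Python, of the waste Doug describes. The 40 mm dice and 8 inch (200 mm) wafer are his figures; the hexagonal-packing model and the neglect of edge losses are assumptions made here purely for illustration.

    import math

    wafer_d = 200.0     # wafer diameter, mm
    die_d   = 40.0      # circular sensor diameter, mm

    wafer_area  = math.pi * (wafer_d / 2) ** 2
    circle_area = math.pi * (die_d / 2) ** 2

    # Circles packed hexagonally cover at most pi / (2*sqrt(3)) of the plane,
    # so roughly 9% of the wafer between the dice can never become photosites.
    hex_pack = math.pi / (2 * math.sqrt(3))
    print(f"best-case circle packing density: {hex_pack:.1%}")
    print(f"silicon lost between circular dice: {1 - hex_pack:.1%}")

    # Rectangles tile the plane with no gaps, so apart from the round wafer
    # edge (which hurts both layouts) nothing is lost between rectangular dice.
    n_circles = int(hex_pack * wafer_area / circle_area)   # ~22, edge ignored
    print(f"roughly {n_circles} such circular dice fit on one wafer")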
#72
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
David Kilpatrick wrote:
> Alan Browne wrote:
>> Excellent description.
>>
>> This is like a radar with a 1.5 deg beam width detecting a 10 cm
>> diameter pole half a km away when the beam is 13 meters wide by that
>> point. The pole appears to be nearly 26 m wide on the polar plot as it
>> "paints" from the leading edge of the beam until the trailing edge...
>> (unless specific "beam sharpening" algorithms are used which sharpen
>> the plot). But it certainly is detected (perceived).
>
> Alan, you've maybe provided the answer - in a different way. The human
> eye doesn't have either sensor or lens image stabilisation! I know my
> left eye flickers a bit now as I get older; sometimes I can notice it
> when tired. But our whole body and head, as well as our eyes, are in
> constant motion. So it's a bit like the sweep of the radar beam - the
> image on the retina will NEVER fall in a static position on those
> cones. It will be travelling across and around, just like the image you
> see through live view on a camera with an unstabilised tele lens when
> you magnify it to focus. We don't see that effect unless there is
> something wrong (drunk, ear problems etc), but even when we look
> closely at one point our eyes are dancing around it. Plus, there's a
> pair of them, effectively increasing the theoretical resolution worked
> out from the cone density.
>
> Vision is effectively not focusing a static image, like a camera on a
> tripod. It is constantly scanning and rescanning across the detail,
> measuring depth with stereoscopic vision. We appear to see a static
> image but on the retina it is anything but static.

Well, on top of all that, there is also what I mentioned earlier. You might recognize a particular guitar string at a distance for other reasons: its curl, its colour/contrast, etc. IOW, where an "image" has particular information, our brains interpret context to fill in information about what we think we see.
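A quick check of Alan's radar figures - a sketch only; the two-beamwidth smear is an assumption about how the display paints each detection, not something stated in the post.

    import math

    # Figures from Alan's post: 1.5 deg beam, 500 m range, 10 cm pole.
    beamwidth_deg = 1.5
    range_m       = 500.0
    pole_m        = 0.10

    # Linear width of the beam at that range.
    beam_m = 2 * range_m * math.tan(math.radians(beamwidth_deg / 2))
    print(f"beam width at {range_m:.0f} m: {beam_m:.1f} m")             # ~13.1 m

    # The pole returns echoes for the whole rotation during which any part of
    # the beam illuminates it.  If the display also draws each echo a full
    # beamwidth wide (an assumption about the display, not radar physics),
    # the painted arc approaches two beamwidths.
    print(f"painted arc, one beamwidth : {beam_m + pole_m:.1f} m")      # ~13.2 m
    print(f"painted arc, two beamwidths: {2 * beam_m + pole_m:.1f} m")  # ~26.3 m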
#73
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
Alan Browne wrote:
> IOW, where an "image" has particular information, our brains interpret
> context to fill in information about what we think we see.

Prof Harald Mante did some excellent demonstrations in the 1970s to show that if you took a photo with small parts of items included in the frame, but most of the objects cut off, students could complete the photo by drawing the completed items outside the shot. He thought the brain also completed these objects/subjects, giving things 'intruding' into a photo more significance than just the bit shown.

This puts a whole new slant on the 1.6X crop factor - just let the imagination fill in the missing full frame!

David
#74
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
David Kilpatrick wrote:
> Alan Browne wrote:
>> IOW, where an "image" has particular information, our brains interpret
>> context to fill in information about what we think we see.
>
> Prof Harald Mante did some excellent demonstrations in the 1970s to show
> that if you took a photo with small parts of items included in the
> frame, but most of the objects cut off, students could complete the
> photo by drawing the completed items outside the shot. He thought the
> brain also completed these objects/subjects, giving things 'intruding'
> into a photo more significance than just the bit shown.

There used to be a contest in a local paper where a small part of a well known personality was shown and you had to indicate who it was. Even though I had only seen a particular French news anchor a couple times, and usually from the front, seeing a close crop of the corner of his mouth from the side was unmistakable.

> This puts a whole new slant on the 1.6X crop factor - just let the
> imagination fill in the missing full frame!

eh ... no thanks.
#75
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
David Kilpatrick wrote:
> Vision is effectively not focusing a static image, like a camera on a
> tripod. It is constantly scanning and rescanning across the detail,

And it must. If you were to project a static image onto the retina, you'd see ... nothing. The retina adjusts permanently and locally to the input. The brain then consolidates the images (see also saccadic eye movement).

> measuring depth with stereoscopic vision.

Usually vastly overrated, otherwise the illusion of depth wouldn't work with 2D films.

> We appear to see a static image but on the retina it is anything but
> static.

If you *test* what resolution one can observe, this doesn't matter.

-Wolfgang
#76
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
["Followup-To:" header set to rec.photo.digital.]
John A wrote:

> It has been noted elsewhere in the thread that people can detect
> information from an image beyond that which they can directly resolve.

So can a camera. Put a black hair on a white background and choose a distance and focal length so that on average 1/4 of a pixel is occupied by the hair. Will you see it? Of course - the pixel only gets 75% of the light. Will you resolve it? No chance. Can you get the hair's thickness? Yes, you can estimate it from how dark the hair-pixels appear. The more sensitive you are to luminance, the better you can estimate the thickness, based on your knowledge of how reflective black hair is. You might be fooled by a colourless greyish hair that's thicker, though.

> My point is that perhaps the human eye can gather some information about
> an image beyond what it can physically resolve in a photo as well.

Sure it can, but can it do any better than with a photo that doesn't exceed the resolution of the eye and thus contains the same information predigested? I don't think so.

> Thus printing detail smaller than the eye can resolve may in fact result
> in a better-looking, more subjectively natural-looking, image.

From "perhaps" to "thus" in one sentence. Perhaps you are mad, thus you should be locked away ... no judge would agree to that reasoning.

Feel free to make the test. Print 2 images, one at high resolution and one *properly* downsampled to a lower resolution, then blind-test by having random people look at them from distances large enough that they absolutely cannot resolve the first image. If your theory is true, ...

> Obvious, isn't it? Or perhaps not. But it does seem like the human visual
> system is capable of more than just directly resolving edges and widths.

Nobody disagreed, last time I looked.

> And then, of course, there's the whole cropping and poster print thing
> that always seems to get lost in these debates. People do more with the
> pictures they take than just print the whole thing at 8x10.

Posters get printed at, what, 150 dpi at most? Pardon me, but isn't that a bit low? Or are they just to be looked at from a distance?

-Wolfgang
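A toy numerical version of the hair example - illustrative only. The 5% hair reflectance and the linear mixing model are assumptions, and the 5 micron pixel pitch is borrowed from earlier in the thread.

    # Linear-light mixing model: a hair covering a fraction f of a pixel on a
    # white background.

    def pixel_value(cover_fraction, hair_reflectance=0.05, background=1.0):
        """Recorded luminance of a pixel partially covered by the hair."""
        return (1 - cover_fraction) * background + cover_fraction * hair_reflectance

    def estimate_cover(value, hair_reflectance=0.05, background=1.0):
        """Invert the mixing model: how much of the pixel did the hair cover?"""
        return (background - value) / (background - hair_reflectance)

    pixel_pitch_um = 5.0      # 5 micron photosites, as mentioned earlier
    true_cover     = 0.25     # the hair occupies 1/4 of the pixel width

    v = pixel_value(true_cover)
    print(f"recorded pixel value: {v:.3f} (white = 1.000)")
    print(f"estimated hair width at the sensor: "
          f"{estimate_cover(v) * pixel_pitch_um:.2f} um "
          f"(true: {true_cover * pixel_pitch_um:.2f} um)")
    # As noted above, the estimate depends on knowing the hair's reflectance:
    # a thicker but lighter grey hair can produce the same pixel value.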
#77
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
["Followup-To:" header set to rec.photo.digital.]
"mcdonaldREMOVE TO ACTUALLY REACH wrote: I have an 8 MP Canon 30D. I can tell the difference between the sharpness and general quality of a 4x6 print made from it, using a wide angle lens (24 mm), and a print (same size) made from a panorama assembly of images made with a 50 mm lens, i.e. roughly 20 (one loses some due to overlap). In both cases the lens was used at f/9, its optimal aperture, and lateral chromatic aberration correction was used in Canon DPP, and identical amounts of sharpening. I see. So you can tell the difference between an 8 MPix and an 160 MPix image on a 3.8 MPix print (4x6 inch at 400(!) ppi). Or are you telling us you have a 2582 ppi (not dpi) printer stashed away somewhere? What you are seeing is bad downsampling (or vignetting or somesuch). And what is "identical amounts of sharpening" when the same radius of x pixels has a completely different impact (and thus sharpening effect) on the 3.8 MPix resulting image? -Wolfgang |
#78
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
"mcdonaldREMOVE TO ACTUALLY REACH wrote:
> As far as round (circular) sensors, one answer there is obvious, in
> addition to the fact that they are traditional: You can't tile a plane
> with circles ... of useful shapes, only rectangles and hexagons need
> apply.

You can tile a plane with circles - perfectly, even - given that wafers are round, in the special case of N=1. Now all the old wafer systems of only 20 cm or less in diameter get a new life, and we get round sensors of 15 or 20 cm diameter. And since we don't really need the ultra-dense circuitry in the first place (nice as it may be --- we are going for *large* photosites this time around), we can reuse all the old, cast-off gear no longer used by the computer chip makers.

Now we just need some larger bodies and lenses with, say, 20+ cm image circles. Designing the mirrors and shutters for DSLRs may be fun, too --- I suggest two counter-rotating disks as the mirror, to eliminate mirror slap and vibration --- huge, but workable --- and possibly the same design for the shutter. Or maybe a central shutter after all. And Oly can drop the mirror and offer EVIL models with much thinner bodies, to avoid the usual wide-angle retrofocus design problem. Now we only need to hire someone to carry the new cameras and lenses for us, and we shall all be happy.

Alternatively, one could accept some wastage when tiling up circular sensors, attach the camera CPUs directly to the sensor in the 'wasted' space, and fill out the rest as well as possible with smaller, different chips. It's not like camera chips are going to shrink anyway.

-Wolfgang
#79
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
Elliott Roper wrote:
> I'm arguing that "35mm" is a backward looking nomenclature for digital
> camera sensors.

So what? Your name is a backward looking nomenclature for you, unless it's gallows bird or jail bird ...

> I'm arguing it is made worse by describing cameras as fractions of 35mm.

Why would something that's factual, but not bad, be made worse by something that's factual and not bad either?

> I'd like a straightforward naming scheme based on sensor dimensions.

Fractions of 35mm frames isn't bad, then.

> Starting from an obsolete recording medium is helpful only to those with
> experience of those old systems.

Since there are exactly *absolutely ZERO* people with experience in your new system, changing the system would cause harm without any advantage to counter it at all.

> Asserting that you own a large number of lenses suitable for that
> obsolete system is irrelevant.

Asserting that your new system is in any way better is vastly more irrelevant, since the buying power in that assertion is zero, whereas lenses will be a solid investment.

> If anything, the assertion shows you are uncomfortable with thinking
> about sensor sizes in a rational way.

Rational? Like ... fractions? Oh wait! You *do* understand.

> Later you call them 44mm lenses. That is a better name than 35mm.

Better in what regard? Some artificial measurement you want to push?

> It means they create a 44mm diameter focused image. So why persist in
> calling the lenses 35mm? Is it a comfort blanket?

Why do you persist in pushing new names *no one* can relate to? Do you want everyone to feel like you do, left out?

> A 44mm lens is just the right size for a 24*36mm sensor. That is obvious
> to anyone with a calculator and a bit of elementary maths.

Ah, so you do lens choosing with calculators and math, but are completely unable to handle fractions? Dear me!

> It is also obvious that it would be OK for a smaller sensor.

Try adapting it to your usual point'n'shoot and show us how OK that is.

> With that naming convention anyone can do the maths to see how wasteful
> it is and what the angle of view is.

But you cannot do fractions?

> But where does 35mm fit? In the past.

My, are you ever so bitter.

> Using the camcorder sensor size naming mess is completely relevant to
> this discussion. They too fell into the trap of describing their sensors
> relative to obsolete hardware.

Well, you would describe sensors on thin air instead. Better to offend everybody than to offend only some, a very good plan indeed.

> Asserting that there are a lot of 44mm lenses out there is no refutation
> of that relevance.

The relevance is that the 35mm sensor is going to stay around for a *long*, *long* time.

> Finally, who is this "we" that does not care about small sensors in
> fixed lens cameras?

Most everybody but you.

> A decent sensor and lens naming scheme should take account of them.

Yes, please make a new naming scheme for all the focal lengths, the world has waited for that for a long time. Call the lengths things like "square root of 3/4th froob" and "9/7th quux cubed". That surely will offend everyone and thus be ideal.

> [If you] used a proper name like 15*22.5mm you would have less chance of
> getting your fractions upside down. And you would know that a 27mm lens
> would be adequate.

How would *you*, who isn't even able to deal with fractions, know about 27mm? Calculator?

> Yet you say that old 35mm naming convention isn't broken.

It works pretty well, unlike your scheme. Bet you had to look up 15x22.5mm.

-Wolfgang
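For reference, the 44 mm and 27 mm figures being argued about are simply the frame diagonals - a quick sketch:

    import math

    def image_circle(width_mm, height_mm):
        """Minimum image-circle diameter needed to cover a rectangular frame."""
        return math.hypot(width_mm, height_mm)

    ff   = image_circle(24, 36)      # full frame
    apsc = image_circle(15, 22.5)    # 1.6x crop sensor

    print(f"24 x 36 mm frame needs a {ff:.1f} mm image circle")     # ~43.3 -> '44 mm'
    print(f"15 x 22.5 mm frame needs a {apsc:.1f} mm image circle") # ~27.0 -> '27 mm'
    print(f"ratio of the two diagonals: {ff / apsc:.2f}")           # ~1.6, the crop factor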
#80
18 megapixels on a 1.6x crop camera - Has Canon gone too far?
In rec.photo.digital David Kilpatrick wrote:
> Alan Browne wrote:
>> Excellent description.
>>
>> This is like a radar with a 1.5 deg beam width detecting a 10 cm
>> diameter pole half a km away when the beam is 13 meters wide by that
>> point. The pole appears to be nearly 26 m wide on the polar plot as it
>> "paints" from the leading edge of the beam until the trailing edge...
>> (unless specific "beam sharpening" algorithms are used which sharpen
>> the plot). But it certainly is detected (perceived).
>
> Alan, you've maybe provided the answer - in a different way. The human
> eye doesn't have either sensor or lens image stabilisation! I know my
> left eye flickers a bit now as I get older; sometimes I can notice it
> when tired. But our whole body and head, as well as our eyes, are in
> constant motion. So it's a bit like the sweep of the radar beam - the
> image on the retina will NEVER fall in a static position on those
> cones. It will be travelling across and around, just like the image you
> see through live view on a camera with an unstabilised tele lens when
> you magnify it to focus. We don't see that effect unless there is
> something wrong (drunk, ear problems etc), but even when we look
> closely at one point our eyes are dancing around it. Plus, there's a
> pair of them, effectively increasing the theoretical resolution worked
> out from the cone density.
>
> Vision is effectively not focusing a static image, like a camera on a
> tripod. It is constantly scanning and rescanning across the detail,
> measuring depth with stereoscopic vision. We appear to see a static
> image but on the retina it is anything but static.

Not quite. It's true that the image we "see" is constructed in the retinal map areas of the brain from many different snapshots taken as the eye dances around, and it includes a lot of pretty heavy (in photographic terms) post-processing to do such things as erase the constant shadows of the vascular and nervous twiggery that runs over the retina, fill in the blind spot, fill in strong expectations based on slight evidence, etc. But the "dancing around" is done very jerkily, by very fast saccadic jumps during which the eye is blinded, followed by stationary intervals during which the image is recorded.

It's also not true that the eye doesn't have image stabilisation. It has two different kinds. The first is the learned stabilisation of keeping the eye fixed on the point of attention despite movement of the head or body. That allows you to keep your eye on prey you're chasing down, or on the fists and weapons of an enemy whose blows you're dodging. The second is distinguishing between movement of your eyes and movement of the world. That's why the world doesn't seem to move when you move your head, walk, or move your eyes, but does seem to move if you poke your eyeball. And it is sometimes fooled, as when you think your train has started moving when it's actually the other train you're looking at that is moving.

One of the reasons so much care is taken to "snap" the eye's images when the image is stabilised is that a very low-level, very fast-reacting stage of visual processing in the retina and early brain retinal maps is exquisitely sensitive to small movements anywhere in the field of view. That's how we spot lurking animals which might be dinner, or which might think we were dinner. That sensitivity to movement in an otherwise stationary image would be lost if so much care to stabilise the image weren't taken.

An interesting demonstration of this is an experiment that can be done with a computer screen of text. We think we can see all the text, some of it well enough focussed to read. But that's an impression stitched up like a panorama from constant flickering saccadic snaps. If the viewer's head is locked down and the gaze direction monitored, you can write software which scrambles the text on the screen except for the few words the eye is pointing at. The scrambling is done during the saccadic jumps, and the screen is stable while the eye is stable and seeing. The viewer thinks he's looking at a completely clear, stable screen of legible text - but nobody else can read any of the flickering scrambled mess!

--
Chris Malcolm
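A minimal sketch of how such a gaze-contingent display might be driven. The eye-tracker calls in_saccade(), get_gaze() and word_index_at() are hypothetical placeholders, not a real tracker API; real experiments use whatever interface the tracking hardware provides.

    import random

    WINDOW_RADIUS = 2      # words either side of fixation left readable

    def scramble(word):
        """Shuffle the letters of a word so it is unreadable at a glance."""
        letters = list(word)
        random.shuffle(letters)
        return "".join(letters)

    def render_frame(words, fixation_index):
        """Words as displayed: readable near fixation, scrambled everywhere else."""
        return [w if abs(i - fixation_index) <= WINDOW_RADIUS else scramble(w)
                for i, w in enumerate(words)]

    words = "we think we can see all of the text on the screen at once".split()
    print(" ".join(render_frame(words, fixation_index=5)))

    # Event-loop sketch: the screen is only rewritten while the eye is
    # mid-saccade (and therefore effectively blind), so the viewer never
    # notices the change.  The tracker calls below are placeholders.
    #
    # while experiment_running:
    #     if in_saccade():
    #         display(render_frame(words, word_index_at(get_gaze())))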