#21
Hasselblad's answer to the Foveon
On Tue, 24 Nov 2009 12:08:11 -0600, Mr. Commentary wrote: On Tue, 24 Nov 2009 10:16:04 -0500, Alan Browne wrote: John A. wrote: On Tue, 24 Nov 2009 09:21:42 -0500, Alan Browne wrote: John A. wrote: On Mon, 23 Nov 2009 17:30:17 -0500, Alan Browne wrote: John A. wrote: On Mon, 23 Nov 2009 16:07:57 -0500, Alan Browne wrote: John A. wrote: On Sun, 22 Nov 2009 20:48:56 -0500, Alan Browne wrote: John A. wrote: On Sun, 22 Nov 2009 15:04:01 -0800 (PST), RichA wrote: Alan Browne wrote:

http://www.dpreview.com/news/0911/09...ladh3d50ms.asp For still photography, every pixel is shot with each colour filter by mechanically moving the filter array by 1 pixel between shots. 4 shots later you have 50 Mpix with true RGB for each pixel. At that, it is 16 bits/colour. No demosaicing means more accurate colour at each pixel and sharper images. For repros (of paintings), product, etc. this will be phenomenal. (At 1.4s per capture, an image will take nearly 6 seconds to take... better have a very sturdy tripod). The camera can also be used in normal mosaic mode.

Any tripod that can hold a camera steady from shot to shot with less than say a 1/4 micron shift would have to be one hell of a tripod. http://www.pbase.com/andersonrm/image/101130219

I figure any tripod good enough for that many megapixels and a 6 second single exposure can do the job. It's not like you're going to be using the shutter release button on the camera body, after all. Plus, the shift tolerance would be greater than that, and vary with the length of the lens. It would be particularly high for wide angle shots. Remember: it's the amount of shift needed to make a part of the scene shift one pixel, not the width of the pixel on the sensor.

If I understand it correctly, the entire filter array is moved vertically or horizontally by 1 pixel height/width for each of the 4 shots. See the Hasselblad site. There is a graphic.

Exactly. Rich's concern is that the camera might shift between shots, throwing off the registration.

I also meant to state that the filter array, indeed, has to shift by a whole pixel size - that is to say move the center of one pixel to the center of the next. I'd expect the position tolerance to be on the order of less than 5 - 10% (and that there is sufficient anti-leak masking).

From what I gather, it shifts the whole sensor, not just the filter.

Whichever. One relative to the other in any case.

No, the sensor+filter array all together. You lose a row and a column, but what the hey?

Not sure what you mean. If they both move, you get nothing.

You get the image projected by the lens shifted one pixel each shot. (Or rather the sensor shifted around in the projected light.) So each pixel-sized spot of light is captured in turn by different colored bayer-array sensors.

You're not explaining anything clearly. There is no need to move both. The point of the exercise is to expose each sensor to a different color filtered light at least once. By moving the colour filter array over the sensor -or- moving the sensor under the filter array that is achieved. There is no need to move both, and it seems to me most logical to move only the colour filter array as it is presumably lighter. However, per the Hasselblad brochure: "High precision piezo motors control movements of the sensor in one pixel increments. By combining four shots, each offset by one pixel, the true colours, Red, Green and Blue of each point are obtained." So, only one is moved and it is the sensor. For registration purposes this means the measured array needs to be shifted numerically to register properly.

Hah! I see even explaining it to him didn't help. Perhaps he's all out of troll's brain-cells. You used them both up. Now go rest your little head.
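The four-shot scheme being argued over (move the sensor+CFA one pixel per shot, then numerically register and combine) can be simulated end to end. This is a toy numpy sketch, not Hasselblad's processing: the RGGB layout, the one-pixel square walk of offsets, and the edge handling are illustrative assumptions.

```python
import numpy as np

# Toy scene: the "true" RGB value at each point of the image plane.
rng = np.random.default_rng(0)
H, W = 8, 8
scene = rng.integers(0, 256, size=(H, W, 3))

# RGGB colour filter array, fixed to the sensor: channel index
# (0=R, 1=G, 2=B) measured by sensor pixel (r, c). Assumed layout.
def cfa_channel(r, c):
    if r % 2 == 0:
        return 0 if c % 2 == 0 else 1
    return 1 if c % 2 == 0 else 2

# Sensor (and CFA, which moves with it) offsets for the four shots:
# a one-pixel square walk, per the Hasselblad diagram.
offsets = [(0, 0), (0, 1), (1, 1), (1, 0)]

def capture(dy, dx):
    """One shot with the sensor displaced by (dy, dx) in the image plane."""
    shot = np.zeros((H, W), dtype=scene.dtype)
    for r in range(H):
        for c in range(W):
            y, x = r + dy, c + dx      # image point now over sensor pixel (r, c)
            if y < H and x < W:        # a row and a column fall off the edge
                shot[r, c] = scene[y, x, cfa_channel(r, c)]
    return shot

shots = [capture(dy, dx) for dy, dx in offsets]

# Registration + combination: for each image point, find the sensor pixel
# that saw it in each shot, and take the channel that pixel's filter measured.
recon = np.zeros_like(scene)
for (dy, dx), shot in zip(offsets, shots):
    for y in range(H):
        for x in range(W):
            r, c = y - dy, x - dx
            if r >= 0 and c >= 0:
                recon[y, x, cfa_channel(r, c)] = shot[r, c]

# Away from the lost edge row/column, every pixel has true R, G and B.
assert np.array_equal(recon[1:, 1:], scene[1:, 1:])
print("true RGB recovered at every interior pixel")
```

The final assert is the point both sides keep circling: after numerical registration, every interior image point has been measured through R, G and B filters, so no demosaicing is needed.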
#22
Hasselblad's answer to the Foveon
John A. wrote:
On Tue, 24 Nov 2009 12:24:49 -0500, Alan Browne wrote: David J. Littleboy wrote: "Alan Browne" wrote:

You get the image projected by the lens shifted one pixel each shot. (Or rather the sensor shifted around in the projected light.) So each pixel-sized spot of light is captured in turn by different colored bayer-array sensors.

You're not explaining anything clearly. There is no need to move both.

Sure there is: it's physically impossible to move the filter array independently of the sensor.

The point of the exercise is to expose each sensor to a different color filtered light at least once.

No, it's not. The point is to use a _different_ physical pixel to record each of R, G, and B at each _location_ in the image plane.

You better go read the Hasselblad brochure, for that is precisely what they are doing: moving the sensor wrt the filter array. That it takes nearly 6 seconds to take one photo (4 images) is another matter. (This has to do with the very slow frame rate of the high Mpix Hassy's, about 1.6 s per image).

I've seen it. They're moving the whole array. Otherwise each spot in the image would still be captured with a Bayer array pattern that does not achieve the desired effect. The filters *have to* move for this scheme to work. Moving the sensor but not the filters does nothing to achieve the desired effect of R+G+B capture at every pixel. They *are* moving the sensor so that *must* include the filters. You don't seem to get that _relative_ to one another, one is moving, otherwise it's damned hard to get a different color over a pixel location. Also it would be needlessly complicated on an engineering level to do it any other way. By moving the whole sensor+filter assembly they avoid a radical change in sensor+filter fabrication, sensor+filter registration issues, physical wear, etc. They might use existing IS hardware if that's precise enough. They might add special actuators designed to simply and precisely move the sensor one exact pixel width/height at a time. Either way they move it, it works, and very simply.

They use piezoelectric actuators. Such can be made to move by very tiny (and accurate) movements (as is obviously required here).
#23
Hasselblad's answer to the Foveon
"Alan Browne" wrote: John A. wrote:

I've seen it. They're moving the whole array. Otherwise each spot in the image would still be captured with a Bayer array pattern that does not achieve the desired effect. The filters *have to* move for this scheme to work. Moving the sensor but not the filters does nothing to achieve the desired effect of R+G+B capture at every pixel.

They *are* moving the sensor so that *must* include the filters. You don't seem to get that _relative_ to one another, one is moving, otherwise it's damned hard to get a different color over a pixel location.

It's easy: they just shift the data. Assume pixel 759,759 in the sensor is R. Shift the whole sensor shebang one pixel position left, and now pixel 759,760 in the sensor (B) is measuring the same point in the image. Shift vertically, and pixel 760,760 in the sensor (G) is measuring the same point in the image. Now construct pixel 759,759 in the data from pixels 759,759, 759,760, and 760,760. Pixel 759,759 in the data now has RGB data from the same point in the image plane.

-- David J. Littleboy Tokyo, Japan
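Littleboy's index bookkeeping can be written out directly. This is a minimal sketch of the coordinate arithmetic only; the sign convention (a leftward sensor shift puts the next column's pixel over the same image point, a vertical shift the next row's) is an assumption chosen to match his numbers.

```python
def sensor_pixel_for(image_point, shift):
    """Sensor pixel that sees `image_point` after the sensor moves by `shift`.

    `shift` is (rows, columns) in whole pixels; a shift of (0, 1) models
    his "one pixel position left", which brings the pixel one column to
    the *right* over the same point in the image.
    """
    (y, x), (dy, dx) = image_point, shift
    return (y + dy, x + dx)

point = (759, 759)                                    # an R pixel in his example
assert sensor_pixel_for(point, (0, 0)) == (759, 759)  # first shot
assert sensor_pixel_for(point, (0, 1)) == (759, 760)  # after the leftward shift
assert sensor_pixel_for(point, (1, 1)) == (760, 760)  # after the vertical shift
# Output pixel (759, 759) is then built from those three measurements.
```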
#24
Hasselblad's answer to the Foveon
On Tue, 24 Nov 2009 14:41:48 -0500, Alan Browne wrote: John A. wrote: On Tue, 24 Nov 2009 12:24:49 -0500, Alan Browne wrote: David J. Littleboy wrote: "Alan Browne" wrote:

You get the image projected by the lens shifted one pixel each shot. (Or rather the sensor shifted around in the projected light.) So each pixel-sized spot of light is captured in turn by different colored bayer-array sensors.

You're not explaining anything clearly. There is no need to move both.

Sure there is: it's physically impossible to move the filter array independently of the sensor.

The point of the exercise is to expose each sensor to a different color filtered light at least once.

No, it's not. The point is to use a _different_ physical pixel to record each of R, G, and B at each _location_ in the image plane.

You better go read the Hasselblad brochure, for that is precisely what they are doing: moving the sensor wrt the filter array. That it takes nearly 6 seconds to take one photo (4 images) is another matter. (This has to do with the very slow frame rate of the high Mpix Hassy's, about 1.6 s per image).

I've seen it. They're moving the whole array. Otherwise each spot in the image would still be captured with a Bayer array pattern that does not achieve the desired effect. The filters *have to* move for this scheme to work. Moving the sensor but not the filters does nothing to achieve the desired effect of R+G+B capture at every pixel.

They *are* moving the sensor so that *must* include the filters. You don't seem to get that _relative_ to one another, one is moving, otherwise it's damned hard to get a different color over a pixel location.

Listen, you amazingly and pathetically stooopid moron of a troll .... It's not a matter of moving the colors over different photosites, but moving the individual image details over each color of photosite. So that each individual speck of image detail is then recorded by all 4 photosites in each 2x2 RGGB group. Can you start to grasp that in that amazingly ignorant and inexperienced troll's mind of yours? I thought not. But there might be others *almost* as stupid as you who are believing your pathetically stupid bull**** and incomprehensible reasoning.
#25
Hasselblad's answer to the Foveon
David J. Littleboy wrote:
"Alan Browne" wrote: John A. wrote:

I've seen it. They're moving the whole array. Otherwise each spot in the image would still be captured with a Bayer array pattern that does not achieve the desired effect. The filters *have to* move for this scheme to work. Moving the sensor but not the filters does nothing to achieve the desired effect of R+G+B capture at every pixel.

They *are* moving the sensor so that *must* include the filters. You don't seem to get that _relative_ to one another, one is moving, otherwise it's damned hard to get a different color over a pixel location.

It's easy: they just shift the data. Assume pixel 759,759 in the sensor is R. Shift the whole sensor shebang one pixel position left, and now pixel 759,760 in the sensor (B) is measuring the same point in the image. Shift vertically, and pixel 760,760 in the sensor (G) is measuring the same point in the image. Now construct pixel 759,759 in the data from pixels 759,759, 759,760, and 760,760. Pixel 759,759 in the data now has RGB data from the same point in the image plane.

I understand that perfectly well. I don't understand why John A. has both the sensor AND the CFA moving. This is why only one of the sensor or array need move, not both. However, the disadvantage of moving the sensor, rather than the CFA, is that one has to correct for registration whereas if the CFA were moved, then registration corrections are not needed. (Not that it's a big deal considering the very slow shoot rate of the camera - and it's likely this is done in raw processing on the computer and not in camera... hmm, unless there's also a camera monitor view...)

Statements regarding IS are a bit at the coarse level - IS stab in camera moves the entire sensor chip carrier which includes the CFA. What Hassy are doing is occurring within the chip carrier as far as I can tell.
#26
Hasselblad's answer to the Foveon
"Alan Browne" wrote in message ... David J. Littleboy wrote: "Alan Browne" wrote: John A. wrote:

I've seen it. They're moving the whole array. Otherwise each spot in the image would still be captured with a Bayer array pattern that does not achieve the desired effect. The filters *have to* move for this scheme to work. Moving the sensor but not the filters does nothing to achieve the desired effect of R+G+B capture at every pixel.

They *are* moving the sensor so that *must* include the filters. You don't seem to get that _relative_ to one another, one is moving, otherwise it's damned hard to get a different color over a pixel location.

It's easy: they just shift the data. Assume pixel 759,759 in the sensor is R. Shift the whole sensor shebang one pixel position left, and now pixel 759,760 in the sensor (B) is measuring the same point in the image. Shift vertically, and pixel 760,760 in the sensor (G) is measuring the same point in the image. Now construct pixel 759,759 in the data from pixels 759,759, 759,760, and 760,760. Pixel 759,759 in the data now has RGB data from the same point in the image plane.

I understand that perfectly well.

If that were true...

I don't understand why John A. has both the sensor AND the CFA moving.

You wouldn't have said that.

This is why only one of the sensor or array need move, not both.

Your head is completely wedged. Both the sensor and the filter array move together.

-- David J. Littleboy Tokyo, Japan
#27
Hasselblad's answer to the Foveon
David J. Littleboy wrote:
"Alan Browne" wrote in message ... David J. Littleboy wrote: "Alan Browne" wrote in message ... David J. Littleboy wrote: "Alan Browne" wrote: John A. wrote:

I've seen it. They're moving the whole array. Otherwise each spot in the image would still be captured with a Bayer array pattern that does not achieve the desired effect. The filters *have to* move for this scheme to work. Moving the sensor but not the filters does nothing to achieve the desired effect of R+G+B capture at every pixel.

They *are* moving the sensor so that *must* include the filters. You don't seem to get that _relative_ to one another, one is moving, otherwise it's damned hard to get a different color over a pixel location.

It's easy: they just shift the data. Assume pixel 759,759 in the sensor is R. Shift the whole sensor shebang one pixel position left, and now pixel 759,760 in the sensor (B) is measuring the same point in the image. Shift vertically, and pixel 760,760 in the sensor (G) is measuring the same point in the image. Now construct pixel 759,759 in the data from pixels 759,759, 759,760, and 760,760. Pixel 759,759 in the data now has RGB data from the same point in the image plane.

I understand that perfectly well.

If that were true...

I don't understand why John A. has both the sensor AND the CFA moving.

You wouldn't have said that.

This is why only one of the sensor or array need move, not both.

Your head is completely wedged. Both the sensor and the filter array move together.

That makes absolutely no sense. If a pixel sensor is to record first R, then G, then B and then G again in separate shots, they certainly should not move together. See: http://www.hasselblad.co.uk/media/99...tasheet_v3.pdf in particular p. 3 where it says, as plainly as can be: "High precision piezo motors control movements of the sensor in one pixel increments. By combining four shots, each offset by one pixel, the true colours, Red, Green and Blue of each point are obtained" So all that moves is the sensor. The CFA stays fixed. There is even a nice diagram to help you.
#28
Hasselblad's answer to the Foveon
Alan Browne wrote:
See: http://www.hasselblad.co.uk/media/99...tasheet_v3.pdf in particular p. 3 where it says, as plainly as can be: "High precision piezo motors control movements of the sensor in one pixel increments. By combining four shots, each offset by one pixel, the true colours, Red, Green and Blue of each point are obtained" So all that moves is the sensor. The CFA stays fixed. There is even a nice diagram to help you.

Follow the X, which represents an RGB pixel at position 1, 1 in the final image. In the first shot it's on sensor pixel 1, 1 (which is always red), then it's on 1, 2 (always green), then 2, 2 (always blue), then 2, 1 (also always green). Those four values are put together to get the RGB values for the image pixel at position 1, 1. If the sensor was moving under the filter array, the square with the X on it would always be the same colour.
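Wilba's walk of the X can be checked with a few lines. The RGGB layout and the square of offsets are assumptions matching the description; the 1-based pixel (1, 1) becomes (0, 0) here, and each shot's offset (dy, dx) names the CFA cell the X lands on.

```python
# RGGB Bayer pattern, fixed to the sensor; (0, 0) is the "always red" cell.
PATTERN = [['R', 'G'],
           ['G', 'B']]

def filter_at(r, c):
    return PATTERN[r % 2][c % 2]

# Sensor+CFA offsets for the four shots, tracing a one-pixel square so the X
# visits each cell of a 2x2 block in turn.
offsets = [(0, 0), (0, 1), (1, 1), (1, 0)]

# Which filter colour sits over the X (image point (0, 0)) in each shot:
seen = [filter_at(dy, dx) for dy, dx in offsets]
print(seen)  # ['R', 'G', 'B', 'G'] -- red, green, blue, green, as Wilba says
```

If the CFA did not travel with the sensor, `filter_at` of the same fixed cell would be evaluated every shot and all four entries would be identical, which is exactly Wilba's closing point.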
#29
Hasselblad's answer to the Foveon
"Alan Browne" wrote in message ... David J. Littleboy wrote: "Alan Browne" wrote in message ... David J. Littleboy wrote: "Alan Browne" wrote: John A. wrote:

I've seen it. They're moving the whole array. Otherwise each spot in the image would still be captured with a Bayer array pattern that does not achieve the desired effect. The filters *have to* move for this scheme to work. Moving the sensor but not the filters does nothing to achieve the desired effect of R+G+B capture at every pixel.

They *are* moving the sensor so that *must* include the filters. You don't seem to get that _relative_ to one another, one is moving, otherwise it's damned hard to get a different color over a pixel location.

It's easy: they just shift the data. Assume pixel 759,759 in the sensor is R. Shift the whole sensor shebang one pixel position left, and now pixel 759,760 in the sensor (B) is measuring the same point in the image. Shift vertically, and pixel 760,760 in the sensor (G) is measuring the same point in the image. Now construct pixel 759,759 in the data from pixels 759,759, 759,760, and 760,760. Pixel 759,759 in the data now has RGB data from the same point in the image plane.

I understand that perfectly well.

If that were true...

I don't understand why John A. has both the sensor AND the CFA moving.

You wouldn't have said that.

This is why only one of the sensor or array need move, not both.

Your head is completely wedged. Both the sensor and the filter array move together.

That makes absolutely no sense.

Please read what I wrote above. It describes exactly how to do it.

If a pixel sensor is to record first R, then G, then B and then G again in separate shots, they certainly should not move together.

They don't use the same physical pixel on the sensor to create the output pixel, they use three different pixels in the sensor, all placed at the same point in the sensor plane. The idea you seem to be missing is the concept of constructing a single output pixel from measurements taken with three different pixels.

See: http://www.hasselblad.co.uk/media/99...tasheet_v3.pdf in particular p. 3 where it says, as plainly as can be: "High precision piezo motors control movements of the sensor in one pixel increments. By combining four shots, each offset by one pixel, the true colours, Red, Green and Blue of each point are obtained" So all that moves is the sensor. The CFA stays fixed. There is even a nice diagram to help you.

The funniest thing here is that your idea that the CFA stays fixed is completely dizzy. If the CFA doesn't move, each physical pixel position will only see one color, and you wouldn't get a three-color image. This one is a ROFL class mistake.

-- David J. Littleboy Tokyo, Japan
#30
Hasselblad's answer to the Foveon
"Wilba" wrote:

Follow the X, which represents an RGB pixel at position 1, 1 in the final image. In the first shot it's on sensor pixel 1, 1 (which is always red), then it's on 1, 2 (always green), then 2, 2 (always blue), then 2, 1 (also always green). Those four values are put together to get the RGB values for the image pixel at position 1, 1. If the sensor was moving under the filter array, the square with the X on it would always be the same colour.

Exactly!

-- David J. Littleboy Tokyo, Japan