A Photography forum. PhotoBanter.com


Infinite depth of field a good thing?



 
 
  #1  
Old November 3rd 05, 03:19 AM
Rich

Computer scientists create 'light field camera' banishing fuzzy
photos

We've all done it. Lost that priceless Kodak moment by snapping a
photo that was grainy, dark, overexposed or out of focus. While user
ineptitude is often at the root of our blurry snapshots, the limits of
conventional cameras can be to blame as well. But Stanford computer
scientists are now making strides to combat the fuzzy photo by
bringing photographic technology into sharp focus.

Ren Ng, a computer science graduate student in the lab of Pat
Hanrahan, the Canon USA Professor in the School of Engineering, has
developed a "light field camera" capable of producing photographs in
which subjects at every depth appear in finely tuned focus. Adapted
from a conventional camera, the light field camera overcomes low-light
and high-speed conditions that plague photography and foreshadows
potential improvements to current scientific microscopy, security
surveillance and sports and commercial photography.

"Currently, cameras have to make decisions about the focus before
taking the exposure, which engineering-wise can be very difficult,"
said Ng. "With the light field camera, you can take one exposure,
capture a lot more information about the light and make focusing
decisions after you've already taken the shot. It is more flexible."


The light field camera, sometimes referred to as a "plenoptic camera,"
looks and operates exactly like an ordinary handheld digital camera.
The difference lies inside. In a conventional camera, rays of light
are corralled through the camera's main lens and converge on the film
or digital photosensor directly behind it. Each point on the resulting
two-dimensional photo is the sum of all the light rays striking that
location.

The light field camera adds an additional element—a microlens
array—inserted between the main lens and the photosensor. Resembling
the multi-faceted compound eye of an insect, the microlens array is a
square panel composed of nearly 90,000 miniature lenses. Each lenslet
separates back out the converged light rays received from the main
lens before they hit the photosensor and changes the way the light
information is digitally recorded. Custom processing software
manipulates this "expanded light field" and traces where each ray
would have landed if the camera had been focused at many different
depths. The final output is a synthetic image in which the subjects
have been digitally refocused.
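The refocusing step described above can be sketched in a few lines. This is a toy shift-and-add illustration of the idea, not Ng's actual software: it assumes the sensor data has already been decoded into a 4-D array indexed by sub-aperture position (u, v) and pixel position (s, t), and the function name and parameters are hypothetical.

```python
import numpy as np

def refocus(light_field, alpha):
    """Toy synthetic refocus by shift-and-add (a sketch, not Ng's code).

    light_field: 4-D array indexed [u, v, s, t], where (u, v) selects a
    sub-aperture (a position on the main lens) and (s, t) is the pixel
    grid seen through it.
    alpha: relative focal depth; alpha = 1.0 reproduces the plane the
    camera was physically focused on.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its offset
            # from the lens centre, then accumulate. Choosing a different
            # alpha after capture selects a different focal plane.
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Summing the shifted sub-aperture views is what makes rays converge at the chosen depth while rays from other depths spread out, which is exactly the "trace where each ray would have landed" step the article describes.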

Tweaking tradition

Expanding the light field demands that the rules of traditional
photography be tweaked. Ordinarily, a tradeoff exists between aperture
size, which determines the amount of light reaching the film or
photosensor, and depth of field, which determines which objects in an
image will be sharp and which will be fuzzy. As the aperture size
increases, more light passes through the lens and the depth of field
shallows—bringing into focus only the nearest objects and severely
blurring the surrounding subjects.
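The tradeoff is quantitative. The standard thin-lens depth-of-field formulas (textbook optics, not anything from Ng's paper; the function names are illustrative) show how stopping down stretches the zone of acceptable sharpness at the cost of light:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance: focusing here renders everything from half
    this distance to infinity acceptably sharp, for a given circle of
    confusion (0.03 mm is a common full-frame value)."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness for a subject distance."""
    H = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    if subject_mm >= H:
        return near, float('inf')
    far = subject_mm * (H - focal_mm) / (H - subject_mm)
    return near, far

# A 50 mm lens focused at 3 m: stopping down from f/2 to f/16 widens
# the depth of field dramatically, but admits 64x less light.
for N in (2.0, 16.0):
    near, far = dof_limits_mm(50, N, 3000)
    print(f"f/{N:g}: sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")
```

At f/2 the sharp zone is well under half a metre deep; at f/16 it spans several metres. The light field camera's point is to get the f/2 light while synthesizing the f/16 sharpness.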

The light field camera decouples aperture size and depth of field. The
microlens array harnesses the additional light to reveal the depth of
each object in the image and project tiny, sharp subimages onto the
photosensor. The blurry halo typically surrounding the centrally
focused subject is "un-blurred." In this way, the benefits of large
apertures—increased light, shorter exposure time, reduced
graininess—can be exploited without sacrificing the depth of field or
sharpness of the image.

Extending the depth of field while maintaining a wide aperture may
provide significant benefits to several industries, such as security
surveillance. Often mounted in crowded or dimly lit areas, such as
congested airport security lines and backdoor exits, monitoring
cameras notoriously produce grainy, indiscernible images.

"Let's say it's nighttime and the security camera is trying to focus
on something," said Ng. "If someone comes and they are moving around,
the camera will have trouble tracking them. Or if there are two
people, whom does it choose to track? The typical camera will close
down its aperture to try capturing a sharp image of both people, but
the small aperture will produce video that is dark and grainy."

The idea behind the light field camera is not new. With the roots of
its conception dating back nearly a century, several variants of the
light field camera have been devised over the years, each with slight
variations in its optical system. Other models that rely on refocusing
light fields have been slow and bulky and have generated gaps in the
light fields, known as aliasing. Ng's camera—compact and portable with
drastically reduced aliasing—displays greater commercial utility.

Marc Levoy, professor of computer science and electrical engineering;
Mark Horowitz, the Yahoo! Founders Professor in the School of
Engineering; Mathieu Bredif, M.S. '05 in computer science; and Gene
Duval, B.S. '75, M.S. '78 in mechanical engineering and founder of
Duval Design, also contributed to this work.

The research was supported by the Office of Technology Licensing
Birdseed Fund, which provides small grants for the prototype
development of unlicensed technologies. A manuscript detailing the
theoretic performance of the light field camera appeared in
Transactions on Graphics, published by the Association for Computing
Machinery in July, and subsequently was presented at the 2005 ACM
SIGGRAPH (Special Interest Group on Computer Graphics and Interactive
Techniques) conference in August in Los Angeles.

Source: Stanford University




This news is brought to you by PhysOrg.com

  #2  
Old November 3rd 05, 04:20 AM
Paul Furman

Rich wrote:
[quoted article paragraphs on the microlens array and on earlier plenoptic cameras snipped]



Probably the compromises are too strange-looking for presentation
photography. It sounds more useful for security cameras than for
pretty pictures, with less than 1 MP of resolution going to the
focusing (90k microlenses).

I have tried merging focus-bracketed images and I know it's a huge
problem: even the zoom and angle of view get messed up, and they get
messed up in big blurry circles; the wider the aperture, the blurrier
the circles. I've also played with deconvolution software and the
results look awful; even if it does recover more information, it's ugly.
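The per-pixel merge at the heart of that kind of focus stacking can be sketched like this. It is a naive illustration, assuming pre-aligned greyscale frames and a crude Laplacian as the sharpness measure; the function name is made up, and real stacking tools also have to correct the zoom and angle-of-view drift described above.

```python
import numpy as np

def focus_stack(frames):
    """Naive focus-stack merge (a sketch of the idea, not a real tool).

    frames: list of same-size, pre-aligned greyscale images (2-D arrays)
    of one scene focused at different depths. For each pixel, keep the
    value from the frame whose local contrast is strongest there.
    """
    stack = np.stack(frames)                       # shape (n, h, w)
    # 4-neighbour Laplacian magnitude as a per-pixel sharpness measure
    lap = np.abs(
        4 * stack
        - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
        - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)                  # (h, w) winning frame index
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Hard per-pixel selection like this is exactly what produces the ugly seams and blur-circle artifacts complained about above; production tools blend across depth boundaries instead.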

I was hoping they had invented a clear 3D holographic sensor *g*. Then
all you need is a crystal ball on a stick and you've got 360-degree
coverage. Or give it a transmitter, levitation powers and a joystick!
Fill the room with this new quantum holographic phenomenon and you have
every conceivable vantage point captured in perfect focus at all distances.

But seriously, thanks for passing on the story; who knows, it could be
very cool and usable.
  #3  
Old November 3rd 05, 05:06 AM
Paul Furman

Rich wrote:
http://graphics.stanford.edu/papers/...a/lfcamera.wmv

Whoa, that's an interesting video. Worth watching a few times to
understand. They use the microlens array as roughly 90,000 tiny pinhole cameras.


Paul Furman wrote:
[earlier post quoted in full; snipped]

  #4  
Old November 3rd 05, 10:17 AM
Kyle Jones

Paul Furman wrote:
Rich wrote:

http://graphics.stanford.edu/papers/...a/lfcamera.wmv

Whoa, that's an interesting video. Worth watching a few times to
understand. They use the microlens array as roughly 90,000 tiny pinhole cameras.


I loved the bit where the macro shot of the crayons was brought
into full focus and then they faked a camera move, exposing
surfaces that were previously hidden. That was like the image
enhancement scene in Blade Runner. Spooky.

  #5  
Old November 3rd 05, 12:04 PM
cjcampbell

Considering that I spend a lot of my time trying to make my depth of
field more shallow, I probably wouldn't be really interested in it.
Still, it might have some interesting specialized applications.

  #6  
Old November 3rd 05, 08:28 PM
Frank ess

Kyle Jones wrote:
I loved the bit where the macro shot of the crayons was brought
into full focus and then they faked a camera move, exposing
surfaces that were previously hidden. That was like the image
enhancement scene in Blade Runner. Spooky.


I keep telling people: save all the photo data you can acquire. Some
day this kind of manipulation will allow you to render a full-size,
3-D holographic representation of your beloved but departed naked mole
rat, from a single image.

--
Frank ess

 



