Convolution, deconvolution and out of focus images.



 
 
  #11  
Old December 11th 04, 07:25 PM
Dave

Al Denelsbeck wrote:
Dave wrote in :


First, before asking this, I should add I have just come back from the
pub and had quite a few beers, so take that into account !!


I'm an electrical engineer/scientist, with an interest in photography. I
know in theory that what you actually record on an instrument (e.g. an
oscilloscope, but possibly a camera ???) is the convolution of the real
signal and the impulse response of the system.

Measured = Real * Impulse_response

where * = convolution.

But in theory (though far less so in practice), knowing the 'impulse
response' of your system and what you record, it is possible to perform
deconvolution to calculate what the real signal is - despite the fact you
have not recorded it.

This got me thinking about whether you can correct for out-of-focus
images, or imperfect lenses, by knowing their impulse response. This
might (or might not) be what you call the MTF. I think they are related.

So assuming you have a poor lens, and so you take a poor photograph of a
scene: can you measure the properties of that lens (find its impulse
response) and deconvolve the recorded image with the impulse response of
the lens to find out what the real picture is, without the distortions,
so negating the effect of your poor lens?

You probably think I'm either drunk (true), a mad scientist (also true),
but does anyone reading this have a clue what I am on about?




This has actually been done, and if I remember right it's
accomplished through Fourier processing. I've seen the results for high-
magnification things like photomicrography.


This does not surprise me. The convolution of A and B can be obtained by
taking the Fourier transforms of A and B, multiplying them together, and
taking the inverse transform of the product.
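To make that concrete, here is a minimal 1-D numpy sketch (signal and
kernel invented for illustration). The naive division step blows up
wherever the kernel's spectrum is near zero, which is why deconvolution
is so noise-sensitive in practice:

import numpy as np

# A made-up 'real' signal and a Gaussian blur kernel (impulse response).
n = 256
real = np.zeros(n)
real[100], real[140] = 1.0, 0.5                  # two spikes
t = np.arange(n)
psf = np.exp(-0.5 * ((t - n // 2) / 3.0) ** 2)
psf /= psf.sum()
psf = np.roll(psf, -n // 2)                      # centre the kernel at index 0

# Convolution theorem: multiply spectra, then inverse-transform.
measured = np.fft.ifft(np.fft.fft(real) * np.fft.fft(psf)).real

# Naive deconvolution: divide spectra. eps guards near-zero frequencies,
# where any measurement noise would be amplified without bound.
eps = 1e-6
recovered = np.fft.ifft(np.fft.fft(measured) / (np.fft.fft(psf) + eps)).real

print(np.allclose(recovered, real, atol=1e-3))   # True in this noise-free case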

What was needed was a guideline portion of the image - a fuzzy spot
that should have been a tightly focused point, or a streak that should be a
line. Given that, the programs were able to reprocess the image to perform
the deconvolution.


I was hoping you could do better than that, without so much information.
Someone mentioned the fisheye lens software.
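That "guideline portion" is, in effect, a measured point-spread function
(PSF), and once you have one the rest is mechanical. A minimal 2-D sketch
of what such a program does, assuming scikit-image is available (scene and
PSF invented for illustration):

import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

# Invented test scene and a Gaussian point-spread function.
rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20:24, 20:24] = 1.0                        # a small bright square
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

# The camera records the scene convolved with the PSF, plus sensor noise.
blurred = fftconvolve(scene, psf, mode='same')
blurred += 0.001 * rng.standard_normal(blurred.shape)

# Iterative Richardson-Lucy deconvolution using the measured PSF.
restored = richardson_lucy(np.clip(blurred, 0, None), psf, 30)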


Part of the problem is that you get differing amounts of blur because
your original subject is usually three-dimensional. Think depth-of-field -
you may *want* the background out-of-focus. Working only from the resulting
image, the process has no way of knowing what portion of the image should
be considered in the proper focal plane, and what portion is soft simply
because the lens isn't focused there.


Yes, I see that. The original scene has 3D data, the imperfect lens
produces 3D data, but the film plane captures only 2D data. So some
information has been lost.

Variations of this have been used extensively by NASA,


As someone said, there you have point sources.
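With astronomical images the PSF measurement comes almost free: an
isolated, unsaturated star *is* an impulse, so a crop around it gives the
impulse response directly. A hypothetical sketch (function and argument
names invented):

import numpy as np

def psf_from_star(image, cy, cx, half=8):
    # Crop a window around an isolated star at (cy, cx), subtract the
    # local background, and normalise to unit sum - a direct empirical
    # estimate of the point-spread function.
    patch = image[cy - half:cy + half + 1,
                  cx - half:cx + half + 1].astype(float)
    patch -= np.median(patch)                    # rough background removal
    patch = np.clip(patch, 0, None)
    return patch / patch.sum()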


  #12  
Old December 14th 04, 04:52 AM
Al Denelsbeck

Dave wrote in :

[earlier discussion snipped]

Yes, I see that. The original scene has 3D data, the imperfect lens
produces 3D data, but the film plane captures only 2D data. So some
information has been lost.


Actually, I think this is the key.

Given a 3D "model" of the subject and the ideal properties of the
lens, you can account for lens aberrations, chromatic separation, and so
on, by knowing what kind of path the light *should* be taking and what
effect this should have had on the finished image.

But even with test subjects, the best information you can get is
what effect the lens has for subjects at a given focal distance,
i.e., a flat surface parallel to the 'film' plane. The processes I
mentioned were intended for microscopy, where the subject was always a flat
plane, or occasionally used on surveillance cameras with fixed-focus lenses
where the subject matter fell within the large depth-of-field of the
camera.

Most photographic subjects aren't so simple, though. You can
potentially measure the focal distance the lens is set at, physically, but
you can't determine from the resulting image how far away the subject
actually was, much less what portions were closer or further, and by how
much. That requires a lot more information, and without it you can't
calculate how to correct the blur on the film plane.
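How much a point at the "wrong" distance blurs does follow from simple
thin-lens geometry, if you knew the distances. A small sketch of the usual
circle-of-confusion formula (all numbers invented for illustration):

def coc_diameter_mm(f_mm, n_stop, focus_mm, subject_mm):
    # Blur-disc (circle of confusion) diameter on the film plane for a
    # thin lens of focal length f_mm at aperture f/n_stop, focused at
    # focus_mm, imaging a point at subject_mm.
    aperture = f_mm / n_stop                     # entrance pupil diameter
    return (aperture * f_mm * abs(subject_mm - focus_mm)
            / (subject_mm * (focus_mm - f_mm)))

# A 50mm f/2 lens focused at 2m: a point at 3m blurs to about 0.21mm.
print(coc_diameter_mm(50, 2.0, 2000, 3000))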

Someone I know worked on a 3D rendering system, now used by law
enforcement for crime-scene mapping. A tripod-mounted camera coupled with a
laser measuring system not only records the scene visually, but maps out
the precise locations of, and distances to, the subject within the field of
view, allowing the crime scene to be recreated in three dimensions. You
could probably produce a lot more accurate corrections with a system of
that type, since you could then determine that the grey blob at x/y is
blurry because it's ten inches past the focal point, and not because it's a
cloud six miles off.
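With a per-pixel depth map like that, a spatially varying correction at
least becomes conceivable: slice the image into depth layers, deconvolve
each layer with the PSF its distance implies, and recombine. A very rough
sketch (the psf_for_coc helper is hypothetical, and occlusion at layer
boundaries is ignored):

import numpy as np
from skimage.restoration import richardson_lucy

def deblur_with_depth(image, depth_mm, focus_mm, psf_for_coc, n_layers=8):
    # Crude layered deconvolution: group pixels into depth bins,
    # deconvolve the frame with each bin's own PSF, and keep only the
    # pixels belonging to that bin.
    edges = np.quantile(depth_mm, np.linspace(0, 1, n_layers + 1))
    out = np.zeros_like(image, dtype=float)
    for i in range(n_layers):
        mask = (depth_mm >= edges[i]) & (depth_mm <= edges[i + 1])
        if not mask.any():
            continue
        mid = 0.5 * (edges[i] + edges[i + 1])    # representative depth
        psf = psf_for_coc(abs(mid - focus_mm))   # depth-dependent kernel
        restored = richardson_lucy(image, psf, 20)
        out[mask] = restored[mask]
    return out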

Just my two pfennigs...;-)


- Al.

--
To reply, insert dash in address to match domain below
Online photo gallery at www.wading-in.net
  #14  
Old December 15th 04, 11:02 AM
Ken Tough

Al Denelsbeck wrote:

[snip]

But even with test subjects, the best information you can get is
what effect the lens has for subjects at a given focal distance,
i.e., a flat surface parallel to the 'film' plane. The processes I
mentioned were intended for microscopy, where the subject was always a flat
plane, or occasionally used on surveillance cameras with fixed-focus lenses
where the subject matter fell within the large depth-of-field of the
camera.


QinetiQ's method is to introduce a special diffraction grating on
the surface of the lens. Interesting that this has tangential links
to Canon's DO (diffractive optics) lenses. Anyway, here are some links:

http://news.bbc.co.uk/2/hi/technology/3643964.stm
http://oemagazine.com/fromTheMagazine/oct04/smart.html
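If this is the wavefront-coding family of techniques, the trick is that a
cubic phase profile makes the blur nearly the same at every depth, so one
fixed deconvolution can restore the whole field. A toy pupil-plane
simulation of that idea (parameters invented; the PSF is the squared
magnitude of the pupil function's Fourier transform):

import numpy as np

n = 256
u = np.linspace(-1, 1, n)
U, V = np.meshgrid(u, u)
aperture = (U**2 + V**2 <= 1.0)                  # circular pupil

def psf(defocus_waves, cubic=0.0):
    # Pupil phase = defocus (quadratic term) + optional cubic mask.
    phase = defocus_waves * (U**2 + V**2) + cubic * (U**3 + V**3)
    p = aperture * np.exp(2j * np.pi * phase)
    h = np.abs(np.fft.fftshift(np.fft.fft2(p))) ** 2
    return h / h.sum()

# A plain aperture's PSF changes a lot as defocus grows; with a strong
# cubic mask the in-focus and defocused PSFs stay nearly identical, so a
# single deconvolution kernel serves the whole depth range.
for cubic in (0.0, 20.0):
    change = np.abs(psf(0.0, cubic) - psf(3.0, cubic)).sum()
    print(f"cubic={cubic}: PSF change with defocus = {change:.3f}")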


--
Ken Tough
  #16  
Old December 15th 04, 06:08 PM
Paul Bielec


"Dave" wrote in message ...
First, before asking this, I should add I have just come back from the
pub and had quite a few beers, so take that into account !!


I'm an electrical engineer/scientist, with an interest in protography. I
know in theory what you actualy record on an instrument (e.g.
oscilloscope, but possibly a camera ???) is the convolution of the real
signal and the impulse response of the system.

Measured = Real * Impulse_response

where * = convolution.

But in theory (but far less so in practice), knowing the 'impulse
response' of your system and what you record, it is possible to perform
deconvolution to calculate the real signal is - despite the fact you
have not recorded it.

This got me thinking about whether you can correct for out of focus
images, or imperfect lenses by knowing their impulse response. This
might (or might not) be you call the MTF. I think they are related.

So assuming you have a poor lens, and so you take a poor photograph of a
scene. can you measure the properties of that lens (find its impulse
response) and deconvolve the recorded image with the impulse response of
the lens to find out what the real picture is, without the distorsions,
so negating the effect of your poor lens?

You probably think I'm either drunk (true), a mad scientist (also true),
but does anyone reading this have a clue what I am on about?


Ahhh, these engineers... They get a few pints and their nerd brains start
coming up with all those crazy ideas... hehehe

Paul
computer engineer, amateur photographer


 



