Wavefront coding


In optics, wavefront coding is a method for increasing the depth of field of an imaging system so that it produces sharper images over a wider range of object distances. It works by deliberately blurring the image with a specially shaped waveplate (phase mask) so that the image is out of focus by a nearly constant amount regardless of object distance. Digital image processing then removes the blur, at the cost of amplifying noise; in effect, dynamic range is sacrificed to extend the depth of field. The technique can also be used to correct optical aberrations.[1]
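The following is a minimal sketch of the principle, assuming NumPy: a cubic phase profile across a circular pupil produces a point spread function (PSF) that changes little with defocus, so a single Wiener filter can restore images over an extended depth of field. The mask strength alpha and the noise-to-signal ratio are illustrative values, not parameters of any particular commercial system.

import numpy as np

N = 256                                   # pupil-plane grid size
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0              # circular aperture

alpha = 20.0                              # cubic-mask strength (illustrative)
cubic_phase = alpha * (X**3 + Y**3)       # cubic phase profile of the mask

def psf(defocus):
    """Incoherent PSF for a given defocus coefficient (radians of pupil phase)."""
    phase = cubic_phase + defocus * (X**2 + Y**2)   # mask phase plus defocus aberration
    field = pupil * np.exp(1j * phase)
    amplitude = np.fft.fftshift(np.fft.fft2(field))
    h = np.abs(amplitude)**2
    return h / h.sum()

# PSFs for different focus errors are nearly identical, so one deconvolution
# kernel can be applied regardless of the actual object distance.
h_in_focus, h_defocused = psf(0.0), psf(5.0)

def wiener_deblur(blurred, h, nsr=1e-3):
    """Restore an image blurred by PSF h with a Wiener filter.

    nsr is the assumed noise-to-signal ratio; larger values suppress
    noise amplification at the cost of residual blur.
    """
    H = np.fft.fft2(np.fft.ifftshift(h), s=blurred.shape)
    W = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

The noise amplification visible for small nsr values illustrates the trade-off mentioned above: the deconvolution that removes the deliberate blur also boosts noise, which is why dynamic range is sacrificed for depth of field.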

The technique was pioneered by radar engineer Edward Dowski and his thesis adviser Thomas Cathey at the University of Colorado in the United States in the 1990s. After the university showed little interest in the research,[2] they founded a company, CDM-Optics, to commercialize the method. The company was acquired in 2005 by OmniVision Technologies, which has since released wavefront-coding-based mobile camera chips marketed as TrueFocus sensors.

Wavefront coding allows focusing to be varied across one or more selected fields. For example, a system may be designed to focus on a single field, on two fields separated by a significant distance, or on all fields in view, to any desired degree.

Wavefront coding falls under the broad category of computational photography as a technique for extending the depth of field.

See also

References