Wavefront coding
In optics and signal processing, wavefront coding is a method of creating the optical transfer function of a lens with a specially designed phase mask, encoding the incoming wavefront so that the point spread function of visible light, and therefore the captured image, carries manipulatable information such as depth of field and object distance.
Wavefront coding falls under the broad category of computational photography as a technique to enhance the depth of field.
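In the standard linear-systems model of incoherent imaging (a general description, not the formulation of any particular wavefront-coding design), the recorded image is the object convolved with the system's point spread function, and the encoded blur is later removed by deconvolution. A minimal sketch, where the defocus parameter \psi, the noise term n, and the regularization \epsilon are introduced here only for illustration:

```latex
% Recorded image i: object o convolved with the defocus-dependent PSF h_\psi, plus noise n.
i(x, y) = \big( o * h_{\psi} \big)(x, y) + n(x, y)
% Wavefront coding shapes h_\psi so it is nearly the same for all \psi; a single
% Wiener-style inverse filter (regularized by \epsilon) then recovers a sharp estimate.
% Capital letters denote Fourier transforms of the corresponding lowercase quantities.
\hat{O}(f_x, f_y) = \frac{H^{*}(f_x, f_y)}{\,|H(f_x, f_y)|^{2} + \epsilon\,}\; I(f_x, f_y)
```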
Encoding
Linear phase mask
Wavefront coding with linear phase masks works by creating an optical transfer function that encodes distance information.
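In Fourier-optics terms, the phase mask enters through the generalized pupil function, and the optical transfer function follows as the normalized autocorrelation of that pupil. A minimal sketch, assuming a simple tilt profile \theta(u, v) = \alpha (u + v) purely for illustration (published linear-mask designs may differ):

```latex
% Generalized pupil over normalized coordinates (u, v), aperture A, defocus \psi:
P_{\psi}(u, v) = A(u, v)\, \exp\!\big[\, i\,\theta(u, v) + i\,\psi\,(u^{2} + v^{2}) \big],
\qquad \theta(u, v) = \alpha\,(u + v)
% Incoherent OTF as the normalized autocorrelation of the pupil:
H_{\psi}(f_u, f_v) =
  \frac{\displaystyle \iint P_{\psi}(u, v)\, P_{\psi}^{*}(u - f_u,\, v - f_v)\, \mathrm{d}u\, \mathrm{d}v}
       {\displaystyle \iint \big|P_{\psi}(u, v)\big|^{2}\, \mathrm{d}u\, \mathrm{d}v}
```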
Cubic phase mask
Wavefront coding with cubic phase masks blurs the image uniformly using a cubic-shaped waveplate, so that the intermediate image, and hence the optical transfer function, is out of focus by a nearly constant amount over a range of object distances. Digital image processing then removes the blur, amplifying noise in the process; dynamic range is sacrificed to extend the depth of field. The technique can also correct optical aberrations.[1]
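In the cubic-mask formulation of Dowski and Cathey, the added phase is separable and grows as the cube of the normalized pupil coordinates; the strength parameter \alpha (a design choice, shown here without a specific value) controls how far the depth of field is extended:

```latex
% Cubic phase profile over normalized pupil coordinates (u, v):
\theta(u, v) = \alpha\,\big(u^{3} + v^{3}\big)
% For sufficiently large \alpha, the OTF becomes approximately independent of the
% defocus term \psi (u^{2} + v^{2}), so a single deconvolution kernel remains valid
% over an extended depth of field.
```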
The mask was designed using the ambiguity function and the stationary-phase method.
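The defocus-invariance of the cubic-mask blur can be checked numerically with standard FFT-based Fourier optics. The grid size, defocus values, and mask strength below are arbitrary illustration choices rather than parameters from the cited design; the sketch simply compares how much the simulated point spread function changes with defocus for a plain aperture versus a cubic-mask aperture.

```python
import numpy as np

def psf(alpha, psi, n=256):
    """Incoherent PSF of a circular pupil with a cubic phase mask.

    alpha : cubic-mask strength (0 gives a plain aperture)
    psi   : defocus phase, in radians, at the edge of the pupil
    """
    u = np.linspace(-1.0, 1.0, n)
    U, V = np.meshgrid(u, u)
    aperture = (U**2 + V**2) <= 1.0
    # Generalized pupil: cubic mask phase plus quadratic defocus phase.
    phase = alpha * (U**3 + V**3) + psi * (U**2 + V**2)
    pupil = aperture * np.exp(1j * phase)
    # The incoherent PSF is the squared magnitude of the pupil's Fourier transform.
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    h = np.abs(field) ** 2
    return h / h.sum()

def defocus_sensitivity(alpha, psis=(0.0, 5.0, 10.0)):
    """RMS difference between the in-focus PSF and defocused PSFs."""
    h0 = psf(alpha, psis[0])
    return [float(np.sqrt(np.mean((psf(alpha, p) - h0) ** 2))) for p in psis[1:]]

if __name__ == "__main__":
    # A plain aperture's PSF changes strongly with defocus; with a strong cubic
    # mask the PSF, although blurred, stays nearly constant, which is why a
    # single deconvolution filter works across the whole focal range.
    print("plain aperture:", defocus_sensitivity(alpha=0.0))
    print("cubic mask    :", defocus_sensitivity(alpha=60.0))
```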
History
The technique was pioneered by radar engineer Edward Dowski and his thesis adviser Thomas Cathey at the University of Colorado in the United States in the 1990s. After the university showed little interest in the research,[2] the two founded a company, CDM-Optics, to commercialize the method. The company was acquired in 2005 by OmniVision Technologies, which has released wavefront-coding-based mobile camera chips as TrueFocus sensors.
TrueFocus sensors are able to simulate older autofocus technologies that use rangefinders and a narrow depth of field.[3] In principle, the technology allows any combination of focal points per pixel to be synthesized for effect, and it is currently the only such technology whose applications are not limited to extended depth of field (EDoF).
External links
- Applications, with sample pictures
- CDM-Optics
- Wavefront coding finds increasing use (Laser Focus World)
- Wikinvest's OmniVision Technologies article