Image-based modeling and rendering

From Wikipedia, the free encyclopedia

At the boundary between computer graphics and computer vision, image-based modeling and rendering (IBMR) methods rely on a set of images of a scene (image-based) to generate a three-dimensional model (modeling) and/or novel views (rendering) of that scene.

The fundamental concept behind IBMR is the plenoptic illumination function, which is a parametrisation of the light field. The plenoptic function describes the light rays contained in a given volume. It can be represented with seven dimensions: a ray is defined by its position (x, y, z), its orientation (θ, φ), its wavelength λ and its time t:

P(x, y, z, θ, φ, λ, t)
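To make the parametrisation concrete, the sketch below represents one sample of the plenoptic function as a plain record holding the seven parameters together with the observed radiance. The class and field names are illustrative assumptions, not part of any standard library; an IBMR dataset amounts to a finite set of such samples extracted from calibrated photographs.

```python
from typing import NamedTuple

class PlenopticSample(NamedTuple):
    """One ray sample of the seven-dimensional plenoptic function."""
    x: float         # position of the ray
    y: float
    z: float
    theta: float     # orientation (two angles)
    phi: float
    wavelength: float
    t: float         # time
    radiance: float  # observed value P(x, y, z, theta, phi, wavelength, t)

# Each pixel of each calibrated photograph yields one such sample;
# IBMR methods interpolate between samples to synthesise new views.
sample = PlenopticSample(0.0, 0.0, 0.0, 1.2, 0.3, 550e-9, 0.0, 0.42)
```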

IBMR methods try to approximate the plenoptic function in order to render a novel set of two-dimensional images from an existing one. Given the high dimensionality of this function, most methods impose constraints on the scene and the viewing conditions in order to reduce the number of parameters.
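The classic example of such constraints is the light field / Lumigraph reduction: assuming a static scene removes the time dimension, recording a few colour channels instead of a full spectrum removes the wavelength dimension, and restricting rays to the free space around the object (where radiance is constant along a ray) leaves a four-dimensional function indexed by the ray's intersections with two parallel planes. The Python sketch below illustrates this two-plane parametrisation with a nearest-neighbour lookup; the function names, plane positions and array layout are assumptions made for the example, not part of any published system.

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with the planes z = z_uv and z = z_st,
    returning its two-plane coordinates (u, v, s, t)."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t1 = (z_uv - o[2]) / d[2]
    t2 = (z_st - o[2]) / d[2]
    u, v = (o + t1 * d)[:2]
    s, t = (o + t2 * d)[:2]
    return u, v, s, t

def sample_light_field(L, u, v, s, t, extent=1.0):
    """Nearest-neighbour lookup into a discretised light field L of shape
    (U, V, S, T, 3), with all four coordinates assumed to lie in
    [-extent, extent]."""
    U, V, S, T, _ = L.shape
    def idx(x, n):
        return int(np.clip(round((x + extent) / (2 * extent) * (n - 1)), 0, n - 1))
    return L[idx(u, U), idx(v, V), idx(s, S), idx(t, T)]

# Example: colour one pixel of a novel view by casting its ray into the field.
L = np.random.rand(16, 16, 16, 16, 3)   # stand-in for captured, resampled data
rgb = sample_light_field(L, *ray_to_uvst([0.2, 0.1, -1.0], [0.0, 0.0, 1.0]))
```

A full renderer would interpolate (for instance quadrilinearly) between neighbouring samples rather than take the nearest one, but the indexing of rays by four numbers is the essential idea.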

A few well-known IBMR methods and algorithms are the following: view morphing generates a transition between two images; QuickTime VR renders panoramas from image mosaics; the Lumigraph relies on a dense sampling of the scene; and space carving generates a 3D model based on a photo-consistency check.
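As an illustration of the last of these, the core test used by space carving is photo-consistency: a voxel is kept only if the images that see it observe roughly the same colour for it. The sketch below is a deliberately simplified, hypothetical version of such a test; the variance measure and threshold are assumptions made for the example and do not reproduce the published algorithm.

```python
import numpy as np

def photo_consistent(colors, threshold=0.05):
    """Simplified photo-consistency test: keep a voxel if the colours
    observed for it in all images that see it agree closely (low variance).

    `colors` is an (N, 3) array of RGB samples, one per image in which the
    voxel is visible; `threshold` is an assumed tolerance, not a value from
    the original space-carving formulation.
    """
    colors = np.asarray(colors, dtype=float)
    if len(colors) < 2:            # seen in fewer than two images: keep it
        return True
    return float(colors.std(axis=0).mean()) <= threshold

# A voxel reprojecting to nearly the same colour in three images is kept;
# one with widely differing colours would be carved away.
print(photo_consistent([[0.8, 0.2, 0.1], [0.79, 0.21, 0.1], [0.81, 0.2, 0.12]]))  # True
print(photo_consistent([[0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9]]))      # False
```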