
Point distribution model

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Vectraproject (talk | contribs) at 18:35, 15 July 2007 (Created page with 'The Point Distribution Model is a model for representing the mean geometry of a shape and some statistical modes of geometric variation inferred from a training set...'). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The Point Distribution Model is a model for representing the mean geometry of a shape and some statistical modes of geometric variation inferred from a training set of shapes. It was developed by Cootes, Taylor et al. [1] and has become a standard in computer vision for the statistical study of shape [2] and for the segmentation of medical images [1], where shape priors greatly help the interpretation of noisy, low-contrast pixels/voxels.


Point Distribution Models rely on landmarks. A landmark is a point annotating a given locus on the shape, placed consistently across the training set population. For instance, the same landmark will designate the tip of the index finger in a training set of 2D hand outlines. Principal Component Analysis (PCA), for instance, is a relevant tool for studying correlations of movement between groups of landmarks across the training set population. Typically, it might detect that all the landmarks located along the same finger move exactly together across the training set examples.
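The correlation-detection idea can be illustrated on toy data. In the hypothetical example below, two of three landmarks are constructed to move together across five training shapes, and PCA finds that a single mode accounts for essentially all of the variation:

```python
import numpy as np

# Toy training set: 5 shapes, each with 3 landmarks in 2D, flattened to
# vectors of length 6 as (x1, y1, x2, y2, x3, y3).  Landmarks 2 and 3 are
# constructed to move together (fully correlated) across the examples.
rng = np.random.default_rng(0)
t = rng.normal(size=(5, 1))                    # one underlying mode of variation
base = np.array([0.0, 0.0, 1.0, 0.0, 2.0, 0.0])
shapes = base + t * np.array([0.0, 0.0, 0.0, 1.0, 0.0, 1.0])

# PCA: eigen-decomposition of the covariance matrix of the training set.
cov = np.cov(shapes, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)         # ascending order
ratio = eigvals[-1] / eigvals.sum()            # fraction of variance in the top mode
```

Here `ratio` is essentially 1.0: the correlated motion of landmarks 2 and 3 collapses onto a single principal mode.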

The implementation of the procedure is roughly the following:

1. Annotate the training set outlines with enough corresponding landmarks to sufficiently approximate the geometry of the original shapes.
2. Align the clouds of landmarks using the Generalized Procrustes Method (minimization of the overall distance between landmarks of the same label). The key idea is that shape information is not related to affine pose parameters, which must be removed before any shape study.
3. Once the shape outlines are reduced to sequences of n landmarks, the training set can be seen as a 2n- or 3n-dimensional (2D/3D) space in which any shape instance is a single dot. Assuming the scattering is Gaussian in this space, PCA is the most straightforward tool to analyse the training set.
4. PCA computes the normalized eigenvectors and eigenvalues of the covariance matrix of the training set. Each eigenvector describes a principal mode of variation along the set, the corresponding eigenvalue indicating the importance of that mode in the shape space scattering. Because the landmarks are correlated, the total variation of the space is concentrated on the very first eigenvectors, and the eigenvalue spectrum falls off quickly. Otherwise, if little correlation is found, either the training set shows no variation or the landmarks are not properly posed.
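The steps above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the Procrustes alignment is a simplified iterative variant, and 2D shapes are assumed.

```python
import numpy as np

def align_to(shape, ref):
    """Rotate `shape` (n x 2) onto `ref` by orthogonal Procrustes analysis."""
    u, _, vt = np.linalg.svd(ref.T @ shape)
    r = u @ vt
    return shape @ r.T

def generalized_procrustes(shapes, iters=10):
    """Remove translation, scale and rotation from a list of (n x 2) shapes
    (step 2: pose parameters carry no shape information)."""
    shapes = [s - s.mean(axis=0) for s in shapes]          # remove translation
    shapes = [s / np.linalg.norm(s) for s in shapes]       # remove scale
    mean = shapes[0]
    for _ in range(iters):                                 # iterate rotation/mean
        shapes = [align_to(s, mean) for s in shapes]
        mean = np.mean(shapes, axis=0)
        mean /= np.linalg.norm(mean)
    return np.array(shapes)

def pdm_train(shapes):
    """Steps 3-4: flatten each aligned shape to a 2n vector, then run PCA
    on the covariance matrix of the training set."""
    aligned = generalized_procrustes(shapes)
    X = aligned.reshape(len(shapes), -1)                   # one dot per shape
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                      # descending importance
    return mean, eigvecs[:, order], eigvals[order]

# Sanity check: shapes differing only in pose should show no residual variation.
base = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
train = []
for ang, tx in [(0.1, 1.0), (0.5, -2.0), (1.0, 0.3)]:
    c, s = np.cos(ang), np.sin(ang)
    rot = np.array([[c, -s], [s, c]])
    train.append(base @ rot.T + tx)
mean_shape, modes, lam = pdm_train(train)
```

After alignment the three copies of the triangle are identical, so every eigenvalue is (numerically) zero: pose alone carries no shape information.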

An eigenvector, interpreted in Euclidean space, can be seen as a sequence of n Euclidean vectors, each associated with its corresponding landmark, designating a compound move of the whole shape. Global nonlinear variation is usually well handled provided it is kept to a reasonable level; strongly nonlinear deformation, such as a twisting nematode worm, opens the road to Kernel PCA-based methods.
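Concretely, a 2n-dimensional eigenvector can be reshaped into n per-landmark 2D vectors (the mode values below are hypothetical):

```python
import numpy as np

# A hypothetical 2n-dimensional eigenvector for n = 3 landmarks, stored as
# (x1, y1, x2, y2, x3, y3).
p = np.array([0.0, 0.5, 0.0, 0.5, 0.0, 0.5])

# Reshape into n Euclidean vectors: each row is the displacement this mode
# applies to one landmark.  Here all three landmarks move together along +y.
moves = p.reshape(-1, 2)
```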

The key idea is that the eigenvectors can be linearly combined to create an infinity of new shape instances that 'look like' those in the training set, though they do not belong to it. This feature follows directly from the properties of PCA: the eigenvectors are mutually orthogonal and form a basis of the training set cloud in the shape space. They cross at the origin of this space, which represents the mean shape, i.e. the mean of the aligned landmarked shape instances of the training set.
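A new shape instance is then the mean shape plus a weighted sum of eigenvectors. In the sketch below the model (mean, modes, eigenvalues) is a hypothetical stand-in rather than a trained one; the usual practice of bounding each weight by a few standard deviations (the square root of the eigenvalue) keeps generated shapes plausible:

```python
import numpy as np

# Hypothetical trained model: mean shape (2n vector), matrix P whose columns
# are the retained eigenvectors, and the matching eigenvalues lam.
mean = np.array([0.0, 0.0, 1.0, 0.0, 2.0, 0.0])
P = np.eye(6)[:, :2]                 # stand-in orthonormal modes, for illustration
lam = np.array([0.04, 0.01])

def synthesize(b):
    """New shape instance from mode weights b: x = mean + P @ b."""
    return mean + P @ b

# Stay within a few standard deviations (sqrt of eigenvalue) along each mode
# so the result still 'looks like' the training set:
b = 2.0 * np.sqrt(lam)               # 2 sigma along both retained modes
x = synthesize(b)
```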


[1] T.F. Cootes, C.J. Taylor, D.H. Cooper and J. Graham, "Active shape models – their training and application", Computer Vision and Image Understanding, vol. 61, pp. 38–59, 1995.

[2] R.H. Davies, C.J. Twining, P.D. Allen, T.F. Cootes and C.J. Taylor, "Shape discrimination in the Hippocampus using an MDL Model", Information Processing in Medical Imaging (IPMI), 2003.