
Projection (linear algebra)



The transformation P is the orthogonal projection onto the line m.

In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself such that P² = P. It leaves its image unchanged.[1] Though abstract, this definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.

Simple example

Orthogonal projection

For example, the function which maps the point (x, y, z) in three-dimensional space R³ to the point (x, y, 0) is a projection onto the x-y plane. This function is represented by the matrix

$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

The action of this matrix on an arbitrary vector is

$$P \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ y \\ 0 \end{pmatrix},$$

and

$$P^2 \begin{pmatrix} x \\ y \\ z \end{pmatrix} = P \begin{pmatrix} x \\ y \\ 0 \end{pmatrix} = \begin{pmatrix} x \\ y \\ 0 \end{pmatrix},$$

therefore P = P², proving that P is indeed a projection.
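As a quick numerical check (a minimal NumPy sketch; the test vector is an arbitrary choice):

```python
import numpy as np

# The matrix P from the example above: orthogonal projection onto the x-y plane.
P = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])

v = np.array([5, 2, 8])              # an arbitrary vector (x, y, z)
print(P @ v)                         # [5 2 0]: the z-component is dropped
print(np.array_equal(P @ P, P))      # True: P^2 = P, so P is a projection
```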

Oblique projection

An example of a simple non-orthogonal (oblique) projection (for definition see below) is

$$P = \begin{pmatrix} 0 & 0 \\ \alpha & 1 \end{pmatrix}.$$

It is easy to see that

$$P^2 = \begin{pmatrix} 0 & 0 \\ \alpha & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ \alpha & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ \alpha & 1 \end{pmatrix} = P,$$

proving that P is indeed a projection.

The projection P is orthogonal if and only if α = 0.
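The same check works for the oblique example (a minimal NumPy sketch; the value of α is an arbitrary nonzero choice):

```python
import numpy as np

alpha = 2.0                     # any real alpha; nonzero makes P oblique
P = np.array([[0.0,   0.0],
              [alpha, 1.0]])

print(np.allclose(P @ P, P))    # True: idempotent, hence a projection
print(np.allclose(P, P.T))      # False for alpha != 0: not self-adjoint,
                                # so the projection is not orthogonal
```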

Classification

Assume the underlying vector space is finite-dimensional (therefore issues such as continuity of a projection need not be considered).

The transformation T is the projection along k onto m. The range of T is m and the null space is k.

As stated in the introduction, a projection P is a linear transformation that is idempotent, meaning that P² = P.

Let W be an underlying vector space. Suppose the subspaces U and V are the range and null space of P respectively. Then we have these basic properties:

  1. P is the identity operator I on U: ∀x ∈ U: Px = x.
  2. We have a direct sum W = U ⊕ V. This means that every vector x may be decomposed uniquely in the manner x = u + v, where u is in U and v is in V. The decomposition is given by u = Px and v = x − Px.

The range and kernel of a projection are complementary, as are P and Q = I − P. The operator Q is also a projection; the range of P is the kernel of Q, and the kernel of P is the range of Q.
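A short NumPy sketch of this decomposition, reusing the oblique example above (the vector x is an arbitrary choice):

```python
import numpy as np

alpha = 2.0
P = np.array([[0.0,   0.0],
              [alpha, 1.0]])
Q = np.eye(2) - P               # the complementary projection

x = np.array([3.0, 5.0])
u, v = P @ x, Q @ x             # u in the range of P, v in its kernel
print(np.allclose(u + v, x))    # True: x = u + v
print(np.allclose(P @ v, 0))    # True: v lies in the kernel of P
print(np.allclose(Q @ Q, Q))    # True: Q is itself a projection
```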

We say P is a projection along V onto U (kernel/range) and Q is a projection along U onto V.

Decomposition of a vector space into direct sums is not unique in general. Therefore, given a subspace V, in general there are many projections whose range (or kernel) is V.

Only 0 and 1 can be eigenvalues of a projection. The corresponding eigenspaces are the range and kernel of the projection.

If a projection is nontrivial, it has minimal polynomial x² − x = x(x − 1), which factors into distinct linear factors, and thus P is diagonalizable.
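This can be checked numerically (a minimal sketch, again reusing the oblique example):

```python
import numpy as np

alpha = 2.0
P = np.array([[0.0,   0.0],
              [alpha, 1.0]])

# A projection's eigenvalues are 0 and 1 (here P is nontrivial, so both occur).
print(np.sort(np.linalg.eigvals(P)))   # [0. 1.]
```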

Orthogonal projections

If the underlying vector space is endowed with an inner product, orthogonality and its attendant notions (such as the self-adjointness of a linear operator) become available. An orthogonal projection is a projection for which the range U and the null space V are orthogonal subspaces. A projection is orthogonal if and only if it is self-adjoint, which means that, in the context of real vector spaces, the associated matrix is symmetric relative to an orthonormal basis: P = Pᵀ (for the complex case, the matrix is Hermitian: P = (P*)ᵀ). Indeed, if x and y are vectors in the domain of the projection, then Px ∈ U and y − Py ∈ V, and

$$\langle Px,\, y - Py \rangle = x^{\mathrm T}\left(P^{\mathrm T} - P^{\mathrm T} P\right) y,$$

where ⟨·,·⟩ is the positive-definite scalar product, so Px and y − Py are orthogonal for all x and y if and only if P = PᵀP, which is equivalent to (P = Pᵀ and P = P²).[2]

The simplest case is where the projection is an orthogonal projection onto a line. If u is a unit vector on the line, then the projection is given by

$$P_u = u u^{\mathrm T}.$$

This operator leaves u invariant, and it annihilates all vectors orthogonal to u, proving that it is indeed the orthogonal projection onto the line containing u.[3] A simple way to see this is to consider an arbitrary vector x as the sum of a component on the line (i.e. the projected vector we seek) and another perpendicular to it, x = x∥ + x⊥. Applying projection, we get

$$P_u x = u u^{\mathrm T} x_\parallel + u u^{\mathrm T} x_\perp = x_\parallel$$

by the properties of the dot product of parallel and perpendicular vectors.
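A minimal NumPy sketch of projection onto a line (the vectors u, x, and w are arbitrary choices):

```python
import numpy as np

u = np.array([3.0, 4.0]) / 5.0   # a unit vector spanning the line
P = np.outer(u, u)               # P_u = u u^T

x = np.array([2.0, 1.0])
print(P @ x)                     # the component of x along u: [1.2 1.6]
print(np.allclose(P @ u, u))     # True: u is left invariant
w = np.array([-4.0, 3.0])        # w is orthogonal to u
print(np.allclose(P @ w, 0))     # True: vectors orthogonal to u are annihilated
```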

This formula can be generalized to orthogonal projections on a subspace of arbitrary dimension. Let u1, ..., uk be an orthonormal basis of the subspace U, and let A denote the n-by-k matrix whose columns are u1, ..., uk. Then the projection is given by

$$P_A = A A^{\mathrm T}.$$[4]

The matrix Aᵀ is the partial isometry that vanishes on the orthogonal complement of U, and A is the isometry that embeds U into the underlying vector space. The range of P_A is therefore the final space of A. It is also clear that AᵀA is the identity operator on U.
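A minimal NumPy sketch with an orthonormal basis of a plane in R³ (the basis is an arbitrary choice):

```python
import numpy as np

# An orthonormal basis of a 2-dimensional subspace U of R^3.
u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([0.0, 0.0, 1.0])
A = np.column_stack([u1, u2])            # the n-by-k matrix (here 3-by-2)

P = A @ A.T                              # P_A = A A^T
print(np.allclose(P @ P, P))             # True: idempotent
print(np.allclose(A.T @ A, np.eye(2)))   # True: A^T A is the identity on U
```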

The orthonormality condition can also be dropped. If u1, ..., uk is a (not necessarily orthonormal) basis, and A is the matrix with these vectors as columns, then the projection is

$$P_A = A \left(A^{\mathrm T} A\right)^{-1} A^{\mathrm T}.$$[5]

The matrix A still embeds U into the underlying vector space but is no longer an isometry in general. The matrix (AᵀA)⁻¹ is a "normalizing factor" that recovers the norm. For example, the rank-1 operator uuᵀ is not a projection if ‖u‖ ≠ 1. After dividing by uᵀu = ‖u‖², we obtain the projection u(uᵀu)⁻¹uᵀ onto the subspace spanned by u.
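A minimal NumPy sketch with a non-orthonormal basis (the basis is an arbitrary choice):

```python
import numpy as np

# A basis of a 2-dimensional subspace of R^3 that is not orthonormal.
A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])

P = A @ np.linalg.inv(A.T @ A) @ A.T   # P_A = A (A^T A)^{-1} A^T
print(np.allclose(P @ P, P))           # True: idempotent
print(np.allclose(P, P.T))             # True: symmetric, hence orthogonal
print(np.allclose(P @ A, A))           # True: P acts as the identity on range(A)
```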

All these formulas also hold for complex inner product spaces, provided that the conjugate transpose is used instead of the transpose.

Oblique projections

The term oblique projections is sometimes used to refer to non-orthogonal projections. These projections are also used to represent spatial figures in two-dimensional drawings (see oblique projection), though not as frequently as orthogonal projections.

Oblique projections are defined by their range and null space. A formula for the matrix representing the projection with a given range and null space can be found as follows. Let the vectors u1, ..., uk form a basis for the range of the projection, and assemble these vectors in the n-by-k matrix A. The range and the null space are complementary spaces, so the null space has dimension n − k. It follows that the orthogonal complement of the null space has dimension k. Let v1, ..., vk form a basis for the orthogonal complement of the null space of the projection, and assemble these vectors in the matrix B. Then the projection is defined by

$$P = A \left(B^{\mathrm T} A\right)^{-1} B^{\mathrm T}.$$
This expression generalizes the formula for orthogonal projections given above.[6]
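A minimal NumPy sketch of this formula in R² (the bases A and B are arbitrary choices):

```python
import numpy as np

A = np.array([[1.0],
              [0.0]])     # basis of the range: the x-axis in R^2
B = np.array([[1.0],
              [1.0]])     # basis of the orthogonal complement of the null space

P = A @ np.linalg.inv(B.T @ A) @ B.T   # P = A (B^T A)^{-1} B^T
print(P)                               # [[1. 1.], [0. 0.]]
print(np.allclose(P @ P, P))           # True: idempotent
print(np.allclose(P, P.T))             # False: this projection is oblique
```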

Projections on normed vector spaces

When the underlying vector space X is a (not necessarily finite-dimensional) normed vector space, analytic questions, irrelevant in the finite-dimensional case, need to be considered. Assume now X is a Banach space.

Many of the algebraic notions discussed above survive the passage to this context. A given direct sum decomposition of X into complementary subspaces still specifies a projection, and vice versa. If X is the direct sum X = U ⊕ V, then the operator defined by P(u + v) = u is still a projection with range U and kernel V. It is also clear that P² = P. Conversely, if P is a projection on X, i.e. P² = P, then it is easily verified that (I − P)² = (I − P). In other words, I − P is also a projection. The relation I = P + (I − P) implies that X is the direct sum Ran(P) ⊕ Ran(I − P).

However, in contrast to the finite-dimensional case, projections need not be continuous in general. If a subspace U of X is not closed in the norm topology, then the projection onto U is not continuous. In other words, the range of a continuous projection P must be a closed subspace. Furthermore, the kernel of a continuous projection (in fact, of a continuous linear operator in general) is closed. Thus a continuous projection P gives a decomposition of X into two complementary closed subspaces: X = Ran(P) ⊕ Ker(P) = Ran(P) ⊕ Ran(I − P).

The converse also holds, with an additional assumption. Suppose U is a closed subspace of X. If there exists a closed subspace V such that X = U ⊕ V, then the projection P with range U and kernel V is continuous. This follows from the closed graph theorem. Suppose xₙ → x and Pxₙ → y. One needs to show that Px = y. Since U is closed and {Pxₙ} ⊂ U, y lies in U, i.e. Py = y. Also, xₙ − Pxₙ = (I − P)xₙ → x − y. Because V is closed and {(I − P)xₙ} ⊂ V, we have x − y ∈ V, i.e. P(x − y) = Px − Py = Px − y = 0, which proves the claim.

The above argument makes use of the assumption that both U and V are closed. In general, given a closed subspace U, there need not exist a complementary closed subspace V, although for Hilbert spaces this can always be done by taking the orthogonal complement. For Banach spaces, a one-dimensional subspace always has a closed complementary subspace. This is an immediate consequence of the Hahn–Banach theorem. Let U be the linear span of u. By Hahn–Banach, there exists a bounded linear functional φ such that φ(u) = 1. The operator P(x) = φ(x)u satisfies P² = P, i.e. it is a projection. Boundedness of φ implies continuity of P, and therefore Ker(P) = Ran(I − P) is a closed complementary subspace of U.
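A finite-dimensional NumPy sketch of this rank-one construction (the vector u and the functional, represented here by the vector f, are arbitrary choices satisfying φ(u) = 1):

```python
import numpy as np

# A functional phi with phi(u) = 1 yields the projection P(x) = phi(x) u.
u = np.array([2.0, 1.0])
f = np.array([0.5, 0.0])        # phi(x) = f . x, chosen so that phi(u) = 1

P = np.outer(u, f)              # matrix of the map x |-> phi(x) u
print(np.allclose(P @ P, P))    # True: phi(u) = 1 makes P idempotent
print(np.allclose(P @ u, u))    # True: the range of P is the span of u
```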

Applications and further considerations

Projections (orthogonal and otherwise) play a major role in algorithms for certain linear algebra problems, such as QR decomposition (see Householder transformation and Gram–Schmidt process), singular value decomposition, reduction to Hessenberg form (the first step in many eigenvalue algorithms), and linear regression.

As stated above, projections are a special case of idempotents. Analytically, orthogonal projections are non-commutative generalizations of characteristic functions. Idempotents are used in classifying, for instance, semisimple algebras, while measure theory begins with considering characteristic functions of measurable sets. Therefore, as one can imagine, projections are very often encountered in the context of operator algebras. In particular, a von Neumann algebra is generated by its complete lattice of projections.

Generalizations

More generally, given a map T : V → W between normed vector spaces, one can analogously ask for this map to be an isometry on the orthogonal complement of the kernel: that (ker T)⊥ → W be an isometry; in particular it must be onto. The case of an orthogonal projection is when W is a subspace of V. In Riemannian geometry, this is used in the definition of a Riemannian submersion.


Notes

  1. ^ Meyer, pp. 386–387
  2. ^ Meyer, p. 433
  3. ^ Meyer, p. 431
  4. ^ Meyer, equation (5.13.4)
  5. ^ Meyer, equation (5.13.3)
  6. ^ Meyer, equation (7.10.39)

References

  • N. Dunford and J.T. Schwartz, Linear Operators, Part I: General Theory, Interscience, 1958.
  • Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics, 2000. ISBN 978-0-89871-454-8.