Lie's theorem

In mathematics, specifically the theory of Lie algebras, Lie's theorem states that,[1] over an algebraically closed field of characteristic zero, if π : 𝔤 → 𝔤𝔩(V) is a finite-dimensional representation of a solvable Lie algebra 𝔤, then π(𝔤) stabilizes a flag V = V_0 ⊃ V_1 ⊃ ⋯ ⊃ V_n = 0 with codim V_i = i; "stabilizes" means π(X)V_i ⊂ V_i for each X ∈ 𝔤 and each i.

Put another way, the theorem says there is a basis for V such that all linear transformations in π(𝔤) are represented by upper triangular matrices. This is a generalization of the result of Frobenius that commuting matrices are simultaneously upper triangularizable, as commuting matrices span an abelian Lie algebra, which is a fortiori solvable.
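For a concrete illustration, consider the smallest non-abelian solvable Lie algebra, spanned by elements X and Y with bracket [X, Y] = Y; one convenient realization on V = k^2 is

\[
X = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
Y = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
XY - YX = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = Y .
\]

Both matrices are already upper triangular in the standard basis e_1, e_2, and the flag V ⊃ ke_1 ⊃ 0 is stabilized, as the theorem predicts.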

A consequence of Lie's theorem is that any finite-dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra.
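One way to see this, assuming Engel's theorem (and, over a field that is not algebraically closed, first extending scalars to an algebraic closure): applying Lie's theorem to the adjoint representation of 𝔤 produces a basis in which

\[
\operatorname{ad}(\mathfrak{g}) \subset \{\text{upper triangular matrices}\}
\quad\Longrightarrow\quad
\operatorname{ad}([\mathfrak{g}, \mathfrak{g}]) \subset \{\text{strictly upper triangular matrices}\},
\]

so ad X is nilpotent for every X in the derived algebra, and Engel's theorem then gives that [𝔤, 𝔤] is nilpotent.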

Counter-example

For algebraically closed fields of characteristic p > 0, Lie's theorem holds provided the dimension of the representation is less than p, but it can fail for representations of dimension p. An example is given by the 3-dimensional nilpotent Lie algebra spanned by 1, x, and d/dx acting on the p-dimensional vector space k[x]/(x^p); this representation admits no common eigenvector, since the commutator [d/dx, x] = 1 acts as the identity, whereas it would have to annihilate any common eigenvector. Taking the semidirect product of this 3-dimensional Lie algebra with the p-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent.
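For concreteness, the following minimal computational sketch (taking p = 3, representing the three operators as matrices over GF(3) in the basis 1, x, x^2; the names X, D, is_eigenvector are ad hoc) checks that [d/dx, x] = 1 and that the operators have no common eigenvector:

    import itertools
    import numpy as np

    p = 3
    # Matrices in the basis 1, x, x^2 of k[x]/(x^3), k = GF(3); all arithmetic is done mod 3.
    I = np.eye(p, dtype=int)                 # the operator 1 (identity)
    X = np.zeros((p, p), dtype=int)          # multiplication by x
    for i in range(p - 1):
        X[i + 1, i] = 1
    D = np.zeros((p, p), dtype=int)          # d/dx
    for i in range(1, p):
        D[i - 1, i] = i % p

    # [d/dx, x] = 1: a common eigenvector v would have to satisfy [D, X]v = 0, yet [D, X]v = v.
    assert ((D @ X - X @ D) % p == I).all()

    def is_eigenvector(A, v):
        """Return True if v is an eigenvector of A over GF(p)."""
        Av = (A @ v) % p
        return any((Av == (c * v) % p).all() for c in range(p))

    # Brute force over the 26 nonzero vectors of GF(3)^3: none is an eigenvector of both X and D.
    common = [v for v in itertools.product(range(p), repeat=p)
              if any(v) and is_eigenvector(X, np.array(v)) and is_eigenvector(D, np.array(v))]
    assert common == []
    print("No common eigenvector over GF(3): Lie's theorem fails in dimension p.")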

Proof

The proof is by induction on the dimension of 𝔤 and consists of several steps. The base case, 𝔤 = 0, is trivial (any nonzero vector is a common eigenvector), so we assume the dimension of 𝔤 is positive; we also assume V is nonzero. For simplicity, we write X⋅v = π(X)(v).

Step 1: Observe that the theorem is equivalent to the statement:[2]

  • There exists a nonzero vector in V that is an eigenvector for each linear transformation in π(𝔤).
Indeed, the theorem says in particular that a nonzero vector spanning V_{n−1}, the last nonzero subspace in the flag, is a common eigenvector for all the linear transformations in π(𝔤). Conversely, if such a common eigenvector exists, the flag is obtained by induction on the dimension of V, as sketched next.
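In brief (writing v for the common eigenvector): set V_{n−1} = kv; the representation π descends to a representation of 𝔤 on the quotient V/V_{n−1}, and by induction on dim V there is a stabilized flag

\[
V/V_{n-1} = \bar{V}_0 \supset \bar{V}_1 \supset \cdots \supset \bar{V}_{n-1} = 0 .
\]

The preimages of these subspaces under the projection V → V/V_{n−1}, together with V_n = 0, then form a flag of V stabilized by π(𝔤).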

Step 2: Find an ideal 𝔥 of codimension one in 𝔤.

Let [𝔤, 𝔤] be the derived algebra. Since 𝔤 is solvable and nonzero, [𝔤, 𝔤] ≠ 𝔤, and so the quotient 𝔤/[𝔤, 𝔤] is a nonzero abelian Lie algebra, which certainly contains an ideal of codimension one; by the ideal correspondence, it corresponds to an ideal 𝔥 of codimension one in 𝔤.
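In the two-dimensional example above, for instance, the construction is as simple as possible:

\[
[\mathfrak{g}, \mathfrak{g}] = kY, \qquad \mathfrak{g}/[\mathfrak{g}, \mathfrak{g}] \cong k, \qquad \mathfrak{h} = kY ,
\]

so there the derived algebra itself already serves as the codimension-one ideal 𝔥.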

Step 3: There exists some linear functional λ in 𝔥* such that the weight space

V_λ = { v ∈ V : X⋅v = λ(X)v for all X ∈ 𝔥 }

is nonzero.

This follows from the inductive hypothesis applied to the solvable Lie algebra 𝔥, which has smaller dimension than 𝔤: it gives a vector in V that is a common eigenvector for 𝔥, and it is easy to check that the eigenvalues determine a linear functional λ on 𝔥 (see the check below).
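Explicitly, if w is such a common eigenvector, with X⋅w = λ(X)w for every X in 𝔥, then for scalars a, b and elements X, Y of 𝔥

\[
(aX + bY)\cdot w \;=\; a(X\cdot w) + b(Y\cdot w) \;=\; \bigl(a\lambda(X) + b\lambda(Y)\bigr) w ,
\]

so X ↦ λ(X) is linear on 𝔥, and w lies in V_λ.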

Step 4: V_λ is a 𝔤-submodule of V.

(Note this step proves a general fact and does not involve solvability.)
Let X be in 𝔤 and v be a nonzero vector in V_λ, and set recursively v_0 = v, v_{i+1} = X⋅v_i. Let U be the (finite-dimensional) span of the v_i; it is invariant under X. For any Y in 𝔥, since 𝔥 is an ideal,

Y⋅v_i ≡ λ(Y)v_i  (modulo the span of v_0, …, v_{i−1}).

This says that Y (that is, π(Y)) restricted to U is represented by an upper triangular matrix whose diagonal entries all equal λ(Y). Hence, dim(U)λ([X, Y]) = tr_U([X, Y]) = 0, the trace vanishing because both π(X) and π(Y) map U into itself. Since dim(U) is invertible in a field of characteristic zero, λ([X, Y]) = 0. Consequently, for v in V_λ and Y in 𝔥,

Y⋅(X⋅v) = X⋅(Y⋅v) + [Y, X]⋅v = λ(Y)(X⋅v) + λ([Y, X])v = λ(Y)(X⋅v),

so X⋅v is an eigenvector for each Y in 𝔥 with eigenvalue λ(Y); that is, X⋅v lies in V_λ.
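It is instructive to see where this step breaks down in the characteristic-p counter-example above: taking 𝔥 spanned by 1 and x, X = d/dx, λ(1) = 1 and λ(x) = 0, the subspace U turns out to be all of k[x]/(x^p), so that

\[
\dim(U)\,\lambda([d/dx,\, x]) \;=\; p \cdot \lambda(1) \;=\; p \;=\; 0 \quad\text{in characteristic } p,
\]

and the conclusion λ([d/dx, x]) = 0, which is false there, cannot be drawn, precisely because dim(U) fails to be invertible.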

Step 5: Finish up the proof by finding a common eigenvector.

Write 𝔤 = 𝔥 + L, where L is a one-dimensional vector subspace. By Step 4, V_λ is stable under every element of L; since the base field k is algebraically closed, there exists an eigenvector in V_λ for some (thus every) nonzero element of L. Since that vector is also an eigenvector for each element of 𝔥, the proof is complete, as the computation below makes explicit.
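Indeed, writing Z for a nonzero element of L, w for an eigenvector of Z in V_λ with Z⋅w = μw, and W = X + cZ for an arbitrary element of 𝔤 with X in 𝔥 and c in k,

\[
W\cdot w \;=\; X\cdot w + c\,(Z\cdot w) \;=\; \bigl(\lambda(X) + c\mu\bigr) w ,
\]

so w is a common eigenvector for all of 𝔤, which by Step 1 completes the induction.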

References

  1. ^ Serre, Theorem 3
  2. ^ Serre, Theorem 3″
  • Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
  • Humphreys, James E. (1972), Introduction to Lie Algebras and Representation Theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90053-7.
  • Serre, Jean-Pierre (2001). Complex Semisimple Lie Algebras. Berlin: Springer-Verlag. ISBN 3-540-67827-1.