
Schur Functions


There is another basis, called the Schur functions, with strong connections to combinatorics and representation theory. We give several equivalent definitions.

Tableaux


We first give a purely combinatorial definition:

$s_\lambda = \sum_{T \in \mathrm{SSYT}(\lambda)} x^T,$

where the sum is over all semistandard Young tableaux $T$ of shape $\lambda$ and $x^T = x_1^{\#\{1\text{'s in } T\}} x_2^{\#\{2\text{'s in } T\}} \cdots$ is the monomial recording its content. An example in 3 variables is

$s_{(2,1)}(x_1,x_2,x_3) = x_1^2x_2 + x_1^2x_3 + x_1x_2^2 + x_2^2x_3 + x_1x_3^2 + x_2x_3^2 + 2\,x_1x_2x_3,$

corresponding to the tableaux

11|2,  11|3,  12|2,  12|3,  13|2,  13|3,  22|3,  23|3.

Here 12|3 is just a one-line notation for the more familiar two-row array with 1, 2 in the first row and 3 in the second.

Note in the example that the Schur function is indeed symmetric, a fact which is not immediate from the definition. In general, the symmetry of Schur functions requires a (not hard) proof, and once we know it, we can write

$s_\lambda = \sum_{\mu} K_{\lambda\mu}\, m_\mu$

with the following natural interpretation: $K_{\lambda\mu}$ is the number of semistandard Young tableaux (SSYT for short) with shape $\lambda$ and content $\mu$. A careful analysis reveals that $K_{\lambda\mu}$ is nonzero if and only if $\mu \le \lambda$ in dominance order. Even more, $K_{\lambda\lambda} = 1$, the only tableau being the one with all 1's in the first row, all 2's in the second, and so on. So in fact we have

$s_\lambda = m_\lambda + \sum_{\mu < \lambda} K_{\lambda\mu}\, m_\mu,$

which means that the transition matrix is lower unitriangular with respect to any linear order extending dominance order. This shows that the Schur functions are an integral basis for the ring of symmetric functions.

Remark: this is stronger than triangularity, since being smaller in the linear order does not guarantee being smaller in dominance order. The matrix is triangular, but several additional positions are forced to be zero as well.
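
To make the combinatorial definition and the Kostka numbers concrete, here is a minimal brute-force sketch in plain Python; the helper name `ssyt` and the small shape $(2,1)$ with entries in $\{1,2,3\}$ are illustrative choices, not part of the text above.

```python
from itertools import product
from collections import Counter

def ssyt(shape, max_entry):
    """Yield all fillings of the given shape with entries in 1..max_entry whose
    rows weakly increase and whose columns strictly increase (i.e. all SSYT)."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    for filling in product(range(1, max_entry + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            yield T

# Tabulate Kostka numbers K_{(2,1),mu} as "number of SSYT of shape (2,1) and content mu"
kostka = Counter()
for T in ssyt((2, 1), 3):
    content = tuple(Counter(T.values()).get(i, 0) for i in range(1, 4))
    kostka[content] += 1

print(kostka[(1, 1, 1)])      # 2, matching the coefficient of m_{(1,1,1)} in s_{(2,1)}
print(sum(kostka.values()))   # 8 = s_{(2,1)}(1,1,1), the total number of tableaux
```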

One advantage of the combinatorial definition is that it also makes sense for Schur functions indexed by skew shapes.

Ratio of determinants


There is also a purely algebraic definition of $s_\lambda$:

For any integer vector $\alpha = (\alpha_1, \ldots, \alpha_n)$ define

$a_\alpha = \det\!\left(x_i^{\alpha_j}\right)_{1 \le i,j \le n} = \sum_{w \in S_n} \varepsilon(w)\, w(x^\alpha).$

Set $\delta = (n-1, n-2, \ldots, 1, 0)$ and note that $a_\delta$ is the Vandermonde determinant $\prod_{i<j}(x_i - x_j)$. We have the alternative expression for the Schur functions

$s_\lambda(x_1, \ldots, x_n) = \frac{a_{\lambda + \delta}}{a_\delta}.$

Remark: this makes sense even if $\lambda$ is not a partition. In fact it is useful to define $s_\alpha = a_{\alpha+\delta}/a_\delta$ this way for any integer vector $\alpha$. If nonzero, it will be equal (up to sign) to $s_\lambda$ for a unique partition $\lambda$.

One common application is the following: to find the coefficient of $s_\lambda$ in a symmetric function $f$, we can just compute the coefficient of the monomial $x^{\lambda+\delta}$ in $f \cdot a_\delta$. For instance, the hook length formula can be proven this way (see Wikipedia).
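
As a small illustration of the bialternant formula and of the coefficient-extraction trick just described, here is a sketch using sympy; the helper `alternant` and the choices $\lambda = (2,1)$, $n = 3$, $f = h_1^3$ are assumptions made only for this example.

```python
import sympy as sp

x1, x2, x3 = xs = sp.symbols('x1 x2 x3')

def alternant(alpha):
    """a_alpha = det(x_i^{alpha_j}) for an integer vector alpha of length 3."""
    return sp.Matrix(3, 3, lambda i, j: xs[i] ** alpha[j]).det()

delta = (2, 1, 0)
lam = (2, 1, 0)
lam_plus_delta = tuple(l + d for l, d in zip(lam, delta))

# s_{(2,1)}(x1,x2,x3) as a ratio of alternants
s21 = sp.cancel(alternant(lam_plus_delta) / alternant(delta))
print(sp.expand(s21))
# x1**2*x2 + x1**2*x3 + x1*x2**2 + 2*x1*x2*x3 + x1*x3**2 + x2**2*x3 + x2*x3**2

# Coefficient of s_{(2,1)} in h_1^3 = (x1+x2+x3)^3: read off the coefficient of
# x^{lambda+delta} in h_1^3 * a_delta.  It equals f^{(2,1)} = 2.
f = (x1 + x2 + x3) ** 3
poly = sp.expand(f * alternant(delta))
print(sp.Poly(poly, x1, x2, x3).coeff_monomial(x1**4 * x2**2))   # 2
```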

More determinants


From the first definition we were able to expand Schur functions in terms of the monomial basis. What about the other bases? The answer for the homogeneous (and elementary) basis is given by the Jacobi-Trudi determinant

$s_\lambda = \det\!\left(h_{\lambda_i - i + j}\right)_{1 \le i,j \le \ell(\lambda)}.$

Applying the involution $\omega$ we get

$s_{\lambda'} = \det\!\left(e_{\lambda_i - i + j}\right)_{1 \le i,j \le \ell(\lambda)}.$
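
A quick sympy check of the Jacobi-Trudi determinant for $\lambda = (2,1)$ in three variables; the brute-force helper `h` is just for this sketch.

```python
import sympy as sp
from itertools import combinations_with_replacement

xs = sp.symbols('x1 x2 x3')

def h(k):
    """Complete homogeneous symmetric polynomial h_k in three variables."""
    if k < 0:
        return sp.Integer(0)
    return sum(sp.Mul(*mono) for mono in combinations_with_replacement(xs, k))

lam = (2, 1)
n = len(lam)
# det(h_{lambda_i - i + j}) with 1-based indices i, j
jt = sp.Matrix(n, n, lambda i, j: h(lam[i] - (i + 1) + (j + 1))).det()
print(sp.expand(jt))
# x1**2*x2 + x1**2*x3 + x1*x2**2 + 2*x1*x2*x3 + x1*x3**2 + x2**2*x3 + x2*x3**2
```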

Relation with the representation theory of the symmetric group


The remarkable connection between Schur functions and the characters of the irreducible representations of the symmetric group is given by the following magical formula

$s_\lambda = \frac{1}{n!} \sum_{w \in S_n} \chi^\lambda(w)\, p_{\rho(w)} = \sum_{\mu \vdash n} z_\mu^{-1}\, \chi^\lambda_\mu\, p_\mu,$

where $\chi^\lambda(w)$ is the value of the character of the irreducible representation $S^\lambda$ at the element $w$, $p_{\rho(w)}$ is the power sum symmetric function of the partition $\rho(w)$ given by the cycle decomposition of $w$, and $z_\mu$ is the size of the centralizer of a permutation of cycle type $\mu$. For example, if $w = (12)(345) \in S_5$, then $p_{\rho(w)} = p_{(3,2)} = p_3\, p_2$.

Since in cycle notation the identity is $(1)(2)\cdots(n)$, we have $p_{\rho(e)} = p_1^n$ and $z_{(1^n)} = n!$. Then the formula says

$\langle s_\lambda, p_1^n \rangle = \chi^\lambda(e).$

Considering the expansion of Schur functions in terms of monomial symmetric functions using the Kostka numbers,

$s_\lambda = \sum_{\mu} K_{\lambda\mu}\, m_\mu,$

the inner product with $p_1^n = h_1^n$ is $K_{\lambda,(1^n)}$, because $\langle m_\mu, h_{(1^n)} \rangle = \delta_{\mu,(1^n)}$. Note that $K_{\lambda,(1^n)}$ is equal to $f^\lambda$, the number of Standard Young Tableaux of shape $\lambda$. Hence

$\langle s_\lambda, p_1^n \rangle = f^\lambda$

and

$\chi^\lambda(e) = \dim S^\lambda = f^\lambda,$

which will be useful later.

The magical formula is equivalent to

$\chi^\lambda_\mu = \langle s_\lambda, p_\mu \rangle.$

This gives a conceptual proof of the identity $\omega s_\lambda = s_{\lambda'}$, by comparing the coefficients of the $p_\mu$ (recall that $\omega p_\mu = \varepsilon_\mu p_\mu$, where $\varepsilon_\mu$ is the sign of a permutation of cycle type $\mu$) and taking into account that $\chi^{\lambda'}(w) = \operatorname{sgn}(w)\, \chi^\lambda(w)$, because tensoring with the sign representation gives the irreducible representation for the conjugate partition.
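
As a sanity check of the character expansion above, here is a sketch for $\lambda = (2,1)$ and $n = 3$ using the well-known character table of $S_3$ (hard-coded below) and the class sizes 1, 3, 2 of the cycle types $(1,1,1)$, $(2,1)$, $(3)$; everything else in the snippet is illustrative.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
p1 = x1 + x2 + x3
p2 = x1**2 + x2**2 + x3**2
p3 = x1**3 + x2**3 + x3**3

# cycle type mu : (z_mu, chi^{(2,1)}(mu), p_mu), where chi^{(2,1)} is the
# character of the 2-dimensional irreducible representation of S_3
data = {
    (1, 1, 1): (6,  2, p1**3),
    (2, 1):    (2,  0, p2 * p1),
    (3,):      (3, -1, p3),
}

s21 = sum(sp.Rational(1, z) * chi * p_mu for z, chi, p_mu in data.values())
print(sp.expand(s21))
# x1**2*x2 + x1**2*x3 + x1*x2**2 + 2*x1*x2*x3 + x1*x3**2 + x2**2*x3 + x2*x3**2
```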

Frobenius Series


The expansion in terms of the power sum symmetric functions suggests we define the following map: the Frobenius characteristic map $F$ takes class functions on the symmetric group to symmetric functions by sending the indicator function of the conjugacy class of cycle type $\mu$ to $p_\mu / z_\mu$ and extending by linearity, so that $F(\chi) = \sum_{\mu \vdash n} z_\mu^{-1}\, \chi(\mu)\, p_\mu$. An important fact is that $F$ is an isometry with respect to the inner products.

Remark: $F$ does not commute with multiplication.

The formula

$h_\mu = \sum_{\lambda} K_{\lambda\mu}\, s_\lambda$

is equivalent to

$\operatorname{char} M^\mu = \sum_{\lambda} K_{\lambda\mu}\, \chi^\lambda,$

and this also comes from representation theory. There is a module decomposition $M^\mu \cong \bigoplus_{\lambda} (S^\lambda)^{\oplus K_{\lambda\mu}}$ of the permutation module $M^\mu$, in which the multiplicities are given by the Kostka numbers $K_{\lambda\mu}$, and the above equation is simply the Frobenius translation of that decomposition. (See Sagan.)

Let $A = \bigoplus_{i \ge 0} A_i$ be a graded $S_n$-module, with each $A_i$ finite dimensional, and define the Frobenius series of $A$ as

$\operatorname{Frob}_A(q) = \sum_{i \ge 0} q^i\, F(\operatorname{char} A_i),$

where $F(\operatorname{char} A_i)$ is the image of the character of $A_i$ under the Frobenius map.

Similarly, if $A = \bigoplus_{i,j \ge 0} A_{i,j}$ is a doubly graded module, with each $A_{i,j}$ finite dimensional, define the Frobenius series of $A$ as

$\operatorname{Frob}_A(q,t) = \sum_{i,j \ge 0} q^i t^j\, F(\operatorname{char} A_{i,j}).$

It is clear that the Frobenius series expands positively in terms of Schur functions, because the coefficients of the Schur functions come from the multiplicities (obviously positive) of the irreducibles in each graded piece. The proof of the positivity conjecture for Macdonald polynomials consists of finding a module whose Frobenius series is the desired symmetric function.

Characters of the General Linear group


Another way of thinking about Schur functions is as the characters of the irreducible polynomial representations of the general linear group. Let's go through a simple example.

The first nontrivial representation of $GL_2$ that comes to mind is $\mathbb{C}^2$ itself, which comes from the natural action on $\mathbb{C}^2$; call this representation $V$. The character is just the trace, which, as a function of the eigenvalues, is equal to $x_1 + x_2$. What happens if we tensor $V \otimes V$? The character gets squared and we have the identity

$(x_1 + x_2)^2 = s_{(2)}(x_1, x_2) + s_{(1,1)}(x_1, x_2).$

Since decomposing the characters gives the information needed to decompose the representation, the above identity says that $V \otimes V$ decomposes into two irreducibles, one corresponding to the partition $(2)$ and the other to the partition $(1,1)$. These are the symmetric and antisymmetric parts, respectively.

More generally, consider $GL_N$ and its defining representation $V$ given by the natural action on $\mathbb{C}^N$. If we want to decompose $V^{\otimes n}$ into irreducibles we will need to write $(x_1 + \cdots + x_N)^n$ in terms of Schur functions. The remarkable formula, at the crossroads between symmetric functions, representation theory and combinatorics, is

$(x_1 + \cdots + x_N)^n = \sum_{\lambda \vdash n} f^\lambda\, s_\lambda(x_1, \ldots, x_N),$

which is the expansion of $p_1^n$ in terms of Schur functions using the coefficients given by the inner product, because $\langle p_1^n, s_\lambda \rangle = f^\lambda$ and $f^\lambda = \chi^\lambda(e) = \dim S^\lambda$. The above equality can also be proven by checking the coefficients of each monomial on both sides and using the Robinson–Schensted–Knuth correspondence. For a more detailed analysis of the decomposition of $V^{\otimes n}$ see Schur–Weyl duality.
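
Here is a small sympy check of the displayed expansion for $n = 3$ in three variables, using $f^{(3)} = 1$, $f^{(2,1)} = 2$, $f^{(1,1,1)} = 1$; the helper `schur` (via the bialternant formula) is an illustrative choice for this sketch.

```python
import sympy as sp

xs = sp.symbols('x1 x2 x3')

def schur(lam):
    """Schur polynomial in three variables via the bialternant formula."""
    lam = tuple(lam) + (0,) * (3 - len(lam))
    delta = (2, 1, 0)
    num = sp.Matrix(3, 3, lambda i, j: xs[i] ** (lam[j] + delta[j])).det()
    den = sp.Matrix(3, 3, lambda i, j: xs[i] ** delta[j]).det()
    return sp.cancel(num / den)

lhs = sp.expand(sum(xs) ** 3)
rhs = sp.expand(1 * schur((3,)) + 2 * schur((2, 1)) + 1 * schur((1, 1, 1)))
print(sp.simplify(lhs - rhs) == 0)   # True
```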

In this context we can express the Schur functions by using Weyl's character formula

$s_\lambda(x_1, \ldots, x_n) = \sum_{w \in S_n} w\!\left( x^\lambda \prod_{1 \le i < j \le n} \frac{x_i}{x_i - x_j} \right),$

which is equivalent to the ratio of determinants.
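
And here is a direct check of this symmetrized form for $\lambda = (2,1)$ in three variables; the explicit sum over $S_3$ below is only a sketch for illustration.

```python
import sympy as sp
from itertools import permutations

xs = sp.symbols('x1 x2 x3')
lam = (2, 1, 0)

total = 0
for w in permutations(range(3)):
    y = [xs[i] for i in w]            # the variables in the order given by w
    term = sp.Integer(1)
    for i in range(3):
        term *= y[i] ** lam[i]        # w(x^lambda)
    for i in range(3):
        for j in range(i + 1, 3):
            term *= y[i] / (y[i] - y[j])
    total += term

print(sp.expand(sp.cancel(total)))
# x1**2*x2 + x1**2*x3 + x1*x2**2 + 2*x1*x2*x3 + x1*x3**2 + x2**2*x3 + x2*x3**2
```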

Raising operators


For a set of variables $X = x_1 + x_2 + \cdots$ define

$\Omega[X] = \prod_i \frac{1}{1 - x_i} = \sum_{k \ge 0} h_k[X].$

Now define the Bernstein operators on symmetric functions as

$B_m f[X] = f\!\left[X - \tfrac{1}{u}\right]\, \Omega[uX]\,\Big|_{u^m}.$

In words this means: we have some variables $x$, and we add one more variable $u$, so $\Omega[uX] = \prod_i \frac{1}{1 - u x_i}$; the plethystic substitution $f[X - \tfrac1u]$ is a subtle business, but it can be thought of as a (signed, inverted) way of adjoining the new variable to the set. So $f[X - \tfrac1u]\,\Omega[uX]$ is an expression containing the variables $x$ and $u$, and $B_m f$ takes the coefficient of $u^m$ in this big mess.

The following theorem is fundamental, not so much by itself, but because a lot of the theory is developed by deforming this operator, and while the proofs are different in the other cases, there is a common pattern to them, so let's describe this one to get a flavor of what's going on.

Theorem. The Bernstein operators add a part to the index of the Schur function; that is, for $m \ge \lambda_1$ we have

$B_m\, s_\lambda = s_{(m, \lambda_1, \lambda_2, \ldots)}.$

Sketch of proof

We need the following ingredients. First, by partial fraction expansion,

$\Omega[uX] = \prod_{i=1}^n \frac{1}{1 - u x_i} = \sum_{i=1}^n \frac{1}{1 - u x_i} \prod_{j \ne i} \frac{x_i}{x_i - x_j}.$

The second ingredient is Weyl's character formula

$s_\lambda(x_1, \ldots, x_n) = \sum_{w \in S_n} w\!\left( x^\lambda \prod_{i<j} \frac{x_i}{x_i - x_j} \right).$

The third: by expanding, it is easy to check that for a polynomial $f$

$f\!\left[X - \tfrac1u\right] \frac{1}{1 - u x_i}\,\bigg|_{u^m} = x_i^m\, f[X - x_i].$

Now we're ready to stir the ingredients. First let's combine the first one with the definition, so for any $f$

$B_m f = \sum_{i=1}^n f\!\left[X - \tfrac1u\right] \frac{1}{1 - u x_i} \prod_{j \ne i} \frac{x_i}{x_i - x_j}\,\bigg|_{u^m},$

because taking the coefficient of $u^m$ is a linear operator, so it distributes over the sum. Furthermore, considering the remaining expression as a polynomial in $\tfrac1u$ we can use the third ingredient, with $x_i$ playing the role of the removed variable, to get

$B_m f = \sum_{i=1}^n x_i^m\, f[X - x_i] \prod_{j \ne i} \frac{x_i}{x_i - x_j}.$

By the virtues of plethystic substitution, $f[X - x_i]$ is the same thing as evaluating $f$ in the other variables, i.e., taking $x_i$ out. Now specialize $f$ to the Schur function $s_\lambda$ and consider the second ingredient, Weyl's formula; the theorem then follows easily by induction, because the factor

$x_i^m \prod_{j \ne i} \frac{x_i}{x_i - x_j}$

is the same as the factor appearing in the terms of Weyl's formula for $s_{(m,\lambda)}$ in which the permutation sends $i$ to the first position.

This gives our final description of the Schur functions:

$s_\lambda = B_{\lambda_1} B_{\lambda_2} \cdots B_{\lambda_\ell}\, 1.$
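
As a concrete check of the theorem and of this last description, here is a sketch in plain Python. It assumes the classical expansion $B_m = \sum_{i \ge 0} (-1)^i h_{m+i}\, e_i^\perp$ (a standard equivalent form of the Bernstein operator) and works entirely with products of $h$'s, so that $B_2 B_1 1 = h_2 h_1 - h_3$ can be compared with the Jacobi-Trudi determinant for $s_{(2,1)}$; all helper names are illustrative.

```python
from itertools import combinations
from collections import defaultdict

def e_perp(i, mu):
    """e_i^perp applied to h_mu = h_{mu_1}...h_{mu_k}, as a dict {h-index tuple: coeff}.
    Uses e_1^perp h_a = h_{a-1}, e_j^perp h_a = 0 for j >= 2, and the coproduct rule
    for skewing a product, so we just choose i of the factors to lower by 1."""
    out = defaultdict(int)
    for S in combinations(range(len(mu)), i):
        parts = [mu[j] - (1 if j in S else 0) for j in range(len(mu))]
        key = tuple(sorted((a for a in parts if a > 0), reverse=True))
        out[key] += 1
    return dict(out)

def bernstein(m, f):
    """Apply B_m = sum_i (-1)^i h_{m+i} e_i^perp to f, a dict {h-index tuple: coeff}."""
    out = defaultdict(int)
    for mu, c in f.items():
        for i in range(len(mu) + 1):
            for nu, c2 in e_perp(i, mu).items():
                key = tuple(sorted(nu + (m + i,), reverse=True))
                out[key] += (-1) ** i * c * c2
    return {k: v for k, v in out.items() if v}

one = {(): 1}
print(bernstein(1, one))                 # {(1,): 1}: B_1 1 = h_1 = s_(1)
print(bernstein(2, bernstein(1, one)))   # {(2, 1): 1, (3,): -1}: h_2*h_1 - h_3 = s_(2,1)
```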

Final Remarks


The Schur functions are a basis for the symmetric functions with the following properties:

1. Lower unitriangularity with respect to the monomial basis: $s_\lambda = m_\lambda + \sum_{\mu < \lambda} K_{\lambda\mu}\, m_\mu$.

2. Orthogonality: $\langle s_\lambda, s_\mu \rangle = \delta_{\lambda\mu}$.

The Kostka numbers have two interpretations, a combinatorial and an algebraic one. These properties are important to keep in mind while generalizing with one or two parameters.

Hall Littlewood Polynomials


We know the Schur basis, and many more, for the ring of symmetric functions over the field $\mathbb{Q}$. The next step of generalization is to consider the field $\mathbb{Q}(t)$ and to twist the inner product a little bit. In contrast with Macdonald polynomials, we can give a closed expression for the Hall-Littlewood polynomials.

Straight definition and first properties


First we need the following $t$-analogues:

$v_m(t) = \prod_{j=1}^{m} \frac{1 - t^j}{1 - t}, \qquad v_\lambda(t) = \prod_{i \ge 0} v_{m_i(\lambda)}(t),$

where $m_i(\lambda)$ is the number of parts of $\lambda$ equal to $i$ (padding $\lambda$ with zeros to length $n$). Then the "Hall-Littlewood polynomial" in $n$ variables is given by the following formula:

$P_\lambda(x_1, \ldots, x_n; t) = \frac{1}{v_\lambda(t)} \sum_{w \in S_n} w\!\left( x^\lambda \prod_{i < j} \frac{x_i - t x_j}{x_i - x_j} \right),$

where $x^\lambda = x_1^{\lambda_1} \cdots x_n^{\lambda_n}$ and $v_\lambda(t)$ is exactly the normalization needed so that the coefficient of $m_\lambda$ in $P_\lambda$ is $1$.

Note that when $t = 0$ the denominator $v_\lambda(t)$ goes away (it equals $1$) and we get precisely Weyl's character formula for the Schur functions, so $P_\lambda(x; 0) = s_\lambda(x)$;

at $t = 1$ the products inside cancel and we get the usual monomial symmetric functions, $P_\lambda(x; 1) = m_\lambda(x)$.

The Hall-Littlewood polynomials form a basis, so we can expand the Schur functions in this new basis. The "Kostka-Foulkes polynomials" are defined by

$s_\lambda = \sum_{\mu} K_{\lambda\mu}(t)\, P_\mu(x; t).$

They don't deserve the name "polynomials" yet, because so far we only know that they are rational functions in $t$. But we will see why they are actual polynomials.
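
To make the definition and the specializations concrete, here is a sketch in sympy for three variables. It implements the symmetrization formula above, checks $P_\lambda(x;0) = s_\lambda$ and $P_\lambda(x;1) = m_\lambda$, and verifies the known small example $s_{(2,1)} = P_{(2,1)} + (t+t^2)\,P_{(1,1,1)}$, i.e. $K_{(2,1),(2,1)}(t) = 1$ and $K_{(2,1),(1,1,1)}(t) = t + t^2$; the helper names and the choice of example are assumptions of this sketch.

```python
import sympy as sp
from itertools import permutations
from collections import Counter

t = sp.symbols('t')
xs = sp.symbols('x1 x2 x3')
n = 3

def v(lam_padded):
    """v_lambda(t): product over part multiplicities m of prod_{j=1}^m (1-t^j)/(1-t)."""
    out = sp.Integer(1)
    for m in Counter(lam_padded).values():
        for j in range(1, m + 1):
            out *= (1 - t**j) / (1 - t)
    return out

def hall_littlewood(lam):
    """P_lambda(x1,x2,x3; t) via the symmetrization formula."""
    lam = tuple(lam) + (0,) * (n - len(lam))
    total = 0
    for w in permutations(range(n)):
        y = [xs[i] for i in w]
        term = sp.Integer(1)
        for i in range(n):
            term *= y[i] ** lam[i]
        for i in range(n):
            for j in range(i + 1, n):
                term *= (y[i] - t * y[j]) / (y[i] - y[j])
        total += term
    return sp.cancel(total / v(lam))

P21, P111 = hall_littlewood((2, 1)), hall_littlewood((1, 1, 1))
s21 = sp.expand(                      # monomial expansion of s_{(2,1)} in three variables
    xs[0]**2*xs[1] + xs[0]**2*xs[2] + xs[0]*xs[1]**2 + xs[1]**2*xs[2]
    + xs[0]*xs[2]**2 + xs[1]*xs[2]**2 + 2*xs[0]*xs[1]*xs[2])

print(sp.simplify(P21.subs(t, 0) - s21) == 0)              # True:  t = 0 gives the Schur function
print(sp.expand(P21.subs(t, 1)))                           # the monomial symmetric function m_{(2,1)}
print(sp.simplify(s21 - (P21 + (t + t**2) * P111)) == 0)   # True:  K_{(2,1),(1,1,1)}(t) = t + t^2
```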

Definition with raising operators


Define the Jing operators $H_m$ as $t$-deformations of the Bernstein operators (this is part of a general recipe to $t$-deform such operators; see Zabrocki), together with their modified version $Q_m$; the two families are related by conjugating with the operator given by a plethystic substitution in the parameter $t$, and with its inverse.

Analogously to the Schur functions, we now define the transformed Hall-Littlewood polynomials as

$H_\mu = H_{\mu_1} H_{\mu_2} \cdots H_{\mu_\ell}\, 1,$

and if we set $t = 0$ we recover the Schur functions.

Recall that the Bernstein operators add one part to a partition. These new operators behave in a more complicated way, but in a similar spirit.

Theorem for Jing operators. If $\mu = (\mu_1, \ldots, \mu_k)$ is a partition and $m \ge \mu_1$, then $H_m H_\mu$ is a linear combination of Schur functions whose coefficients are polynomials in $t$.

Moreover, $s_{(m, \mu)}$ appears with coefficient 1.

The last part is saying something similar to the previous situation: we will get the Schur function with an additional part $m$ added, but the theorem says that we also get polynomial combinations of other Schur functions.

By repeated use of the theorem we can conclude that

$H_\mu = \sum_{\lambda} c_{\lambda\mu}(t)\, s_\lambda,$

where the $c_{\lambda\mu}(t)$ are polynomials with $c_{\mu\mu}(t) = 1$.

That means that we have upper unitriangularity with respect to the Schur basis.

We have analogous statements for $Q$ (although with a different proof!).

Theorem for modified Jing operators. If $m \ge \mu_1$ then $Q_m Q_\mu$ is again a linear combination of Schur functions whose coefficients are polynomials in $t$.

Moreover, $s_{(m, \mu)}$ appears with a coefficient that depends only on $t$ and on the multiplicity of $m$ as a part of $\mu$.

Again, by repeated use of the theorem we can conclude that

$Q_\mu = \sum_{\lambda} d_{\lambda\mu}(t)\, s_\lambda,$

where the $d_{\lambda\mu}(t)$ are polynomials, now with diagonal entries $d_{\mu\mu}(t)$ that are not necessarily equal to $1$.

That means that we have lower triangularity (but with messier diagonal elements) with respect to the Schur basis.

The operator relating the two families is self-adjoint for the inner product, i.e. we have $\langle H_\lambda, Q_\mu \rangle = \langle Q_\lambda, H_\mu \rangle$.

By the opposite triangularities of $H$ and $Q$, the pairing $\langle H_\lambda, Q_\mu \rangle$ vanishes unless $\lambda$ and $\mu$ are comparable in one direction of the dominance order; passing the operator to the other side gives the opposite comparison, and hence $\langle H_\lambda, Q_\mu \rangle = 0$ whenever $\lambda \neq \mu$. This implies the following claim.

The transformed Hall-Littlewood polynomials are orthogonal with respect to the inner product, and their self inner products can be computed explicitly.

Now everything fits smoothly


Really. First, from the definitions one can get the following formula by induction.

The relation with the original Hall-Littlewood polynomials is

Note that the denominator is precisely one of the self inner products mentioned above. Classically, a slightly different inner product is defined:

$\langle p_\lambda, p_\mu \rangle_t = \delta_{\lambda\mu}\, z_\lambda \prod_{i} \frac{1}{1 - t^{\lambda_i}}.$

In this product, the bases $P_\mu$ and $Q_\mu$ are orthogonal and furthermore, they are dual! So recall that we defined the Kostka-Foulkes polynomials as

$s_\lambda = \sum_{\mu} K_{\lambda\mu}(t)\, P_\mu.$

By taking inner products and using the duality just mentioned we arrive at

$K_{\lambda\mu}(t) = \langle s_\lambda, Q_\mu \rangle_t.$

But that last inner product is equal to one of our previously defined polynomial coefficients, showing that the Kostka-Foulkes polynomials are in fact polynomials.

Positivity of Kostka-Foulkes polynomials


It turns out that they are not just integer polynomials, but their coefficients are positive. It may not sound very interesting to show that a quantity is positive, but usually the question is implicitly asking for an interpretation. There are many different approaches here, all far from trivial. Let's review them.

Deep representation theory


The work of Hotta, Lusztig, and Springer showed deep connections with representation theory. I cannot say more than a few words (that I don't even understand): they relate the Kostka-Foulkes polynomials, and a variation of them called cocharge Kostka-Foulkes polynomials, to some hardcore math where the keywords are unipotent characters, local intersection homology, Springer fibers and perverse sheaves.

For now, the important thing is that they found a ring, the cohomology ring of the Springer fiber, whose Frobenius series is given by the cocharge transformed Hall-Littlewood polynomials, implying that these expand positively in the Schur basis.

Combinatorics of Tableaux


Lascoux and Schützenberger proved the following simple and elegant formula, which gives a concrete meaning to each coefficient:

$K_{\lambda\mu}(t) = \sum_{T} t^{\operatorname{charge}(T)},$

where the sum is over all SSYT $T$ of shape $\lambda$ and content $\mu$. The new ingredient is the charge statistic, which is easier to define in terms of the cocharge, an invariant characterized by:

1. Cocharge is invariant under jeu-de-taquin slides

2. Suppose the shape of $T$ is disconnected, say $T = T_1 \sqcup T_2$ with $T_1$ above and to the left of $T_2$, and no entry of one of the two pieces is equal to 1. Then the tableau obtained by swapping the two pieces has cocharge differing from that of $T$ by exactly one.

3. If $T$ is a single row, then $\operatorname{cocharge}(T) = 0$.

And then $\operatorname{charge}(T) = n(\mu) - \operatorname{cocharge}(T)$, where $\mu$ is the content of $T$ and $n(\mu) = \sum_i (i-1)\mu_i$. The existence of such an invariant requires proof. There is a process to compute the cocharge called catabolism.
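
Here is a minimal sketch of the charge statistic in the special case of standard content $(1,1,\ldots,1)$, computed on reading words (rows read bottom to top, left to right), using the convention that the letter $k+1$ increases the index by one exactly when it sits to the right of $k$; the two tableaux of shape $(2,1)$ then recover $K_{(2,1),(1,1,1)}(t) = t + t^2$. The helper names and the restriction to standard content are assumptions of this sketch, not part of the general definition above.

```python
import sympy as sp

def charge_standard(word):
    """Charge of a word using each of 1..n exactly once: the letter 1 gets index 0,
    and k+1 gets index(k) + 1 if it appears to the right of k, else index(k)."""
    pos = {v: i for i, v in enumerate(word)}
    index = {1: 0}
    for k in range(1, len(word)):
        index[k + 1] = index[k] + (1 if pos[k + 1] > pos[k] else 0)
    return sum(index.values())

t = sp.symbols('t')
# the two SSYT of shape (2,1) and content (1,1,1), written as lists of rows
tableaux = [[[1, 2], [3]], [[1, 3], [2]]]
reading_words = [[v for row in reversed(T) for v in row] for T in tableaux]   # [3,1,2] and [2,1,3]
K = sum(t ** charge_standard(w) for w in reading_words)
print(K)   # t**2 + t
```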

Alternative description using tableaux


Kirillov and Reshetikhin gave the following formula

where the sum is over all admissible configurations.

While nasty, this expression clearly has positive coefficients. The formula originates from a technique in mathematical physics known as the Bethe ansatz, which is used to produce highest weight vectors for some tensor products. The theorem relates the Kostka-Foulkes polynomials to the enumeration of these highest weight vectors, graded by a quantum number. For more info, stay tuned; probably Anne has something to say about it in class.

Commutative Algebra


This may be the least technical approach. Garsia and Procesi simplified the first proof by giving a down-to-earth interpretation of the cohomology ring of the Springer fiber. Now the action happens inside the polynomial ring $\mathbb{C}[x_1, \ldots, x_n]$, and the ring in question is

$R_\mu = \mathbb{C}[x_1, \ldots, x_n] / I_\mu$

for an ideal $I_\mu$ with a relatively explicit description. They manage to give generators, and finally they prove with more elementary methods that the Frobenius series is the cocharge invariant

$\operatorname{Frob}_{R_\mu}(t) = \sum_{\lambda} \tilde{K}_{\lambda\mu}(t)\, s_\lambda,$

where $\tilde{K}_{\lambda\mu}(t)$ is the cocharge Kostka-Foulkes polynomial.