There is another basis, called the Schur functions, with strong connections to combinatorics and representation theory. We give several equivalent definitions.

We first give a purely combinatorial definition:

$$s_\lambda = \sum_{T \in \mathrm{SSYT}(\lambda)} x^T,$$

where the sum is over all semistandard Young tableaux of shape $\lambda$ and $x^T = x_1^{a_1}x_2^{a_2}\cdots$ is the monomial representing the content of $T$ (here $a_i$ is the number of entries of $T$ equal to $i$). An example in 3 variables is

$$s_{21}(x_1,x_2,x_3) = x_1^2x_2 + x_1^2x_3 + x_1x_2^2 + x_2^2x_3 + x_1x_3^2 + x_2x_3^2 + 2x_1x_2x_3,$$

corresponding to the eight tableaux of shape $(2,1)$ with entries in $\{1,2,3\}$. Here $s_{21}$ is just one-line notation for the more familiar $s_{(2,1)}$.

Note in the example that the Schur function is indeed symmetric, a fact which is not immediate from its definition. In general the symmetry of Schur functions requires a proof (not a hard one), and once we know it, we can write

$$s_\lambda = \sum_\mu K_{\lambda\mu}\, m_\mu$$

with the following natural interpretation: $K_{\lambda\mu}$ is the number of semistandard Young tableaux (SSYT for short) with shape $\lambda$ and content $\mu$. A careful analysis reveals that $K_{\lambda\mu}$ is nonzero if and only if $\lambda \geq \mu$ in dominance order. Even more, $K_{\lambda\lambda} = 1$, with the only tableau being the one with all 1's in the first row, 2's in the second, and so on. So in fact we have

$$s_\lambda = m_\lambda + \sum_{\mu < \lambda} K_{\lambda\mu}\, m_\mu.$$

This means that the transition matrix is lower unitriangular with respect to any linear order extending the dominance order, which shows that the Schur functions are an integral basis for the ring of symmetric functions.

Remark: this is stronger than triangularity, since being smaller in the linear order doesn't guarantee being smaller in dominance order. The matrix is triangular, but several other entries are forced to be zero too.
One advantage of the combinatorial definition is that it also makes sense for Schur functions indexed by skew shapes.
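To make the combinatorial definition concrete, here is a minimal Python sketch (the function names are my own) that brute-forces the SSYT of a straight shape and assembles the Schur polynomial as a dictionary from contents to Kostka numbers:

```python
from collections import Counter
from itertools import product

def ssyt(shape, n):
    """Brute-force the semistandard Young tableaux of a straight shape
    with entries in {1, ..., n}: rows weakly increase left to right,
    columns strictly increase top to bottom."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    for values in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, values))
        rows_ok = all(T[r, c] <= T[r, c + 1] for r, c in cells if (r, c + 1) in T)
        cols_ok = all(T[r, c] < T[r + 1, c] for r, c in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            yield T

def schur(shape, n):
    """s_shape(x_1, ..., x_n) as {content: coefficient}; the coefficient
    of the content mu is the Kostka number K_{shape, mu}."""
    coeffs = Counter()
    for T in ssyt(shape, n):
        content = tuple(sum(1 for v in T.values() if v == i) for i in range(1, n + 1))
        coeffs[content] += 1
    return dict(coeffs)

s21 = schur((2, 1), 3)
print(sum(s21.values()))   # 8 tableaux, matching the example above
print(s21[(1, 1, 1)])      # K_{(2,1),(1,1,1)} = 2
```

The symmetry of the output (the coefficient of $(2,1,0)$ equals that of $(0,1,2)$, etc.) can be checked directly on small shapes, even though, as remarked above, it is not obvious from the definition.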
Ratio of determinants
There is also a purely algebraic definition of $s_\lambda$. For any integer vector $l = (l_1,\dots,l_k)$ define

$$a_{(l_1,l_2,\cdots,l_k)}(x_1,x_2,\dots,x_k)=\det\begin{bmatrix}x_1^{l_1}&x_2^{l_1}&\dots&x_k^{l_1}\\x_1^{l_2}&x_2^{l_2}&\dots&x_k^{l_2}\\\vdots&\vdots&\ddots&\vdots\\x_1^{l_k}&x_2^{l_k}&\dots&x_k^{l_k}\end{bmatrix}.$$

Set $\delta = (k-1,k-2,\dots,1,0)$ and note that $a_\delta$ is the Vandermonde determinant $\prod_{i<j}(x_i - x_j)$. We have the alternative expression for the Schur functions

$$s_\lambda(x_1,\dots,x_k) = \frac{a_{\lambda+\delta}(x_1,\dots,x_k)}{a_\delta(x_1,\dots,x_k)}.$$

Remark: this makes sense even if $\lambda$ is not a partition. In fact it is useful to define $a_l$ this way for any integer vector $l$. If nonzero, $a_l$ will be equal (up to sign) to $a_{\mu+\delta}$ for a unique partition $\mu$.
One common application is the following: to find the coefficient of $s_\lambda$ in a symmetric function $f$, we can just compute the coefficient of $x^{\lambda+\delta}$ in $f \cdot a_\delta$. For instance, the hook length formula can be proven this way (see Wikipedia).
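As a sanity check on the ratio of determinants, the following Python sketch (names are my own) evaluates $a_l$ as an explicit determinant over exact rationals and computes $s_\lambda = a_{\lambda+\delta}/a_\delta$ at a sample point:

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Determinant via the Leibniz formula; fine for small matrices."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = Fraction(sign)
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

def a(l, x):
    """a_l(x) = det[x_j^{l_i}] for an integer vector l."""
    return det([[Fraction(xj) ** li for xj in x] for li in l])

def schur_bialternant(lam, x):
    """s_lambda(x) = a_{lambda+delta} / a_delta with delta = (k-1, ..., 1, 0)."""
    k = len(x)
    lam = list(lam) + [0] * (k - len(lam))        # pad with zero parts
    delta = list(range(k - 1, -1, -1))
    return a([l + d for l, d in zip(lam, delta)], x) / a(delta, x)

print(schur_bialternant((2, 1), (1, 2, 3)))  # 60, matching the tableau sum
```

The evaluation point must have distinct coordinates so that the Vandermonde denominator does not vanish.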
From the first definition we were able to expand Schur functions in terms of the monomial basis. What about the other bases? The answer for the homogeneous (and elementary) basis is given by the Jacobi–Trudi determinant

$$s_\lambda = \det\left[h_{\lambda_i - i + j}\right]_{1 \leq i,j \leq \ell(\lambda)}.$$

Applying the involution $\omega$ we get

$$s_{\lambda'} = \det\left[e_{\lambda_i - i + j}\right]_{1 \leq i,j \leq \ell(\lambda)}.$$
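The Jacobi–Trudi identity is easy to test numerically; a hypothetical Python sketch, evaluating the complete homogeneous polynomials $h_k$ and the determinant $\det[h_{\lambda_i-i+j}]$ at a sample point:

```python
from fractions import Fraction
from itertools import combinations_with_replacement, permutations

def h(k, x):
    """Complete homogeneous symmetric polynomial h_k(x): the sum of all
    degree-k monomials (h_k = 0 for k < 0, h_0 = 1)."""
    if k < 0:
        return Fraction(0)
    total = Fraction(0)
    for combo in combinations_with_replacement(x, k):
        term = Fraction(1)
        for v in combo:
            term *= v
        total += term
    return total

def det(M):
    """Leibniz-formula determinant, fine for small matrices."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = Fraction(sign)
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

def schur_jacobi_trudi(lam, x):
    """s_lambda(x) = det[h_{lambda_i - i + j}], 1 <= i, j <= len(lambda)."""
    n = len(lam)
    return det([[h(lam[i] - (i + 1) + (j + 1), x) for j in range(n)] for i in range(n)])

print(schur_jacobi_trudi((2, 1), (1, 2, 3)))  # 60, agreeing with the other definitions
```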
Relation with the representation theory of the symmetric group
The remarkable connection between Schur functions and the characters of the irreducible representations of the symmetric group is given by the following magical formula:

$$s_\lambda = \frac{1}{n!}\sum_{w \in S_n} \chi^\lambda(w)\, p_{\rho(w)},$$

where $\chi^\lambda(w)$ is the value of the character of the irreducible representation indexed by $\lambda$ at the element $w$, and $p_{\rho(w)}$ is the power symmetric function of the partition $\rho(w)$ associated to the cycle decomposition of $w$. For example, if $w = 21453 \in S_5$ in one-line notation, then $w = (1\,2)(3\,4\,5)$ in cycle notation, so $\rho(w) = (3,2)$ and $p_{\rho(w)} = p_3\,p_2$. Grouping the elements of $S_n$ by conjugacy class (the class of cycle type $\mu$ has $n!/z_\mu$ elements), the formula says

$$s_\lambda = \sum_{\mu \vdash n} z_\mu^{-1}\, \chi^\lambda(\mu)\, p_\mu.$$
Considering the expansion of Schur functions in terms of monomial symmetric functions using the Kostka numbers,

$$s_\lambda = \sum_\mu K_{\lambda\mu}\, m_\mu,$$

the inner product with $h_\mu$ is $\langle s_\lambda, h_\mu\rangle = K_{\lambda\mu}$, because $\langle m_\nu, h_\mu\rangle = \delta_{\nu\mu}$. Note that $K_{\lambda,(1^n)}$ is equal to $f^\lambda$, the number of standard Young tableaux of shape $\lambda$. Hence

$$\langle s_\lambda, h_{(1^n)}\rangle = f^\lambda$$

and, since $h_{(1^n)} = h_1^n = p_1^n$,

$$p_1^n = \sum_{\lambda \vdash n} f^\lambda\, s_\lambda,$$

which will be useful later.
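For example, $f^{(2,1)} = 2$. A quick Python check (a sketch, with my own function names) counts standard Young tableaux directly and compares with the hook length formula $f^\lambda = n!/\prod_c h(c)$ mentioned earlier:

```python
from itertools import permutations
from math import factorial

def count_syt(shape):
    """Count standard Young tableaux brute-force: place 1..n bijectively
    so that rows and columns strictly increase."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    n = len(cells)
    count = 0
    for values in permutations(range(1, n + 1)):
        T = dict(zip(cells, values))
        if all(T[r, c] < T[r, c + 1] for r, c in cells if (r, c + 1) in T) and \
           all(T[r, c] < T[r + 1, c] for r, c in cells if (r + 1, c) in T):
            count += 1
    return count

def hook_length_formula(shape):
    """f^lambda = n! / product of hook lengths, hook = arm + leg + 1."""
    n = sum(shape)
    prod = 1
    for r, length in enumerate(shape):
        for c in range(length):
            arm = length - c - 1
            leg = sum(1 for r2 in range(r + 1, len(shape)) if shape[r2] > c)
            prod *= arm + leg + 1
    return factorial(n) // prod

print(count_syt((2, 1)), hook_length_formula((2, 1)))  # both give 2
```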
The magical formula is equivalent to:

$$p_\mu = \sum_{\lambda \vdash n} \chi^\lambda(\mu)\, s_\lambda.$$

This gives a conceptual proof of the identity $\omega(p_\mu) = (-1)^{|\mu|-\ell(\mu)}\, p_\mu$, by comparing coefficients and taking into account that $\chi^{\lambda'}(\mu) = (-1)^{|\mu|-\ell(\mu)}\,\chi^\lambda(\mu)$, because tensoring with the sign representation gives the irreducible representation for the conjugate partition.
The expansion in terms of the power symmetric functions suggests we define the following map: the Frobenius characteristic map $F$ takes class functions on the symmetric group to symmetric functions by sending $\chi^\lambda \mapsto s_\lambda$ and extending by linearity; equivalently, $F(\chi) = \sum_{\mu \vdash n} z_\mu^{-1}\,\chi(\mu)\, p_\mu$. An important fact is that $F$ is an isometry with respect to the inner products.

Remark: $F$ does not commute with multiplication.
The formula

$$h_\mu = \sum_\lambda K_{\lambda\mu}\, s_\lambda$$

is equivalent to

$$F(\operatorname{char} M^\mu) = \sum_\lambda K_{\lambda\mu}\, F(\chi^\lambda),$$

and this also comes from representation theory. There is a module, the permutation module $M^\mu$, whose decomposition into irreducibles is given by the multiplicities $K_{\lambda\mu}$ (Young's rule), and the above equation is simply the Frobenius translation. (See Sagan.)
Let $A = \bigoplus_{d \geq 0} A_d$ be a graded $S_n$-module, with each $A_d$ finite dimensional. Define the Frobenius series of $A$ as

$$F_A(x;q) = \sum_{d \geq 0} q^d\, F(\operatorname{char} A_d),$$

where $F(\operatorname{char} A_d)$ is the image of the character of $A_d$ under the Frobenius map. And similarly, if $A = \bigoplus_{i,j \geq 0} A_{i,j}$ is a doubly graded $S_n$-module, with each $A_{i,j}$ finite dimensional, define the Frobenius series of $A$ as

$$F_A(x;q,t) = \sum_{i,j \geq 0} q^i t^j\, F(\operatorname{char} A_{i,j}).$$
It is clear that Frobenius series expand positively in terms of Schur functions, because the Schur coefficients come from the multiplicities (obviously positive) of the irreducibles in each graded piece. The proof of the positivity conjecture for Macdonald polynomials consists of finding a module whose Frobenius series is the desired symmetric function.
Characters of the General Linear group
Another way of thinking about Schur functions is as the characters of the irreducible polynomial representations of $GL_n$. Let's go through a simple example.

The first nontrivial representation of $GL_2$ that comes to mind is $GL_2$ itself, via the natural action on $\mathbb{C}^2$; call this representation $V$. The character is just the trace which, as a function of the eigenvalues, is equal to $x_1 + x_2$. What happens if we tensor $V \otimes V$? The character gets squared and we have the identity

$$(x_1 + x_2)^2 = s_{(2)}(x_1,x_2) + s_{(1,1)}(x_1,x_2).$$

Since decomposing the characters gives the information needed to decompose the representation, the above identity says that $V \otimes V$ decomposes into two irreducibles, one corresponding to the partition $(2)$ and the other to the partition $(1,1)$. These are the symmetric and antisymmetric parts, respectively.
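The identity can be checked numerically. A small Python sketch, evaluating $s_{(2)} = x_1^2 + x_1x_2 + x_2^2$ and $s_{(1,1)} = x_1x_2$ (these expressions follow from the tableau definition) at arbitrary sample points:

```python
def s2(x1, x2):
    # s_(2)(x1, x2): sum over the SSYT of a 2-cell row, namely 11, 12, 22
    return x1**2 + x1*x2 + x2**2

def s11(x1, x2):
    # s_(1,1)(x1, x2): the single SSYT of a 2-cell column is 1 over 2
    return x1 * x2

# The trace of g on V (x) V is (x1 + x2)^2, where x1, x2 are the
# eigenvalues of g; compare with s_(2) + s_(1,1) at several points.
for x1, x2 in [(1, 2), (3, 5), (2, 7)]:
    assert (x1 + x2)**2 == s2(x1, x2) + s11(x1, x2)
```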
More generally, consider $GL_k$ and its defining representation $V$ given by the natural action on $\mathbb{C}^k$. If we want to decompose $V^{\otimes n}$ into irreducibles, we need to write $(x_1 + \cdots + x_k)^n = p_1^n$ in terms of Schur functions. The remarkable formula, at the crossroads of symmetric functions, representation theory and combinatorics, is

$$p_1^n = \sum_{\lambda \vdash n} f^\lambda\, s_\lambda,$$

which is the expansion of $p_1^n$ in terms of Schur functions using the coefficients given by the inner product, because $\langle p_1^n, s_\lambda\rangle = K_{\lambda,(1^n)}$ and $K_{\lambda,(1^n)} = f^\lambda$. The above equality can also be proven by checking the coefficients of each monomial on both sides and using the Robinson–Schensted–Knuth correspondence. For a more detailed analysis of the decomposition of $V^{\otimes n}$ see Schur–Weyl duality.
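For $n = 3$ the identity reads $p_1^3 = s_{(3)} + 2\,s_{(2,1)} + s_{(1,1,1)}$, which a short Python sketch (evaluating Schur functions by the tableau definition, with names of my own choosing) confirms at a sample point:

```python
from itertools import product

def schur_eval(shape, x):
    """Evaluate s_shape(x) from the tableau definition by brute force."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    n = len(x)
    total = 0
    for values in product(range(n), repeat=len(cells)):
        T = dict(zip(cells, values))
        if all(T[r, c] <= T[r, c + 1] for r, c in cells if (r, c + 1) in T) and \
           all(T[r, c] < T[r + 1, c] for r, c in cells if (r + 1, c) in T):
            term = 1
            for v in T.values():
                term *= x[v]
            total += term
    return total

# p_1^3 = f^(3) s_3 + f^(2,1) s_21 + f^(1,1,1) s_111 = s_3 + 2 s_21 + s_111
x = (1, 2, 3)
lhs = (x[0] + x[1] + x[2]) ** 3
rhs = schur_eval((3,), x) + 2 * schur_eval((2, 1), x) + schur_eval((1, 1, 1), x)
assert lhs == rhs  # 216 on both sides
```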
In this context we can express the Schur functions by using Weyl's character formula

$$s_\lambda(x_1,\dots,x_k) = \sum_{w \in S_k} w\left( x^\lambda \prod_{i<j} \frac{1}{1 - x_j/x_i} \right),$$

which is equivalent to the ratio of determinants.
For a set of variables $x = (x_1, x_2, \dots)$ define

$$\Omega(x) = \prod_i \frac{1}{1-x_i}.$$

Now define the Bernstein operators $S_m^0$ on symmetric functions as

$$S_m^0(f) = [u^m]\, f(x - u^{-1})\, \Omega(ux).$$

In words this means: we have some variables $x$, and we add one more variable $u$, so $\Omega(ux) = \prod_i \frac{1}{1-ux_i}$. The expression $f(x - u^{-1})$ is a subtle business (a plethystic substitution), but it can be thought of as adding the variable $-u^{-1}$ to the set. So $f(x - u^{-1})\,\Omega(ux)$ is an expression containing the variables $x$ and $u$, and $[u^m]$ takes the coefficient of $u^m$ in the big mess.
The following theorem is fundamental, not by itself, but because a lot of the theory is developed by deforming this operator; while the proofs are different for the other cases, there is a pattern to them, so let's describe this one to get a flavor of what's going on.

Theorem. The Bernstein operators add a part to the indexing of the Schur function; that is, for $m \geq \lambda_1$ we have

$$S_m^0\, s_\lambda = s_{(m,\lambda)}.$$
Sketch of proof
We need the following ingredients. First, by partial fraction expansion,

$$\Omega(ux) = \prod_i \frac{1}{1-ux_i} = \sum_i \frac{1}{1-ux_i}\prod_{j\neq i}\frac{1}{1-x_j/x_i}.$$

The second ingredient is Weyl's character formula

$$s_\lambda(x) = \sum_{w \in S_k} w\left( x^\lambda \prod_{i<j} \frac{1}{1 - x_j/x_i} \right).$$

Third, by expanding, it is easy to check that for a polynomial $f$

$$[u^m]\, f(u^{-1})\, \frac{1}{1-uz} = z^m f(z).$$
Now we're ready to stir the ingredients. First let's mix the first one with the definition, so for any $f$:

$$[u^m] f(x-u^{-1})\Omega(ux) = [u^m] f(x-u^{-1}) \sum_i \frac{1}{1-ux_i}\prod_{j\neq i}\frac{1}{1-x_j/x_i} = \sum_i [u^m] f(x-u^{-1})\frac{1}{1-ux_i}\prod_{j\neq i}\frac{1}{1-x_j/x_i},$$

because $[u^m]$ is a linear operator, so it distributes over the sum. Furthermore, considering $f(x - u^{-1})$ as a polynomial in $u^{-1}$, we can use the third ingredient, with $x_i$ playing the role of $z$, to get

$$S_m^0(f) = \sum_i x_i^m\, f(x - x_i)\prod_{j\neq i}\frac{1}{1-x_j/x_i}.$$

By the virtues of the plethystic substitution, $f(x - x_i)$ is the same thing as evaluating in the other variables, i.e., taking $x_i$ out. Now specialize to the Schur functions and consider the second ingredient, Weyl's formula; the theorem then follows easily by induction, because the factor

$$x_i^m \prod_{j\neq i}\frac{1}{1-x_j/x_i}$$

is the same as the one appearing in the terms of the formula for $s_{(m,\lambda)}$ in which the permutation sends $i$ to the first position.
This gives our final definition of Schur functions:

$$s_{(\lambda_1,\dots,\lambda_k)} = S_{\lambda_1}^0 S_{\lambda_2}^0 \cdots S_{\lambda_k}^0\, (1).$$

The Schur functions are a basis for the symmetric functions with the following properties:

1. Lower unitriangularity with respect to monomials
2. Orthogonality

The Kostka numbers $K_{\lambda\mu}$ have two interpretations, a combinatorial and an algebraic one. These properties are important to keep in mind while generalizing with one or two parameters.
Hall-Littlewood Polynomials
We know the Schur basis, and many more, for the ring of symmetric functions over a field such as $\mathbb{Q}$. The next step of generalization is to consider the field $\mathbb{Q}(t)$ and to twist the inner product a little bit. In contrast with Macdonald polynomials, we can give a closed expression for the Hall-Littlewood polynomials.
Straight definition and first properties
First we need the following $t$-analogues:

$$[k]_t := \frac{1-t^k}{1-t} = 1 + t + t^2 + \cdots + t^{k-1},$$

$$[k]_t! := [k]_t\,[k-1]_t \cdots [1]_t.$$

Then the Hall-Littlewood polynomial $P_\lambda(x;t)$ in $n$ variables is given by the following formula:

$$P_\lambda(x;t) = \frac{1}{\prod_{i\geq 0}[\alpha_i]_t!}\sum_{w\in S_n} w\left(x^\lambda \prod_{i<j}\frac{1-t\,x_j/x_i}{1-x_j/x_i}\right),$$

where $x^\lambda = x_1^{\lambda_1}\cdots x_n^{\lambda_n}$ and $\alpha_i$ is the number of parts of $\lambda$ equal to $i$ (counting the zero parts, so $\alpha_0 = n - \ell(\lambda)$).
Note that when $t = 0$ all the $t$-dependent factors go away and we get precisely Weyl's character formula for the Schur functions, so

$$P_\lambda(x;0) = s_\lambda(x).$$

At $t = 1$ the products inside cancel and we get the usual monomial functions:

$$P_\lambda(x;1) = m_\lambda(x).$$
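Both specializations are easy to verify numerically. Below is a Python sketch of the symmetrization formula over exact rationals (my own names; valid at points with distinct coordinates, so no denominator vanishes):

```python
from fractions import Fraction
from itertools import permutations

def t_factorial(k, t):
    """[k]_t! = [k]_t [k-1]_t ... [1]_t with [j]_t = 1 + t + ... + t^(j-1)."""
    result = Fraction(1)
    for j in range(1, k + 1):
        result *= sum(t**i for i in range(j))
    return result

def hall_littlewood_P(lam, x, t):
    """P_lambda(x; t) by the symmetrization formula, for distinct x_i."""
    n = len(x)
    lam = list(lam) + [0] * (n - len(lam))        # zero parts included
    x = [Fraction(v) for v in x]
    t = Fraction(t)
    total = Fraction(0)
    for w in permutations(range(n)):
        y = [x[i] for i in w]                     # w acts by permuting variables
        term = Fraction(1)
        for i in range(n):
            term *= y[i] ** lam[i]                # the monomial x^lambda
        for i in range(n):
            for j in range(i + 1, n):
                term *= (1 - t * y[j] / y[i]) / (1 - y[j] / y[i])
        total += term
    # normalization: alpha_i = multiplicity of i among the parts (zeros included)
    norm = Fraction(1)
    for i in set(lam):
        norm *= t_factorial(lam.count(i), t)
    return total / norm

# t = 0 recovers the Schur function: s_(2,1)(1,2,3) = 60
print(hall_littlewood_P((2, 1), (1, 2, 3), 0))
# t = 1 recovers the monomial symmetric function: m_(2,1)(1,2,3) = 48
print(hall_littlewood_P((2, 1), (1, 2, 3), 1))
```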
The Hall-Littlewood polynomials form a basis, so we can expand the Schur functions in this new basis. The Kostka-Foulkes polynomials $K_{\lambda\mu}(t)$ are defined by

$$s_\lambda(x) = \sum_\mu K_{\lambda\mu}(t)\, P_\mu(x;t).$$

They don't deserve the name "polynomials" yet, because so far we only know that they are rational functions in $t$. But we will see why they are actual polynomials.
Definition with raising operators
Define the Jing operators as $t$-deformations of the Bernstein operators (this is part of a general recipe to $t$-deform something; see Zabrocki) in the following way:

$$S_m^t f = [u^m]\, f[X + (t-1)u^{-1}]\,\Omega[uX],$$

and their modified version

$$\tilde S_m^t f = [u^m]\, f[X - u^{-1}]\,\Omega[(1-t)uX],$$

which are related by

$$\tilde S_m^t = \theta\, S_m^t\, \theta^{-1},$$

where $\theta$ is the operator given by the plethystic substitution $\theta f = f[(1-t)X]$, and $\theta^{-1}$ is its inverse, namely $\theta^{-1} f = f[X/(1-t)]$.
Analogously to the Schur functions, we now define the transformed Hall-Littlewood polynomials as

$$H_\mu(x;t) = S_{\mu_1}^t S_{\mu_2}^t \cdots S_{\mu_k}^t\, (1).$$

And if we set $t = 0$ we get

$$H_\mu(x;0) = s_\mu(x).$$
Recall that the Bernstein operators added one part to a partition. These new operators behave in a more complicated way, but in a similar spirit.
Theorem (Jing operators). If $\lambda$ is a partition and $m \geq \lambda_1$, then

$$S_m^t s_\lambda \in \mathbb{Z}[t]\{s_\gamma : \gamma \geq (m,\lambda)\}.$$

Moreover, $s_{(m,\lambda)}$ appears with coefficient 1.

The last part is saying something similar to the previous situation: we still get the Schur function with an additional part $m$ added, but the theorem says that we also get polynomial combinations of other Schur functions.
By repeated use of the theorem we can conclude that

$$H_\mu(x;t) = \sum_{\lambda \geq \mu} c_{\lambda\mu}(t)\, s_\lambda,$$

where the $c_{\lambda\mu}(t)$ are polynomials with $c_{\mu\mu}(t) = 1$. That means that we have upper unitriangularity with respect to the Schur basis.
We have analogous statements for $Q_\mu := \tilde S_{\mu_1}^t \cdots \tilde S_{\mu_k}^t\,(1)$ (although with a different proof!).

Theorem (modified Jing operators). If $\lambda$ is a partition and $m \geq \lambda_1$, then

$$\tilde S_m^t s_\lambda \in \mathbb{Z}[t]\{s_\gamma : \gamma \leq (m,\lambda)\}.$$

Moreover, $s_{(m,\lambda)}$ appears with coefficient $1 - t^a$, where $a$ is the multiplicity of $m$ as a part of $(m,\lambda)$.
Again, by repeated use of the theorem we can conclude that

$$Q_\mu(x;t) = \sum_{\lambda \leq \mu} d_{\lambda\mu}(t)\, s_\lambda,$$

where the $d_{\lambda\mu}(t)$ are polynomials with $d_{\mu\mu}(t) = \prod_{i \geq 1}\prod_{k=1}^{\alpha_i}(1-t^k)$. That means that we have lower triangularity (but with messier diagonal elements) with respect to the Schur basis.
The plethystic substitution $f \mapsto f[(1-t)X]$ is self-adjoint for the inner product, i.e., we have

$$\langle f, g[(1-t)X]\rangle = \langle f[(1-t)X], g\rangle.$$
By the opposite triangularities of the $H$'s and the $Q$'s, if $\lambda \not\leq \mu$ then $\langle H_\lambda, Q_\mu\rangle = 0$, since their Schur expansions share no common term. Note that $Q_\mu = H_\mu[(1-t)X]$, because $\tilde S_m^t = \theta S_m^t \theta^{-1}$ and $\theta(1) = 1$. Passing the substitution to the other side, $\langle H_\lambda, Q_\mu\rangle = \langle Q_\lambda, H_\mu\rangle$, and we obtain the opposite conclusion: the product also vanishes if $\mu \not\leq \lambda$. Hence $\langle H_\lambda, H_\mu[(1-t)X]\rangle = 0$ whenever $\lambda \neq \mu$, which implies the following claim:

The transformed Hall-Littlewood polynomials are orthogonal with respect to the inner product $\langle f, g[(1-t)X]\rangle$, and their self inner products are given by

$$\langle H_\mu, H_\mu[(1-t)X]\rangle = (1-t)^{\ell(\mu)}\prod_{i\geq 1}[\alpha_i]_t!.$$
Now everything fits smoothly
Really. First, from the definition of $Q_\lambda$ one can get the following formula by induction:

$$Q_\lambda(x;t) = \frac{(1-t)^{\ell(\lambda)}}{[\,n-\ell(\lambda)\,]_t!}\sum_{w\in S_n} w\left(x^\lambda \prod_{i<j}\frac{1-t\,x_j/x_i}{1-x_j/x_i}\right).$$

The relation with the original Hall-Littlewood polynomials is

$$P_\lambda(x;t) = \frac{Q_\lambda(x;t)}{(1-t)^{\ell(\lambda)}\prod_{i\geq 1}[\alpha_i]_t!}.$$

Note that the denominator is precisely the self inner product $\langle H_\lambda, H_\lambda[(1-t)X]\rangle$ computed above.
Classically one defines something a bit different:

$$\langle f, g\rangle_t = \langle f, g[X/(1-t)]\rangle.$$

In this product the bases $P_\lambda$ and $Q_\lambda$ are orthogonal and, furthermore, they are dual! So recall that we defined the Kostka-Foulkes polynomials by

$$s_\lambda = \sum_\mu K_{\lambda\mu}(t)\, P_\mu(x;t).$$

By taking inner products and using the duality just mentioned, we arrive at

$$K_{\lambda\mu}(t) = \langle s_\lambda, Q_\mu\rangle_t = \langle s_\lambda, Q_\mu[X/(1-t)]\rangle = \langle s_\lambda, H_\mu\rangle.$$

But that last inner product is the coefficient of $s_\lambda$ in the Schur expansion of $H_\mu$, which we have already seen is a polynomial, showing that the Kostka-Foulkes polynomials are in fact polynomials.
Positivity of Kostka-Foulkes polynomials
It turns out that they are not just integer polynomials: their coefficients are positive. It may not sound very interesting to show that a quantity is positive, but usually the question is implicitly asking for an interpretation. There are many different approaches here, all far from trivial. Let's review them.
Deep representation theory
The work of Hotta, Lusztig, and Springer showed deep connections with representation theory. I cannot say more than a few words (that I don't even understand): they relate the Kostka-Foulkes polynomials, and a variation of them called the cocharge Kostka-Foulkes polynomials, to some hardcore math where the keywords are unipotent characters, local intersection homology, Springer fibers and perverse sheaves.

For now, the important thing is that they found a ring, the cohomology ring of the Springer fiber, whose Frobenius series is given by the cocharge transformed Hall-Littlewood polynomials, implying that they expand positively in terms of Schur functions.
Combinatorics of Tableaux
Lascoux and Schützenberger proved the following simple and elegant formula, which gives a concrete meaning to each coefficient:

$$K_{\lambda\mu}(t) = \sum_T t^{\operatorname{charge}(T)},$$

where the sum is over all SSYT of shape $\lambda$ and content $\mu$. The new ingredient is the charge statistic $\operatorname{charge}(T)$, which is easier to define in terms of the cocharge $\operatorname{cocharge}(T)$, an invariant characterized by:

1. Cocharge is invariant under jeu-de-taquin slides.
2. Suppose the shape of $T$ is disconnected, say $T = S \sqcup R$ with $S$ above and to the left of $R$, and no entry of $S$ is equal to 1. Then $T'$, obtained by swapping the positions of the two pieces, has $\operatorname{cocharge}(T') = \operatorname{cocharge}(T) - |S|$.
3. If $T$ is a single row, then $\operatorname{cocharge}(T) = 0$.

And then $\operatorname{charge}(T) = n(\mu) - \operatorname{cocharge}(T)$, where $n(\mu) = \sum_i (i-1)\mu_i$. The existence of such an invariant requires proof. There is a process to compute the cocharge called catabolism.
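The charge statistic is easy to experiment with for standard content $\mu = (1^n)$ (for general content one restricts to standard subwords, which we skip here). The conventions below — reading word taken bottom row first, and the usual indexing rule for charge — are one common choice, and the code is a sketch with my own names:

```python
from itertools import permutations

def syt_list(shape):
    """All standard Young tableaux of a shape, brute force."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    out = []
    for values in permutations(range(1, len(cells) + 1)):
        T = dict(zip(cells, values))
        if all(T[r, c] < T[r, c + 1] for r, c in cells if (r, c + 1) in T) and \
           all(T[r, c] < T[r + 1, c] for r, c in cells if (r + 1, c) in T):
            out.append(T)
    return out

def reading_word(T, shape):
    """Read rows left to right, bottom row first."""
    word = []
    for r in range(len(shape) - 1, -1, -1):
        word.extend(T[r, c] for c in range(shape[r]))
    return word

def charge(word):
    """Charge of a standard word: 1 gets index 0; r+1 gets index(r) + 1
    if r+1 sits to the right of r, else index(r). Charge = sum of indices."""
    pos = {v: i for i, v in enumerate(word)}
    index = {1: 0}
    for r in range(1, len(word)):
        index[r + 1] = index[r] + (1 if pos[r + 1] > pos[r] else 0)
    return sum(index.values())

# K_{(2,1),(1,1,1)}(t) = t + t^2: the two standard tableaux have charges 1 and 2
charges = sorted(charge(reading_word(T, (2, 1))) for T in syt_list((2, 1)))
print(charges)  # [1, 2]
```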
Alternative description using tableaux
Kirillov and Reshetikhin gave the following formula:

$$K_{\lambda\mu}(t)=\sum_{\upsilon}t^{\,n(\mu)-\sum_j \mu'_j \upsilon_j^1+\sum_{j,k}\upsilon_j^k(\upsilon_j^k-\upsilon_j^{k+1})}\prod \frac{[\,p_j^k+\upsilon_j^k-\upsilon_{j+1}^k\,]_t}{[\,\upsilon_j^k-\upsilon_{j+1}^k\,]_t\,[\,p_j^k\,]_t},$$

where the sum is over all $(\lambda,\mu)$-admissible configurations $\upsilon$. While nasty, this expression has clearly positive coefficients. The formula originates in a technique from mathematical physics known as the Bethe ansatz, which is used to produce highest weight vectors for some tensor products. The theorem relates $K_{\lambda\mu}(t)$ to the enumeration of highest weight vectors in a tensor product by a quantum number. For more info, stay tuned; probably Anne has something to say about it in class.
Commutative Algebra
This may be the least technical approach. Garsia and Procesi simplified the first proof by giving a down-to-earth interpretation of the cohomology ring $R_\mu$ of the Springer fiber. Now the action happens inside the polynomial ring $\mathbb{C}[x_1,\dots,x_n]$, and

$$R_\mu = \mathbb{C}[x_1,\dots,x_n]/I_\mu$$

for an ideal $I_\mu$ with a relatively explicit description. They manage to give generators, and finally they prove with more elementary methods that the Frobenius series is given by the cocharge statistic:

$$F_{R_\mu}(x;t) = \sum_\lambda \tilde K_{\lambda\mu}(t)\, s_\lambda,$$

where $\tilde K_{\lambda\mu}(t) = t^{n(\mu)}K_{\lambda\mu}(t^{-1})$ is the cocharge Kostka-Foulkes polynomial.