Polynomial matrix spectral factorization
Polynomial matrices are widely studied in the fields of systems theory and control theory and have seen other uses relating to stable polynomials. In stability theory, spectral factorization has been used to find determinantal matrix representations for bivariate stable polynomials and real zero polynomials[1]. A key tool used to study these is a matrix factorization known as either the polynomial matrix spectral factorization or the matrix Fejér–Riesz theorem.
Given a univariate positive polynomial $p(t)$, i.e. a polynomial which takes on positive values $p(t) > 0$ for any real input $t$, the Fejér–Riesz theorem yields the polynomial spectral factorization $p(t) = q(t)\overline{q(t)}$. Results of this form are generically referred to as Positivstellensatz results.
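As a concrete sketch of the scalar factorization (a hypothetical numerical example, not taken from the sources), the positive polynomial $p(t) = t^2 + 1$ has the conjugate pair of roots $\pm i$, and choosing the root in the upper half plane gives a factor $q$ with no zeros in the lower half plane:

```python
# Hypothetical example: p(t) = t^2 + 1 > 0 for all real t, with roots +i and -i.
# Choosing q(t) = t - i (the root in the upper half plane) gives a spectral
# factor that is nonzero on the lower half plane.
def p(t):
    return t * t + 1

def q(t):
    return t - 1j

# Verify p(t) = q(t) * conj(q(t)) at several real points.
for t in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    assert abs(p(t) - q(t) * q(t).conjugate()) < 1e-12
```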
Polynomial matrices often show up in contexts where, viewed as functions from the reals to scalar matrices, they take only positive definite values. Such a matrix should be viewed as the matrix analogue of a real coefficient polynomial that takes on only nonnegative values for real inputs. The polynomial matrix spectral factorization, also known as the matrix-valued Fejér–Riesz theorem, yields a decomposition for such polynomial matrices which is simultaneously the polynomial analogue of the Cholesky decomposition for scalar matrices and the matrix analogue of the spectral factorization of a nonnegative real coefficient univariate polynomial. This result was originally proven by Wiener[2] in a more general setting concerned with integrable matrix-valued functions whose log determinant is also integrable. Because applications are often concerned with the polynomial restriction, simpler proofs of this specific case have since appeared. This article follows the recent proof method of Lasha Ephremidze[3], which relies only on elementary linear algebra and complex analysis.
Spectral factorization is used extensively in linear–quadratic–Gaussian control. Because of this application there have been many algorithms to calculate spectral factors[4]. Some modern algorithms focus on the more general setting originally studied by Wiener[5]. In the scalar case the problem is known as polynomial spectral factorization, or the Fejér–Riesz theorem, and has many classical algorithms as well as some modern algorithms with improvements[6].
Statement
Let $P(t)$ be an $N \times N$ polynomial matrix where each entry is a complex coefficient polynomial of degree at most $2m$. Suppose that $P(t)$ is a positive definite Hermitian matrix for all $t \in \mathbb{R}$. Then there exists an $N \times N$ polynomial matrix $Q(t)$ such that $P(t) = Q(t)Q(t)^*$ for all $t \in \mathbb{R}$, where $Q(t)^*$ denotes the conjugate transpose. We can furthermore find $Q(t)$ which is nonsingular on the lower half plane.
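A minimal $2 \times 2$ illustration of the statement (a hypothetical example constructed for this sketch): taking $Q(t) = \begin{pmatrix} t-i & 0 \\ 1 & t-i \end{pmatrix}$, whose determinant $(t-i)^2$ vanishes only at $+i$, the product $Q(t)Q(t)^*$ is the positive definite Hermitian polynomial matrix $P(t) = \begin{pmatrix} t^2+1 & t-i \\ t+i & t^2+2 \end{pmatrix}$ for real $t$:

```python
# Hypothetical 2x2 example: Q(t) = [[t-i, 0], [1, t-i]] is nonsingular on the
# lower half plane (det Q = (t-i)^2 vanishes only at +i), and
# P(t) = Q(t) Q(t)^* = [[t^2+1, t-i], [t+i, t^2+2]] for real t.
def Q(t):
    return [[t - 1j, 0], [1, t - 1j]]

def P(t):
    return [[t * t + 1, t - 1j], [t + 1j, t * t + 2]]

def times_conj_transpose(A):
    """Compute A @ A^* (conjugate transpose) for a square matrix as lists."""
    n = len(A)
    return [[sum(A[i][k] * A[j][k].conjugate() for k in range(n))
             for j in range(n)] for i in range(n)]

for t in [-2.0, 0.0, 1.5]:
    QQs = times_conj_transpose(Q(t))
    Pt = P(t)
    assert all(abs(QQs[i][j] - Pt[i][j]) < 1e-12
               for i in range(2) for j in range(2))
```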
Extension to Complex Inputs
Note that if $p(t) = \sum_k a_k t^k$ is a complex coefficient polynomial, then $\bar{p}(t) := \overline{p(\bar{t})} = \sum_k \overline{a_k}\, t^k$ is also a polynomial; likewise $\bar{p}$ is a rational function when $p$ is rational. For $t \in \mathbb{R}$ we have $\overline{p(t)} = \bar{p}(t)$, so on the real line $P(t) = Q(t)Q(t)^*$ can be rewritten as $P(t) = Q(t)\bar{Q}(t)^T$, where $\bar{Q}$ is obtained by applying $p \mapsto \bar{p}$ entrywise. Since the entries of $P(z)$ and $Q(z)\bar{Q}(z)^T$ are complex polynomials which agree on the real line, they are in fact the same polynomials. We can conclude they agree for all complex inputs: $P(z) = Q(z)\bar{Q}(z)^T$ for all $z \in \mathbb{C}$.
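The key identity, that the coefficient-conjugated polynomial $\bar{p}$ agrees with $\overline{p(t)}$ precisely on real inputs, can be checked numerically (a toy polynomial chosen for illustration):

```python
# Hypothetical check: for pbar with conjugated coefficients,
# pbar(t) = conj(p(conj(t))) is again a polynomial, and
# pbar(t) == conj(p(t)) whenever t is real.
coeffs = [1 + 2j, -3j, 0.5]                    # p(t) = (1+2j)t^2 - 3j t + 0.5
conj_coeffs = [c.conjugate() for c in coeffs]  # coefficients of pbar

def horner(cs, t):
    """Evaluate a polynomial given coefficients in descending degree order."""
    acc = 0
    for c in cs:
        acc = acc * t + c
    return acc

for t in [-1.0, 0.0, 2.5]:  # real inputs: the two sides agree
    assert abs(horner(conj_coeffs, t) - horner(coeffs, t).conjugate()) < 1e-12
```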
Polynomial (and Rational) Spectral Factorization
Cholesky Decomposition
The inspiration for this result is a factorization which characterizes positive definite matrices.
Decomposition for Scalar Matrices
Given any positive definite scalar matrix $A$, the Cholesky decomposition guarantees $A = LL^*$ where $L$ is a lower triangular matrix. If we don't restrict to lower triangular matrices we can consider all factorizations of the form $A = MM^*$. It is not hard to check that all such factorizations are achieved by looking at the orbit of $L$ under right multiplication by a unitary matrix, $M = LU$.
To obtain the lower triangular decomposition we induct by splitting off the first row and first column:
$$\begin{pmatrix} a_{11} & A_{21}^* \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} l_{11} & 0 \\ L_{21} & L_{22} \end{pmatrix} \begin{pmatrix} \overline{l_{11}} & L_{21}^* \\ 0 & L_{22}^* \end{pmatrix}$$
Solving these in terms of the blocks of $A$ we get
$$l_{11} = \sqrt{a_{11}}, \qquad L_{21} = \frac{1}{l_{11}} A_{21}, \qquad L_{22}L_{22}^* = A_{22} - \frac{1}{a_{11}} A_{21}A_{21}^*.$$
Since $A$ is positive definite, $a_{11}$ is a positive real number, so it has a square root $l_{11}$. The last condition follows from induction, since the right hand side $A_{22} - \frac{1}{a_{11}} A_{21}A_{21}^*$ is the Schur complement of $a_{11}$ in $A$, which is itself positive definite.
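The block elimination above can be sketched numerically on a toy $2 \times 2$ Hermitian positive definite matrix (values chosen for illustration, not from the source):

```python
# Hypothetical 2x2 numeric sketch of the block step: A = [[a11, conj(a21)],
# [a21, a22]] with l11 = sqrt(a11), L21 = a21 / l11, and the Schur complement
# a22 - |a21|^2 / a11 left for the inductive step.
a11, a12 = 4.0, 2 + 1j              # Hermitian: a21 = conj(a12); det(A) = 7 > 0
a21, a22 = a12.conjugate(), 3.0

l11 = a11 ** 0.5                    # a11 > 0 since A is positive definite
L21 = a21 / l11
schur = a22 - a21 * a21.conjugate() / a11   # Schur complement, positive real
l22 = schur ** 0.5

# L = [[l11, 0], [L21, l22]]; verify A = L L^* entrywise.
assert abs(l11 * l11 - a11) < 1e-12
assert abs(L21 * l11 - a21) < 1e-12
assert abs(L21 * L21.conjugate() + l22 * l22 - a22) < 1e-12
```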
Decomposition for Rational Matrices
Now consider $P(t) = \left(p_{ij}(t)\right)$ where the $p_{ij}(t)$ are complex rational functions and $P(t)$ is positive definite Hermitian for almost all real $t$. Then by the symmetric Gaussian elimination we performed above, all we need to show is that there exists a rational $q_{11}(t)$ such that $q_{11}(t)\overline{q_{11}(t)} = p_{11}(t)$ for real $t$. Once we have that, we can solve for the first column $Q_{21}(t) = \frac{1}{q_{11}(t)} P_{21}(t)$. Since the Schur complement $P_{22} - \frac{1}{p_{11}} P_{21}P_{21}^*$ is positive definite for real $t$ away from the poles, and the Schur complement is itself a rational matrix, we can induct to find the remaining lower triangular rational factor.
Square-Roots of Positive Rational Functions
To prove that there exists a rational $q(t)$ with $q(t)\overline{q(t)} = p(t)$, we write $p(t) = \frac{a(t)}{b(t)}$ where $a(t), b(t)$ are coprime polynomials. Since $P(t)$ is positive definite Hermitian for almost all real $t$, $p(t)$ is positive real for almost all real $t$. Letting $f(t) = p(t)\, b(t)\overline{b(t)} = a(t)\overline{b(t)}$, we can conclude that $f(t)$ is real and positive for almost all real $t$. The numerator and denominator have distinct sets of roots, so all real roots which show up in either must have even multiplicity (otherwise $p$ would change sign near such a root). We can divide out these real roots to reduce to the case where $p$ has only complex roots and poles. By hypothesis we have $\bar{p} = p$. Since all of the roots and poles $\alpha_i$ are complex (and hence not fixed points of conjugation), they both come in conjugate pairs $(\alpha_i, \overline{\alpha_i})$. Collecting one member of each pair, say those in the upper half plane, into $q(t)$, together with a positive square root of the leading coefficient, yields $q(t)\overline{q(t)} = p(t)$ for real $t$.
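The argument can be illustrated with a hypothetical rational example: $p(t) = \frac{t^2+1}{t^2+4}$ is positive on the real line, its roots $\pm i$ and poles $\pm 2i$ come in conjugate pairs, and collecting the upper-half-plane root and pole gives the rational square root $q(t) = \frac{t-i}{t-2i}$:

```python
# Hypothetical example: p(t) = (t^2+1)/(t^2+4) > 0 for real t, with roots +-i
# and poles +-2i. Taking the upper-half-plane members, q(t) = (t-i)/(t-2i),
# gives p(t) = q(t) * conj(q(t)) on the real line.
def p(t):
    return (t * t + 1) / (t * t + 4)

def q(t):
    return (t - 1j) / (t - 2j)

for t in [-3.0, -1.0, 0.0, 2.0]:
    assert abs(p(t) - q(t) * q(t).conjugate()) < 1e-12
```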
Extension to Polynomial Decompositions
To prove the existence of the polynomial matrix spectral factorization, we begin with the rational Cholesky-type decomposition above and modify it to remove the poles. Namely, given $P(t)$ with each entry a complex coefficient polynomial, we have a lower triangular rational matrix $L(t)$ with $P(t) = L(t)L(t)^*$ for real $t$. Given any rational matrix $U(t)$ which is unitary valued for real $t$, there exists another decomposition $P(t) = \big(L(t)U(t)\big)\big(L(t)U(t)\big)^*$.
Removing Lower Half Plane Poles
If any entry of $L(t)$ has a pole at $a$ in the lower half plane, we multiply the corresponding column by
$$\frac{t - a}{t - \bar{a}}.$$
For real $t$ this factor has modulus one, since the numerator and denominator are complex conjugates of each other; the pole at $a$ is cancelled, at the cost of a new pole at $\bar{a}$ in the upper half plane.
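That such a factor is unimodular on the real line is immediate to verify (a toy pole location chosen for illustration):

```python
# Hypothetical check: for a in the lower half plane, the factor
# b(t) = (t - a) / (t - conj(a)) satisfies |b(t)| = 1 for real t,
# since the numerator and denominator are complex conjugates there.
a = 1 - 2j  # pole in the lower half plane

def b(t):
    return (t - a) / (t - a.conjugate())

for t in [-5.0, 0.0, 1.0, 4.0]:
    assert abs(abs(b(t)) - 1) < 1e-12
```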
Removing Lower Half Plane Singularities
If $\det Q(a) = 0$ for some $a$ in the lower half plane, then there exists a scalar unitary matrix $U$ such that the first column of $Q(t)U$ vanishes at $t = a$. To remove the singularity at $a$ we multiply by
$$\begin{pmatrix} \frac{t - \bar{a}}{t - a} & 0 \\ 0 & I \end{pmatrix},$$
which is unitary valued for real $t$. The result has determinant with one less zero (by multiplicity) at $a$, without introducing any poles in the lower half plane in any of the entries.
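A hypothetical $2 \times 2$ instance of this step: $Q(t) = \begin{pmatrix} t+i & 1 \\ t+i & t \end{pmatrix}$ has $\det Q(t) = (t+i)(t-1)$, vanishing at $a = -i$ in the lower half plane, and its first column already vanishes at $a$. Multiplying that column by $\frac{t - \bar{a}}{t - a} = \frac{t-i}{t+i}$ leaves every entry polynomial and moves the determinant zero to $+i$:

```python
# Hypothetical illustration: with a = -i, the first column (t+i, t+i) of Q(t)
# vanishes at a. Multiplying it by (t - conj(a))/(t - a) = (t-i)/(t+i)
# simplifies each entry to the polynomial t - i, and the multiplier is
# unimodular on the real line, so Q(t) Q(t)^* is unchanged there.
a = -1j

def col_factor(t):
    return (t - a.conjugate()) / (t - a)   # = (t - i)/(t + i)

def new_first_col(t):
    return [(t + 1j) * col_factor(t), (t + 1j) * col_factor(t)]

for t in [-2.0, 0.5, 3.0]:
    c = new_first_col(t)
    # entries simplify to the polynomial t - i:
    assert abs(c[0] - (t - 1j)) < 1e-12 and abs(c[1] - (t - 1j)) < 1e-12
    # unimodular multiplier on the real line:
    assert abs(abs(col_factor(t)) - 1) < 1e-12
```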
Extend Analyticity to all of $\mathbb{C}$
After these modifications, the decomposition $P(t) = Q(t)Q(t)^*$ satisfies: $Q(t)$ is analytic and invertible on the lower half plane. From our observation in Extension to Complex Inputs, we have
$$P(z) = Q(z)\bar{Q}(z)^T$$
for all complex $z$. This implies $\bar{Q}(z)^T = Q(z)^{-1}P(z)$. Since $\det Q(z)$ has no zeroes in the lower half plane, by the adjugate matrix formula $Q(z)^{-1} = \frac{1}{\det Q(z)} \operatorname{adj}\big(Q(z)\big)$ is analytic in the lower half plane. Thus the right hand side is analytic on the lower half plane, which implies $\bar{Q}(z)^T$, and hence $Q(z)$, is analytic on the upper half plane. Finally, if $Q(z)$ had a pole on the real line then $\bar{Q}(z)^T$ would have the same pole on the real line, which contradicts the fact that $P(z)$ has no poles on the real line (it is analytic everywhere by hypothesis).

The above shows that if $Q(t)$ is analytic and invertible on the lower half plane, then $Q(t)$ is analytic everywhere, and hence a polynomial matrix.
Uniqueness
Given two polynomial matrix decompositions $P(t) = Q_1(t)Q_1(t)^* = Q_2(t)Q_2(t)^*$ which are invertible on the lower half plane, we rearrange to $Q_2(z)^{-1}Q_1(z) = \bar{Q}_2(z)^T \big(\bar{Q}_1(z)^T\big)^{-1}$. Since $Q_2(z)$ is analytic on the lower half plane and nonsingular there, $U(z) = Q_2(z)^{-1}Q_1(z)$ is a rational matrix which is analytic and invertible on the lower half plane. Then by the same argument as above, $U(z)$ is in fact a polynomial matrix, and it is unitary for all real $t$. This means that if $u_i(t)$ is the $i$th row of $U(t)$, then $u_i(t)u_i(t)^* = \sum_j |u_{ij}(t)|^2 = 1$. For real $t$ this is a sum of nonnegative polynomials which sums to a constant, implying each of the summands is in fact a constant polynomial. We conclude $U(t) = U$ where $U$ is a scalar unitary matrix, so $Q_1(t) = Q_2(t)U$.
References
- ^ Anatolii Grinshpan, Dmitry S. Kaliuzhnyi-Verbovetskyi, Victor Vinnikov, Hugo J. Woerdeman. "Stable and real-zero polynomials in two variables". Multidimensional Systems and Signal Processing. 27: 1–26.
- ^ N. Wiener and P. Masani. "The prediction theory of multivariate stochastic processes". Acta Mathematica. 98: 111–150.
- ^ Ephremidze, Lasha. "An Elementary Proof of the Polynomial Matrix Spectral Factorization Theorem". Proceedings of the Royal Society of Edinburgh Section A: Mathematics. 144: 747–751.
- ^ Thomas Kailath, A. H. Sayed. "A survey of spectral factorization methods". Numerical Linear Algebra with Applications. 8 (6–7): 467–496.
- ^ Gigla Janashia, Edem Lagvilava, Lasha Ephremidze. "A New Method of Matrix Spectral Factorization". IEEE Transactions on Information Theory. 57.
- ^ D.A. Bini, G. Fiorentino, L. Gemignani, B. Meini. "Effective Fast Algorithms for Polynomial Spectral Factorization". Numerical Algorithms. 34: 217–227.