Moment problem

In mathematics, a moment problem arises as the result of trying to invert the mapping that takes a measure μ to the sequence of moments

∫ M dμ

where M runs over a set of monomials. In the classical setting, μ is a measure on the real line, and M is in the sequence

xⁿ

giving moments

mₙ = ∫ xⁿ dμ

for n = 0, 1, 2, 3, ... . It is in this form that the question appears in probability theory: to what extent is a probability measure determined by specifying its mean, variance, and so on?
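
A standard worked example: for the measure dμ(x) = e^(−x) dx on the half-line [0, +∞), the Gamma-function integral gives

m_n = \int_0^{\infty} x^n e^{-x}\, dx = n!, \qquad n = 0, 1, 2, \ldots

so 1, 1, 2, 6, 24, ... is a moment sequence; the moment problem for this sequence asks whether e^(−x) dx is the only positive measure with these moments.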

There are three named classical moment problems: the Hamburger moment problem, in which the support of μ is allowed to be the whole real line; the Stieltjes moment problem, for the half-line [0, +∞); and the Hausdorff moment problem, for a bounded interval, which without loss of generality may be taken as [0, 1].

In the Hausdorff moment problem, for example, the uniqueness of μ follows because polynomials are dense in the uniform norm on [0, 1], so the moments determine ∫ f dμ for every continuous f and hence determine μ; the question that remains is that of existence. It was realised that these problems are closely connected to Hilbert spaces and spectral theory. In more concrete terms, the condition on a positive measure μ that

∫ |P(t)|² dμ ≥ 0

for every complex-valued polynomial P gives rise to matrix conditions, necessary on any sequence of moments, namely that certain Hankel matrices formed from the moments must be positive semi-definite.
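
To spell this out, write P(t) = a_0 + a_1 t + ... + a_n t^n; then ∫ |P(t)|² dμ is the double sum Σ_{j,k} a_j ā_k m_{j+k}, which is the quadratic form of the Hankel matrix (m_{j+k}) with 0 ≤ j, k ≤ n, so each of these matrices must be positive semi-definite whenever the m_n really are the moments of a positive measure. A minimal numerical sketch of this check, assuming NumPy and taking the moments of the standard normal distribution (m_n = 0 for odd n, m_n = (n − 1)!! for even n) as the test sequence:

import numpy as np

# Moments m_0, ..., m_6 of the standard normal distribution:
# m_n = 0 for odd n and m_n = (n - 1)!! for even n.
moments = [1, 0, 1, 0, 3, 0, 15]

# Hankel matrix H[j][k] = m_{j+k} for j, k = 0, ..., 3.
size = 4
H = np.array([[moments[j + k] for k in range(size)] for j in range(size)], dtype=float)

# For a genuine moment sequence, the quadratic form a^T H a equals the
# integral of |P|^2 dmu for the polynomial P with coefficient vector a,
# so H must be positive semi-definite: all eigenvalues nonnegative.
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)                          # all nonnegative
print(bool(np.all(eigenvalues >= -1e-9)))   # True for a valid moment sequence

The small tolerance −1e−9 simply guards against floating-point rounding when an eigenvalue is exactly zero, as happens for measures supported on finitely many points.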