Kolmogorov's two-series theorem

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by 188.26.146.202 (talk) at 16:40, 27 January 2014 (References). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.

Statement of the theorem

Let (X_n)_{n∈ℕ} be a sequence of independent random variables with expected values E[X_n] = a_n and variances Var(X_n) = σ_n², such that ∑_{n=1}^∞ a_n converges in ℝ and ∑_{n=1}^∞ σ_n² < ∞. Then ∑_{n=1}^∞ X_n converges in ℝ almost surely.
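The statement can be illustrated numerically. The following sketch (not part of the theorem itself) picks a hypothetical sequence X_n = a_n + σ_n·ε_n with Rademacher signs ε_n = ±1, a_n = (−1)ⁿ/n² (so ∑ a_n converges) and σ_n = 1/n (so ∑ σ_n² < ∞); the hypotheses of the theorem are satisfied, and the simulated partial sums of ∑ X_n settle down along a sample path.

```python
import random

def partial_sums(num_terms, seed):
    """Simulate one path of S_N = sum_{n=1}^N X_n, where
    X_n = a_n + sigma_n * eps_n with eps_n = +/-1 equiprobably.
    Illustrative choices (assumptions, not from the theorem):
    a_n = (-1)^n / n^2, so sum a_n converges, and
    sigma_n = 1/n, so sum sigma_n^2 < infinity."""
    rng = random.Random(seed)
    s = 0.0
    sums = []
    for n in range(1, num_terms + 1):
        a_n = (-1) ** n / n ** 2
        sigma_n = 1.0 / n
        eps = rng.choice((-1.0, 1.0))
        s += a_n + sigma_n * eps
        sums.append(s)
    return sums

# Late partial sums on a single path change very little,
# consistent with almost-sure convergence of the series.
path = partial_sums(100_000, seed=1)
print(path[-1], abs(path[-1] - path[-1000]))
```

On this path the last thousand terms move the partial sum by only a tiny amount, as the two-series theorem predicts; changing the seed changes the limit value but not the convergence.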

Proof
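A standard argument, sketched here in the notation of the statement above, reduces the theorem to Kolmogorov's inequality applied to the centered tail sums.

```latex
% Let S_N = \sum_{n=1}^{N} (X_n - a_n). For M < N, the increments
% X_{M+1}-a_{M+1}, \dots, X_N-a_N are independent with mean zero,
% so Kolmogorov's inequality gives, for every \varepsilon > 0,
\Pr\Bigl( \max_{M < k \le N} \lvert S_k - S_M \rvert \ge \varepsilon \Bigr)
  \;\le\; \frac{1}{\varepsilon^2} \sum_{n=M+1}^{N} \sigma_n^2 .
% Letting N \to \infty and then M \to \infty, the right-hand side
% tends to 0 since \sum_n \sigma_n^2 < \infty. Hence the partial
% sums S_N are almost surely Cauchy, so \sum_n (X_n - a_n)
% converges almost surely. Adding the convergent deterministic
% series \sum_n a_n yields the almost sure convergence of \sum_n X_n.
```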

References

  • Durrett, Rick. Probability: Theory and Examples. 3rd ed., Duxbury Advanced Series, Thomson Brooks/Cole, 2005, Section 1.8, pp. 60–69.
  • Loève, M. Probability Theory. Princeton University Press, 1963, Section 16.3.
  • Feller, W. An Introduction to Probability Theory and Its Applications, Vol. 2. Wiley, 1971, Section IX.9.