Source coding theorem

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Jheald (talk | contribs) at 21:05, 4 January 2006.

In information theory, the source coding theorem (Shannon 1948) states that N i.i.d. random variables, each with entropy H(X), can be compressed into more than NH(X) bits with negligible risk of information loss as N tends to infinity; conversely, if they are compressed into fewer than NH(X) bits, it is virtually certain that information will be lost (MacKay 2003).
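As a rough numerical illustration of the theorem (not part of the original article), the sketch below computes the entropy of a Bernoulli(p) source and compares the Shannon limit NH(X) against the size achieved by a general-purpose lossless compressor. The choice of zlib, the parameters p = 0.1 and N = 100,000, and the helper name `binary_entropy` are all assumptions made for this example; zlib is not an optimal entropy coder, so its output only loosely approaches the bound.

```python
import math
import random
import zlib

def binary_entropy(p):
    """Entropy H(X) in bits per symbol of a Bernoulli(p) source."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p, N = 0.1, 100_000
H = binary_entropy(p)  # about 0.469 bits per symbol for p = 0.1

# Draw N i.i.d. Bernoulli(p) symbols, one per byte (8 bits each, uncompressed).
random.seed(0)
data = bytes(1 if random.random() < p else 0 for _ in range(N))

# Compress losslessly; the theorem says no lossless scheme can reliably
# do better than about N*H(X) bits as N grows large.
compressed_bits = 8 * len(zlib.compress(data, 9))

print(f"H(X)               = {H:.3f} bits/symbol")
print(f"Shannon limit N*H  = {N * H:.0f} bits")
print(f"zlib output        = {compressed_bits} bits (of {8 * N} raw)")
```

Running this shows the compressed size landing far below the raw 8N bits and in the general vicinity of NH(X), consistent with the theorem's claim that H(X) sets the fundamental per-symbol limit.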