Source coding theorem

In information theory, the source coding theorem (Shannon 1948) informally states that:

"N i.i.d. random variables each with entropy H(X) can be compressed into more than NH(X) bits with negligible risk of information loss, as N tends to infinity; but conversely, if they are compressed into fewer than NH(X) bits it is virtually certain that information will be lost." (MacKay 2003).

Devising coding strategies that achieve this compression is the basis of the field of entropy encoding.
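
The effect can be checked empirically. The following is a minimal sketch in Python (an illustrative addition, not from the article or its cited sources): it draws N i.i.d. Bernoulli symbols, packs them into bytes, and compresses them with the general-purpose zlib codec. zlib is not an optimal entropy coder, so the compressed length lands above, but in the vicinity of, the N·H(X) bound; the helper name bernoulli_entropy and the parameters N and p are arbitrary choices for the demonstration.

    import math
    import random
    import zlib

    def bernoulli_entropy(p):
        """Binary entropy H(X) in bits per symbol for a Bernoulli(p) source."""
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    # N i.i.d. draws from a biased coin with P(1) = 0.1.
    N, p = 100_000, 0.1
    random.seed(0)
    bits = bytes(1 if random.random() < p else 0 for _ in range(N))

    # Pack 8 symbols per byte so the raw representation is exactly N bits.
    packed = bytearray()
    for i in range(0, N, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        packed.append(byte)

    compressed = zlib.compress(bytes(packed), 9)

    print(f"H(X)              = {bernoulli_entropy(p):.3f} bits/symbol")
    print(f"N * H(X)          = {N * bernoulli_entropy(p):,.0f} bits")
    print(f"compressed length = {8 * len(compressed):,} bits")

Running the sketch shows the compressed length staying above N·H(X) ≈ 46,900 bits while falling far below the raw N = 100,000 bits, in line with the two halves of the theorem.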