Talk:Noisy-channel coding theorem

This article should be moved to Noisy-channel coding theorem. The hyphen becomes necessary when "noisy channel" is used as an adjective. Otherwise we describe the theorem itself as noisy, which is certainly not the intent here. -- 130.94.162.61 16:15, 19 January 2006 (UTC)

Copyediting error in page title

I agree with the comment above. "Noisy channel" should be hyphenated when used as an adjective (as it is, inconsistently, in the body text). 38.113.17.3 19:35, 12 October 2006 (UTC)VP

Error in noisy-channel coding theorem

Before the change I just made, the statement of the theorem implied that when rate equaled capacity, error could never be made arbitrarily small. This is clearly wrong; a lossless channel can achieve its capacity rate and, in spite of its being somewhat degenerate, does fall within this framework. The 1 December 2005 "fix" was wrong (though what it attempted to correct was also wrong). I've fixed this so that it's clear that the R=C case is not addressed in the noisy-channel coding theorem, but someone might want to double-check my wording on this (which is also in Shannon–Hartley theorem) and redraw the math PNG. Calbaer 23:25, 25 April 2006 (UTC)
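
As a concrete sanity check on the boundary case (my own illustration, not taken from the article): a noiseless binary channel has capacity [math]C = 1[/math] bit per channel use, and uncoded transmission at rate [math]R = C = 1[/math] already has error probability exactly zero. So any wording implying that arbitrarily small error is impossible when R equals C cannot be right in general; leaving the R = C case unaddressed, as above, avoids that trap.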

Attribution to 1948 Shannon

Theorem 10 of Shannon's 1948 paper corresponds to the noisy-channel coding theorem, but it covers only part 1 of the theorem as presented here. Could someone double-check this and re-attribute accordingly? Calbaer 22:36, 31 August 2006 (UTC)

"Theorem 11: Let a discrete channel have the capacity C and a discrete source the entropy per second H. If H ≤ C there exists a coding system such that the output of the source can be transmitted over the channel with an arbitrarily small frequency of errors (or an arbitrarily small equivocation). If H > C it is possible to encode the source so that the equivocation is less than H -C+ ε where ε is arbitrarily small. There is no method of encoding which gives an equivocation less than H - C".
Shannon's H becomes our R, Shannon's equivocation becomes our HR, and thus Shannon's statement is equivalent to our parts 1, 2 and 3. Jheald 23:36, 31 August 2006 (UTC).
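
To spell out the correspondence (a sketch in the notation now used in the article, where [math]p_b[/math] is the acceptable bit-error probability and [math]H_2[/math] the binary entropy function; reading the equivocation above as [math]H_2(p_b)\,R[/math] is my interpretation): Shannon's converse, equivocation [math]\ge H - C[/math], translates with [math]H \to R[/math] into

 [math]H_2(p_b)\,R \;\ge\; R - C \quad\Longleftrightarrow\quad R \;\le\; \frac{C}{1 - H_2(p_b)},[/math]

which is the bound appearing in parts 2 and 3 of the article's statement.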

Note that Shannon did not provide a formal proof. That was finally done by Feinstein for the coding part, and Fano for the converse part. I added the references. Please check. Wullj 27 December 2006.

Merge with Shannon–Hartley theorem article?

There is a lot of overlap between this article and the Shannon–Hartley theorem. Should these pages be merged? Please contribute your thoughts on the Talk:Shannon–Hartley theorem discussion page. -- technopilgrim 20:29, 30 October 2006 (UTC)

What about non-discrete channels?

I'm a civil engineering student, and in our information coding class we saw a variation of the Shannon theorem. It states that, on a non-discrete memoryless channel, the Shannon capacity is [math]C_\text{Shannon} = B \log_2(1 + P_u / P_n)[/math]. Can this be added to the article?
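
If it helps to see what that formula computes, here is a minimal sketch (my own, with illustrative numbers only), taking B as the bandwidth in hertz and P_u/P_n as the signal-to-noise power ratio:

 import math
 
 def shannon_capacity(bandwidth_hz, signal_power, noise_power):
     """Capacity in bits per second of a band-limited AWGN channel: C = B * log2(1 + P_u / P_n)."""
     return bandwidth_hz * math.log2(1 + signal_power / noise_power)
 
 # Illustrative numbers only: 1 MHz of bandwidth at a signal-to-noise ratio of 100 (20 dB).
 print(shannon_capacity(1e6, 100.0, 1.0))  # about 6.66e6 bits per second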

Take a look at Shannon–Hartley theorem, and at the merge discussion mentioned above. --LutzL 15:53, 28 November 2006 (UTC)

How about a simple intuitive/non-mathematical explanation?

Something like: "You have a finite bandwidth, so at some point you can't switch any faster, and you have a finite signal-to-noise level, so at some point you can't go to increased multi-level signaling." But better worded of course! It would be nice for the non-math-majors plus those who are first learning about it to get an intuitive feel for why it exists. —Preceding unsigned comment added by 71.164.9.240 (talk) 06:10, 28 June 2008 (UTC)[reply]

Title

At a university level this is generally known as 'Shannon's capacity formula', not 'Noisy-channel coding theorem'. —Preceding unsigned comment added by 220.253.153.87 (talk) 23:39, 6 October 2008 (UTC)