Talk:Shannon's source coding theorem

This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as C-class on Wikipedia's content assessment scale.
This article has been rated as Low-importance on the project's importance scale.

What does the E in ES mean?

Could someone explain the 'E' in 'ES' please? Centroyd (talk) 12:57, 19 September 2016 (UTC)[reply]

I believe it is the expected value operator. If so, ES denotes the expected value of the random variable S, although the correct presentation would be $\mathbb{E}[S]$, I guess. If someone confirms, please change it.
177.138.210.163 (talk) 03:20, 25 March 2017 (UTC)[reply]
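If that reading is right then, in the notation of the article's proof ($p_i$ the probability of the i-th source symbol, $s_i$ the length of its codeword; the notation is assumed from the article, not stated above), the quantity is $\mathbb{E}S = \sum_i p_i s_i$, the expected codeword length.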

Reorganised/Merged

I suggest the 'Shannon noiseless source coding theorem' article be merged with this 'Source coding' article.

--Hpalaiya 22:54, 19 May 2006 (UTC)[reply]

This is a separate, large article. Maybe a summary of this article might be appropriate on the source coding page. Strong disagree with the merge.

—Preceding unsigned comment added by 81.104.79.141 (talkcontribs) 16:18, 21 December 2006

Merge carried out.

Some more clean-up to do, to blend the presentation of the two theorems more closely. (Detailed editing not yet started).

-- Jheald 22:00, 6 March 2007 (UTC).[reply]

I don't think that the symbol that keeps showing up is ever explained/defined in this article. A quick explanation as to what it is would go a long way for someone who is not already familiar with the material.

Jeejaw (talk) 02:24, 11 September 2009 (UTC)[reply]

An alternative proof for the symbol code case

Applying Jensen's inequality to the expression $\sum_i p_i \log_a \frac{a^{-s_i}}{p_i}$, we can have a direct proof without using any $q_i$'s:

$\frac{H(X)}{\log_2 a} - \mathbb{E}S = \sum_i p_i \log_a \frac{1}{p_i} - \sum_i p_i s_i = \sum_i p_i \log_a \frac{a^{-s_i}}{p_i} \le \log_a \left( \sum_i a^{-s_i} \right)$

Using Kraft's inequality on the right side: $\sum_i a^{-s_i} \le 1$, so $\log_a \left( \sum_i a^{-s_i} \right) \le 0$, and hence $\mathbb{E}S \ge \frac{H(X)}{\log_2 a}$.

--Vamos (talk) 20:12, 18 November 2009 (UTC)[reply]
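As a quick numerical sanity check of the bound above, the following sketch (plain Python; the helper names entropy and expected_length are ad hoc, not from the article) samples random distributions and codeword lengths, keeps the length sets that satisfy Kraft's inequality, and confirms $\mathbb{E}S \ge H_a(X)$ in every case:

    import math
    import random

    def entropy(p, a=2):
        # H_a(X) = -sum_i p_i log_a p_i
        return -sum(pi * math.log(pi, a) for pi in p if pi > 0)

    def expected_length(p, s):
        # E S = sum_i p_i s_i
        return sum(pi * si for pi, si in zip(p, s))

    random.seed(0)
    a = 2
    checked = 0
    for _ in range(10000):
        n = random.randint(2, 8)
        w = [random.random() for _ in range(n)]
        p = [x / sum(w) for x in w]
        s = [random.randint(1, 10) for _ in range(n)]
        if sum(a ** -si for si in s) > 1:
            continue  # Kraft's inequality fails; not realizable as a UD code
        assert expected_length(p, s) >= entropy(p, a) - 1e-9
        checked += 1
    print(checked, "Kraft-satisfying cases checked")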

Source coding theorem for symbol codes is wrong?

As far as I can see, the source coding theorem for symbol codes is wrong, or at least not accurate. Here is the counterexample: let $f(0) = \epsilon$, $f(1) = 0$, and $f(x) = 1x$ for other $x$ ($\epsilon$ is the empty word), with X equal to 0 or 1 with equal probability. Then it is clear that f is a decipherable code, but $\mathbb{E}S = \tfrac{1}{2} < \frac{H(X)}{\log_2 a} = 1$. There is also a counterexample that does not use the empty word. Could someone help to state this theorem more accurately? Alexei Kopylov (talk) 17:30, 5 May 2010 (UTC)[reply]


Well Alexei, using any zero-cost (empty) codeword, a message longer than a single symbol will not be uniquely decodable. It contradicts Kraft's inequality, which is a necessary condition for decodability. Vamos (talk) 17:19, 12 May 2010 (UTC)[reply]

Oh, thanks! The problem was that the statement was about a decipherable code, and there was no definition of what a decipherable code was. I changed it to uniquely decodable code, with the wiki link. See my change. Alexei Kopylov (talk) 01:06, 14 May 2010 (UTC)[reply]
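To make the failure concrete, here is a sketch of the counterexample above in plain Python (the encode helper is ad hoc): the code is injective on single symbols, but its extension to strings is not uniquely decodable, the expected length falls below $H(X) = 1$ bit, and Kraft's inequality is indeed violated.

    # f(0) = "" (the empty word), f(1) = "0"
    f = {"0": "", "1": "0"}

    def encode(message):
        # extension of f to strings: concatenate the codewords
        return "".join(f[sym] for sym in message)

    assert f["0"] != f["1"]                    # injective on single symbols
    assert encode("1") == encode("01") == "0"  # but strings collide: not UD

    # expected codeword length: 1/2 * 0 + 1/2 * 1 = 1/2 < H(X) = 1
    ES = 0.5 * len(f["0"]) + 0.5 * len(f["1"])
    assert ES == 0.5

    # Kraft's inequality fails: 2^-0 + 2^-1 = 1.5 > 1
    assert 2 ** -len(f["0"]) + 2 ** -len(f["1"]) > 1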

Practical example

Why is no practical example included for the less mathematically inclined? I often find a practical example can greatly help to illustrate abstract concepts, and I think this would greatly enhance the usefulness of the article. — Preceding unsigned comment added by 145.18.212.47 (talk) 08:31, 27 June 2013 (UTC)[reply]

Why not drop "with negligible probability of loss"?

Shannon's original version is 100% reliable. Source sequences that are not typical can be encoded with longer code words. That does not hurt the approximation much, because the probability of atypical sequences is so small. Wstomv (talk) 22:00, 4 December 2015 (UTC)[reply]
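To put a number on "so small", here is a sketch for an assumed Bernoulli(0.1) source, using the weakly typical set with an arbitrary tolerance of 0.05 bits (none of these values come from the article); the total probability of the atypical length-n sequences shrinks rapidly with n:

    import math

    def H2(p):
        # binary entropy in bits
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    p, eps = 0.1, 0.05
    for n in (100, 1000, 10000):
        atypical = 0.0
        for k in range(n + 1):
            # per-symbol information rate of any length-n sequence with k ones
            rate = -(k / n) * math.log2(p) - (1 - k / n) * math.log2(1 - p)
            if abs(rate - H2(p)) > eps:
                # P(exactly k ones), computed in log space to avoid overflow
                logP = (math.lgamma(n + 1) - math.lgamma(k + 1)
                        - math.lgamma(n - k + 1)
                        + k * math.log(p) + (n - k) * math.log(1 - p))
                atypical += math.exp(logP)
        print(n, atypical)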