Shannon–Fano–Elias coding
In information theory, Shannon–Fano–Elias coding is a precursor to arithmetic coding, in which probabilities are used to determine codewords.[1]
Algorithm description
Given a discrete random variable X of ordered values to be encoded, let $p(x)$ be the probability for any x in X. Define a function

$\bar F(x) = \sum_{x_i < x} p(x_i) + \frac{1}{2} p(x)$

This is the modified cumulative distribution function: the total probability of all symbols preceding x plus half the probability of x itself.
Algorithm: For each x in X,
- Let Z be the binary expansion of $\bar F(x)$.
- Choose the length of the encoding of x, $L(x)$, to be the integer $\left\lceil \log_2 \frac{1}{p(x)} \right\rceil + 1$.
- Choose the encoding of x, $\operatorname{code}(x)$, to be the first $L(x)$ most significant bits after the decimal point of Z.
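A minimal Python sketch of this procedure, assuming the symbols are supplied in their fixed order together with their probabilities (the name sfe_encode and its interface are illustrative, not taken from the reference):

from math import ceil, log2

def sfe_encode(probabilities):
    """Return Shannon-Fano-Elias codewords for an ordered mapping symbol -> p(symbol)."""
    codes = {}
    cumulative = 0.0  # sum of p(x_i) over all symbols preceding the current one
    for symbol, p in probabilities.items():
        f_bar = cumulative + p / 2      # modified CDF: F_bar(x) = sum_{x_i < x} p(x_i) + p(x)/2
        length = ceil(log2(1 / p)) + 1  # codeword length L(x)
        # Take the first L(x) bits of the binary expansion of F_bar(x).
        bits, value = "", f_bar
        for _ in range(length):
            value *= 2
            bit = int(value)
            bits += str(bit)
            value -= bit
        codes[symbol] = bits
        cumulative += p
    return codes

The floating-point arithmetic is adequate for small examples; for general probabilities, exact rationals (e.g. fractions.Fraction) avoid rounding issues in the binary expansion.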
Example
Let X = {A, B, C, D}, with probabilities p = {1/3, 1/4, 1/6, 1/4}.
- For A
- Z(A) = $\bar F(A) = \tfrac{1}{2} p(A) = \tfrac{1}{2} \cdot \tfrac{1}{3}$ = 0.1666...
- In binary, Z(A) = 0.0010101010...
- L(A) = $\left\lceil \log_2 \tfrac{1}{1/3} \right\rceil + 1$ = 3
- code(A) is 001
- For B
- Z(B) = $\bar F(B) = p(A) + \tfrac{1}{2} p(B) = \tfrac{1}{3} + \tfrac{1}{8}$ = 0.4583333...
- In binary, Z(B) = 0.01110101010101...
- L(B) = $\left\lceil \log_2 \tfrac{1}{1/4} \right\rceil + 1$ = 3
- code(B) is 011
- For C
- Z(C) = $\bar F(C) = p(A) + p(B) + \tfrac{1}{2} p(C) = \tfrac{1}{3} + \tfrac{1}{4} + \tfrac{1}{12}$ = 0.66666...
- In binary, Z(C) = 0.101010101010...
- L(C) = $\left\lceil \log_2 \tfrac{1}{1/6} \right\rceil + 1$ = 4
- code(C) is 1010
- For D
- Z(D) = $\bar F(D) = p(A) + p(B) + p(C) + \tfrac{1}{2} p(D) = \tfrac{1}{3} + \tfrac{1}{4} + \tfrac{1}{6} + \tfrac{1}{8}$ = 0.875
- In binary, Z(D) = 0.111
- L(D) = $\left\lceil \log_2 \tfrac{1}{1/4} \right\rceil + 1$ = 3
- code(D) is 111
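Running the sfe_encode sketch from the algorithm section on this distribution reproduces the codewords derived above:

probabilities = {"A": 1/3, "B": 1/4, "C": 1/6, "D": 1/4}
print(sfe_encode(probabilities))
# {'A': '001', 'B': '011', 'C': '1010', 'D': '111'}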
References
- ^ T. M. Cover and Joy A. Thomas (2006). Elements of information theory (2nd ed.). John Wiley and Sons. pp. 127–128. ISBN 978-0-471-24195-9.