Talk:Golomb coding
The first image
Exactly what is the image with q-1, q and q+1 supposed to show? What is the curved arrow at the top for? The 'r' is placed so it looks like a badly typeset .
I'm sorry, but the whole image just seems confusing to me. /87.96.186.152 (talk) 10:23, 11 March 2008 (UTC)
Yes, the image isn't very clear, but I don't know how to make it better. I think it is an illustration of the modulo operation: any possible length n (in inches) can be divided into two parts, q (the number of whole yardsticks I have to use) and r (the number of inches along the last yardstick). --68.0.124.33 (talk) 08:31, 12 September 2008 (UTC)
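If it helps, here is the same idea as a minimal Python sketch of the division I think the image illustrates (the numbers are made up):

```python
# Yardstick analogy: any length n in inches splits into q whole yardsticks
# (36 inches each) and r leftover inches, i.e. n == q * 36 + r with 0 <= r < 36.
n = 100
q, r = divmod(n, 36)
print(q, r)             # 2 28, since 100 == 2 * 36 + 28
assert n == q * 36 + r
```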
Question on Unary coding...
On the Unary coding page it says 5 is represented as 11110, yet in the table on the Golomb coding page it says 5 is 111110. Where is the mistake? Or, if Golomb coding really does use 111110 for 5 while unary coding uses 11110 for 5, was it a mistake to say that Golomb coding uses unary?
Eric.frederich 16:59, 9 March 2007 (UTC)
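My guess (an assumption on my part, not something either page states outright): the two pages simply start counting at different points, one coding n with n ones before the terminating zero and the other with n - 1 ones. A minimal Python sketch of the two conventions:

```python
def unary_from_zero(n):
    # Convention in the Golomb coding table: n ones, then a terminating zero.
    return '1' * n + '0'        # 5 -> '111110'

def unary_from_one(n):
    # Convention where the code for n >= 1 uses n - 1 ones before the zero.
    return '1' * (n - 1) + '0'  # 5 -> '11110'

print(unary_from_zero(5), unary_from_one(5))
```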
The paper "Optimal source codes for geometrically distributed integer alphabets" by Gallager and Van Voorhis (1975, IEEE Transactions on Information Theory IT-21(2):228-230) is the citation "Managing Gigabytes" gives for the probability distribution quoted in the article, but I haven't had a chance to read the paper myself, so I'm hesitant to add it to the refrences. I'll get to it eventually, but if someone else wants to read it and see if it should be refrenced, please do.Mark Tozzi 06:47, 20 December 2005 (UTC)
Their paper is not so great. I can send it to you if you provide an email address.
Insufficient Context
This article could use a bit more introductory context to orient non-expert readers.
For example:
- 'entropy' is introduced without context (cryptanalysis? astrophysics? neoclassical economics? which sense of entropy is relevant here?); the intended connotation is not necessarily obvious to those unfamiliar with the subject.
- a quick one-sentence blurb about what this kind of coding is used for would help (e.g., data compression, natural language processing, lexical analysis, string functions, whatever). It is implied in the text, but the intro could be beefed up a bit.
dr.ef.tymac 16:27, 19 November 2006 (UTC)
I agree with the comments above about insufficient context, but I'm also confused about Rice's contribution to Golomb-Rice codes. The introduction says Golomb's scheme (which references a paper dated 1966) generalized Rice's scheme which was developed earlier, except that the only reference to Rice's work is a paper dated 1979. If Rice really did invent something before 1966, where's the reference? Otherwise, why is Rice's name even associated with the codes?
- Golomb's codes came first (as memoryless run-length codes, which yield a geometric distribution of run lengths). Later they were proven optimal for any geometric distribution. Still later, Rice showed how to build an adaptive code from the simplest Golomb codes (those parametrized by powers of two). People call both Rice's method and his subset "Rice coding," hence the confusion. I hope I've clarified all that and more in the intro; if someone thinks it's clear enough to remove the context header, please do so. Calbaer (talk) 21:40, 6 March 2009 (UTC)
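To illustrate the power-of-two subset: when M = 2^k, the quotient/remainder split reduces to shifts and masks, which is what makes Rice codes attractive in practice. A minimal Python sketch (the function name is mine; unary is written as q ones plus a zero, matching the article's table):

```python
def rice_encode(n, k):
    """Encode nonnegative integer n with Rice parameter k, i.e. M = 2**k."""
    q = n >> k                # quotient n // 2**k, sent in unary
    r = n & ((1 << k) - 1)    # remainder n % 2**k, sent as k plain bits
    remainder_bits = format(r, '0{}b'.format(k)) if k else ''
    return '1' * q + '0' + remainder_bits

print(rice_encode(42, 3))  # q = 5, r = 2 -> '111110' + '010'
```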
Question on alphabets
Quote from the article: "Golomb coding can also be used (as in Golomb's original article) to encode an alphabet of two symbols, where one is more likely than the other. In this case it can be thought of as a kind of run-length encoding."
As far as I have understood the concept from reading S. Golomb's original paper, the code ALWAYS operates over a two-symbol alphabet, as it is used to produce a binary representation of a given number; hence, it is a mapping of that particular number (e.g., 42) to a sequence of the two symbols occurring in the alphabet (obviously, 0 and 1), and vice versa. The length of the sequence is determined by the probability of one of those symbols arising (for the case M = 10, this probability is 2^(-1/10) = 0.933 for one of the symbols; it is a question of convention whether it would be called 0 or 1). If we were to code a number in a ternary representation, our alphabet would contain a third symbol, i.e. the ternary digit 2.
In the light of the above thoughts, I think that there is a slight confusion in the article in the sense that the number to be coded is regarded as part of the alphabet.
The quote from the article is misleading and confusing: it can be read as saying that there are only two numbers we would like to code, which is not the case. It should say that the base (or alphabet) used to produce the output consists of only two symbols, which is already clear, since we are talking about a binary code.
There should be a clear distinction between the set of numbers to be coded (i.e., 1, 2, ..., n) and the alphabet used to produce the codes for the numbers in that set.
Nikolay Petkov 7c0[at]mail.com 22:27, 18 May 2007 (EET)
encoding an input that uses an alphabet of 2 symbols
I agree that sentence is confusing and easy to misinterpret. What I *think* it is getting at is: say we have an input file composed of 2 symbols, where each symbol in the sequence is more or less independent of the next (a Bernoulli process), but one symbol is vastly more likely than the other -- for example, a recording of a Geiger counter, with occasional clicks ("1" bits) typically separated by long periods of silence (streams of "0" bits). Say I then partially compress that file with run-length encoding. This intermediate file contains the lengths of the runs of "0" bits (i.e., 0, 1, 2, ..., n). A length of 0 indicates consecutive "1" bits.
Further compressing that sequence of lengths with Golomb coding, using the appropriate M parameter, is the "most efficient" compression code. By "most efficient", I mean in the same way that Huffman coding is the "most efficient" compression code -- for this kind of file, both methods generate practically the same code. (My understanding is that arithmetic coding on the original file would give an even smaller compressed file.)
The appropriate M parameter depends on the probabilities of the 2 symbols -- the more lopsided the probabilities, the bigger M needs to be to get the most efficient compression (see the sketch below this comment).
Is this something that needs to be described in this article?
--68.0.124.33 (talk) 09:13, 12 September 2008 (UTC)
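A sketch of the pipeline described above, in Python: extract the 0-run lengths, then pick M using the optimality condition from the Gallager and Van Voorhis paper cited earlier on this page (the smallest M with p^M + p^(M+1) <= 1, where p is the probability of the common symbol). The helper names and the input bit string are mine:

```python
import math

def run_lengths(bits):
    # Lengths of the 0-runs, each terminated by a 1 bit; a length of 0 means
    # consecutive 1 bits. (A trailing run with no closing 1 is ignored here.)
    runs, current = [], 0
    for b in bits:
        if b == '0':
            current += 1
        else:
            runs.append(current)
            current = 0
    return runs

def optimal_m(p):
    # Smallest M with p**M + p**(M + 1) <= 1, i.e. M >= -log(1 + p) / log(p).
    return max(1, math.ceil(-math.log(1 + p) / math.log(p)))

print(run_lengths('0001000000101'))   # [3, 6, 1]
print(optimal_m(2 ** (-1 / 10)))      # 10, matching the M = 10 example above
```

The lopsidedness point falls out of optimal_m: as p approaches 1, log(p) approaches 0 and M grows.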
Example code
I don't think the example code belongs here. I suggest we remove it or maybe move it to Wikibooks(?). —ZeroOne (talk / @) 19:45, 25 June 2008 (UTC)
- The code, furthermore, is an example of a Rice code. I think it's incorrect to describe one coding scheme and then propose another one in the code example... --akbg (talk) 21:57, 12 September 2008 (UTC)
- The example code was also incorrect - it inverted the remainder bit sequence for no apparent reason. I've deleted it per MOS:CODE. The existing pseudocode is enough to describe the algorithm. --dmmaus (talk) 23:57, 6 January 2014 (UTC)
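For the record in this discussion (not proposing to restore code in the article): the usual way to emit the remainder is plain truncated binary, with no inversion. A minimal Python sketch of what the pseudocode describes, assuming M >= 1 and the unary convention from the article's table:

```python
def golomb_encode(n, m):
    # Quotient in unary (q ones, then a zero), remainder in truncated binary.
    q, r = divmod(n, m)
    out = '1' * q + '0'
    if m == 1:
        return out  # no remainder bits needed
    b = m.bit_length() if m & (m - 1) else m.bit_length() - 1  # ceil(log2(m))
    cutoff = (1 << b) - m  # the first `cutoff` remainders get b - 1 bits
    if r < cutoff:
        return out + format(r, '0{}b'.format(b - 1))
    return out + format(r + cutoff, '0{}b'.format(b))

print(golomb_encode(42, 10))  # q = 4, r = 2 -> '11110' + '010'
```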
Construction of code
What exactly does "The final result looks like: " mean? --201.253.134.203 (talk) 23:43, 5 October 2008 (UTC)
- Oh, I got it. --201.253.134.203 (talk) 23:43, 5 October 2008 (UTC)