
Talk:Information theory and measure theory


Possible misstatement

I am uncomfortable with the phrase 'we find that Shannon's "measure" of information content satisfies all the postulates and basic properties of a formal measure over sets.' This may not be quite correct, as it is a signed measure, as explained below in the article. How should it be better worded? --130.94.162.64 21:18, 20 June 2006 (UTC)

A signed measure is still a measure, so if that's the only objection, it should be OK. On the other hand, the section title suggests entropy is a measure, and that doesn't seem right. Mct mht 02:49, 21 June 2006 (UTC)
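Maybe it helps to spell out the correspondence as I read it; the tilde sets and the signed measure μ below are the article's informal notation, nothing standard:

\[
H(X) = \mu(\tilde X), \qquad
H(X,Y) = \mu(\tilde X \cup \tilde Y), \qquad
H(X \mid Y) = \mu(\tilde X \setminus \tilde Y), \qquad
I(X;Y) = \mu(\tilde X \cap \tilde Y).
\]

The reason μ is only a signed measure shows up with three variables, where the multivariate mutual information
\[
I(X;Y;Z) = \mu(\tilde X \cap \tilde Y \cap \tilde Z)
\]
can be negative: take X and Y independent fair coin flips and Z = X XOR Y, which gives I(X;Y;Z) = −1 bit.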

In the same vein, I think the article confuses measure in the sense of information theory with measure in the sense of real analysis in a few places. Mct mht 03:02, 21 June 2006 (UTC)

There are two different senses of "measure" in the article. One is the abstract measure over sets which forms the analogy with joint entropy, conditional entropy, and mutual information. The other is the measures with respect to which one integrates in the various formulas of information theory. Where is the confusion? --130.94.162.64 04:16, 21 June 2006 (UTC)

some language in the section is not clear:

If we associate the existence of sets $\tilde X$ and $\tilde Y$ with arbitrary discrete random variables X and Y, somehow representing the information borne by X and Y, respectively, such that: $\mu(\tilde X \cap \tilde Y) = 0$ whenever X and Y are independent, and...

Associate sets to random variables how? Are they the supports of the random variables? What's the σ-algebra? What's meant by two random variables being independent? Mct mht 03:14, 21 June 2006 (UTC)

Just pretend that those sets exist. They are not the supports of the random variables. The sigma-algebra is the algebra generated by the operations of countable set union and intersection on those sets. See statistical independence. --130.94.162.64 03:34, 21 June 2006 (UTC)
I mean unconditionally independent. --130.94.162.64 03:49, 21 June 2006 (UTC)
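For what it's worth, unconditional independence here just means $p(x,y) = p(x)\,p(y)$ for all x and y, which is exactly the condition under which the mutual information vanishes; so the requirement $\mu(\tilde X \cap \tilde Y) = 0$ is the set-theoretic counterpart of $I(X;Y) = 0$:

\[
I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)} = 0
\quad\Longleftrightarrow\quad
p(x,y) = p(x)\,p(y) \text{ for all } x, y.
\]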


So given a family of random variables, one associates, somehow, a family of sets. Is the σ-algebra the one generated by this family (in the same way the open sets generate the Borel σ-algebra), or does one assume that, somehow, the family is already a σ-algebra? Also, the section seems to imply that the Shannon entropy is a measure on the said σ-algebra; is that correct?
Yes. --130.94.162.64 04:32, 21 June 2006 (UTC)
Then there seem to be at least two ways measure theory is applied in this context: first, in the sense that entropy is a measure on some undefined sets corresponding to random variables; second, one can talk about random variables on a fixed measure space and define information-theoretic objects in terms of the given measure. Is that a fair statement? Mct mht 04:14, 21 June 2006 (UTC)
The σ-algebra is the one generated by the family of sets. (The family is not already a σ-algebra.) And I believe that is a fairly reasonable statement, if I understand it right. --130.94.162.64 04:29, 21 June 2006 (UTC)
There's still a lot of explaining to do; that's why the article has the expert tag. --130.94.162.64 04:32, 21 June 2006 (UTC)
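To make the second sense concrete (this is just the standard formulation, not something the article currently states): if the distribution P of X has a density f = dP/dν with respect to a reference measure ν on the underlying space, then

\[
H(X) = -\int f \log f \, d\nu ,
\]

which recovers the discrete entropy when ν is counting measure and the differential entropy when ν is Lebesgue measure. That reference measure ν has nothing to do with the abstract signed measure μ on the sets $\tilde X$ above, and that is probably the distinction the article needs to draw explicitly.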

Kullback–Leibler divergence

Also, the Kullback–Leibler divergence should be explained here in a proper measure-theoretic framework. --130.94.162.64 21:27, 20 June 2006 (UTC)
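For reference, the measure-theoretic definition I would expect there: for probability measures P and Q on the same measurable space with P absolutely continuous with respect to Q,

\[
D_{\mathrm{KL}}(P \,\|\, Q) = \int \log \frac{dP}{dQ} \, dP ,
\]

where dP/dQ is the Radon–Nikodym derivative, and the divergence is taken to be +∞ when P is not absolutely continuous with respect to Q. The familiar discrete and continuous formulas are then the special cases where both measures have densities with respect to counting measure or Lebesgue measure.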