Conditional probability

From Wikipedia, the free encyclopedia

The conditional probability of an event is the probability that the event will happen given that (by assumption, presumption, assertion or evidence) some other event has also occurred.[1]

For example, the conditional probability of having a cold given that you are coughing might be 75%, meaning you probably have a cold if you are coughing. But the non-conditional probability (normally called just "probability") of having a cold may be only 5%, meaning that only 5% of the population as a whole has a cold, including people who are coughing and people who are not. Conditional probability is one of the most fundamental concepts in probability theory[2] and provides the language in which Bayes' theorem is written.

The expression P(A|B) is read "the probability of A given B" and means the probability of event A given that event B has also happened. This is also sometimes written P_B(A).

Note that in general, it is not necessary that "B" occur before "A".

Conditional probabilities are a basic tool used widely in most areas of statistics, but they can be quite slippery and require careful interpretation.[3]

Definition

[Figure: Illustration of conditional probabilities with an Euler diagram. The unconditional probability P(A) = 0.52, while the conditional probabilities are P(A|B1) = 1, P(A|B2) ≈ 0.75, and P(A|B3) = 0.]
[Figure: On a tree diagram, branch probabilities are conditional on the event associated with the parent node.]
[Figure: Venn pie chart describing conditional probabilities.]

Conditioning on an event

Kolmogorov definition

Given two events A and B from the sigma-field of a probability space with P(B) > 0, the conditional probability of A given B is defined as the quotient of the probability of the joint occurrence of A and B and the probability of B:

    P(A|B) = P(A ∩ B) / P(B).

This may be visualized as restricting the sample space to B. The logic behind this equation is that if the outcomes are restricted to B, this set serves as the new sample space.

Note that this is a definition, not a theoretical result. We simply denote the quantity P(A ∩ B) / P(B) by P(A|B) and call it the conditional probability of A given B.
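As an illustration, here is a minimal Python sketch of this definition on a small, hypothetical sample space of equally likely outcomes (the events chosen here are purely for illustration):

    from fractions import Fraction

    # Hypothetical sample space: the integers 1..10, each with probability 1/10.
    omega = set(range(1, 11))
    A = {x for x in omega if x % 2 == 0}   # event "outcome is even"
    B = {x for x in omega if x <= 5}       # event "outcome is at most 5"

    def prob(event):
        return Fraction(len(event), len(omega))

    def cond_prob(a, b):
        # P(A|B) = P(A ∩ B) / P(B); undefined when P(B) = 0.
        if prob(b) == 0:
            raise ValueError("P(B) = 0: conditional probability undefined")
        return prob(a & b) / prob(b)

    print(cond_prob(A, B))   # 2/5: within B = {1,...,5} the even outcomes are {2, 4}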

As an axiom of probability

Some authors, such as De Finetti, prefer to introduce conditional probability as an axiom of probability:

    P(A ∩ B) = P(A|B) P(B).

Although mathematically equivalent, this may be preferred philosophically; under major probability interpretations such as the subjective theory, conditional probability is considered a primitive entity. Further, this "multiplication axiom" introduces a symmetry with the summation axiom for mutually exclusive events:[4]

    P(A ∪ B) = P(A) + P(B)   if A ∩ B = ∅.
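A one-line check of the equivalence: when P(B) > 0, dividing the multiplication axiom by P(B) recovers the Kolmogorov definition above.

    \[
    P(A \cap B) = P(A \mid B)\,P(B)
    \quad\Longrightarrow\quad
    P(A \mid B) = \frac{P(A \cap B)}{P(B)}.
    \]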

Definition with σ-algebra

If P(B) = 0, then the simple definition of P(A|B) is undefined. However, it is possible to define a conditional probability with respect to a σ-algebra of such events (such as those arising from a continuous random variable).

For example, if X and Y are non-degenerate and jointly continuous random variables with density f_X,Y(x, y) then, if B has positive measure,

    P(X ∈ A | Y ∈ B) = ∫_B ∫_A f_X,Y(x, y) dx dy / ∫_B ∫_ℝ f_X,Y(x, y) dx dy.

The case where B has zero measure can only be dealt with directly if B = {y_0}, representing a single point, in which case

    P(X ∈ A | Y = y_0) = ∫_A f_X,Y(x, y_0) dx / ∫_ℝ f_X,Y(x, y_0) dx.

If A has measure zero then the conditional probability is zero. An indication of why the more general case of zero measure cannot be dealt with in a similar way can be seen by noting that the limit, as all δy_i approach zero, of

    P(X ∈ A | Y ∈ ⋃_i [y_i, y_i + δy_i])

depends on their relationship as they approach zero. See conditional expectation for more information.
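As a hedged illustration of the single-point case (an assumed example, not from the original text): take X and Y jointly uniform on the unit square, so f_X,Y(x, y) = 1 there. Then, for 0 < y_0 < 1,

    \[
    P\left(X \le \tfrac{1}{2} \,\middle|\, Y = y_0\right)
    = \frac{\int_0^{1/2} f_{X,Y}(x, y_0)\,dx}{\int_0^1 f_{X,Y}(x, y_0)\,dx}
    = \frac{1/2}{1} = \tfrac{1}{2}.
    \]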

Conditioning on a random variable

Conditioning on an event may be generalized to conditioning on a random variable. Let X be a discrete random variable taking values x_n, and let A be an event. The conditional probability of A given X is defined as the random variable, written P(A|X), that takes the value

    P(A | X = x_n)   whenever   X = x_n.

More formally:

    P(A|X)(ω) = P(A | X = X(ω)).

The conditional probability P(A|X) is a function of X: if the function g is defined as

    g(x) = P(A | X = x),

then

    P(A|X) = g(X).

Note that P(A|X) and X are now both random variables. From the law of total probability, the expected value of P(A|X) is equal to the unconditional probability of A.
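A short Python sketch of this (using a hypothetical pair of fair dice, so the numbers are easy to check): P(A|X) is the function g applied to X, and its expected value recovers P(A).

    from fractions import Fraction

    # Hypothetical setup: two independent fair dice; X is the value of the first die,
    # and A is the event that the two dice sum to at most 5.
    outcomes = [(x, y) for x in range(1, 7) for y in range(1, 7)]

    def g(x):
        # g(x) = P(A | X = x): restrict attention to outcomes whose first die equals x.
        restricted = [(a, b) for (a, b) in outcomes if a == x]
        return Fraction(sum(1 for (a, b) in restricted if a + b <= 5), len(restricted))

    # P(A|X) is the random variable g(X); averaging it over the distribution of X
    # returns the unconditional probability P(A), as the law of total probability states.
    expected = sum(Fraction(1, 6) * g(x) for x in range(1, 7))
    p_a = Fraction(sum(1 for (a, b) in outcomes if a + b <= 5), len(outcomes))
    assert expected == p_a == Fraction(10, 36)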

Example

Suppose that somebody secretly rolls two fair six-sided dice, and we must predict the outcome.

  • Let A be the value rolled on die 1
  • Let B be the value rolled on die 2

What is the probability that A = 2?

Table 1 shows the sample space of 36 equally likely outcomes.

Clearly, A = 2 in exactly 6 of the 36 outcomes, thus P(A = 2) = 6/36 = 1/6.

Table 1. Each cell shows the sum A + B; the row A = 2 contains the 6 outcomes with A = 2.
  +      B=1   B=2   B=3   B=4   B=5   B=6
  A=1     2     3     4     5     6     7
  A=2     3     4     5     6     7     8
  A=3     4     5     6     7     8     9
  A=4     5     6     7     8     9    10
  A=5     6     7     8     9    10    11
  A=6     7     8     9    10    11    12

Suppose it is revealed that A + B ≤ 5.

What is the probability that A + B ≤ 5?

Table 2 shows that A + B ≤ 5 for exactly 10 of the same 36 outcomes, thus P(A + B ≤ 5) = 10/36.

Table 2. Each cell shows the sum A + B; the 10 cells with A + B ≤ 5 form the upper-left region of the table.
  +      B=1   B=2   B=3   B=4   B=5   B=6
  A=1     2     3     4     5     6     7
  A=2     3     4     5     6     7     8
  A=3     4     5     6     7     8     9
  A=4     5     6     7     8     9    10
  A=5     6     7     8     9    10    11
  A=6     7     8     9    10    11    12

What is the probability that A = 2 given that A + B ≤ 5?

Table 3 shows that for 3 of these 10 outcomes, A = 2.

Thus, the conditional probability P(A = 2 | A + B ≤ 5) = 3/10 = 0.3.

Table 3. Each cell shows the sum A + B; the 3 cells with both A = 2 and A + B ≤ 5 lie in the row A = 2 (B = 1, 2, 3).
  +      B=1   B=2   B=3   B=4   B=5   B=6
  A=1     2     3     4     5     6     7
  A=2     3     4     5     6     7     8
  A=3     4     5     6     7     8     9
  A=4     5     6     7     8     9    10
  A=5     6     7     8     9    10    11
  A=6     7     8     9    10    11    12
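These counts can also be checked mechanically; a minimal Python enumeration of the same 36 outcomes (nothing beyond the tables above):

    from fractions import Fraction

    # The 36 equally likely outcomes of two fair dice: A is die 1, B is die 2.
    outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
    n = len(outcomes)

    p_a2   = Fraction(sum(1 for (a, b) in outcomes if a == 2), n)                  # P(A = 2)
    p_sum5 = Fraction(sum(1 for (a, b) in outcomes if a + b <= 5), n)              # P(A + B <= 5)
    p_both = Fraction(sum(1 for (a, b) in outcomes if a == 2 and a + b <= 5), n)   # P(A = 2 and A + B <= 5)

    print(p_a2)              # 1/6
    print(p_sum5)            # 5/18 (that is, 10/36)
    print(p_both / p_sum5)   # 3/10 = P(A = 2 | A + B <= 5)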

Statistical independence

Events A and B are defined to be statistically independent if

    P(A ∩ B) = P(A) P(B),

or equivalently (when the conditional probabilities are defined),

    P(A|B) = P(A)   and   P(B|A) = P(B).

That is, the occurrence of A does not affect the probability of B, and vice versa. Although these derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if P(A) or P(B) is 0, and the preferred definition is symmetrical in A and B.
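For instance, the first derived form follows in one line from the definition (assuming P(B) > 0 and independence in the preferred sense):

    \[
    P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)\,P(B)}{P(B)} = P(A).
    \]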

Common fallacies

These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question.

Assuming conditional probability is of similar size to its inverse

[Figure: A geometric visualisation of Bayes' theorem. In the table, the values ax, ay, bx and by give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that P(A|X) P(X) = P(X|A) P(A), i.e. P(A|X) = P(X|A) P(A) / P(X). Similar reasoning can be used to show that P(B|X) = P(X|B) P(B) / P(X), etc.]

In general, it cannot be assumed that P(A|B) ≈ P(B|A). This can be an insidious error, even for those who are highly conversant with statistics.[5] The relationship between P(A|B) and P(B|A) is given by Bayes' theorem:

    P(A|B) = P(B|A) P(A) / P(B).

That is, P(A|B) ≈ P(B|A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B).

Alternatively, noting that A ∩ B = B ∩ A, and applying the definition of conditional probability:

    P(A|B) P(B) = P(A ∩ B) = P(B ∩ A) = P(B|A) P(A).

Rearranging gives the result.
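The dice example above gives a concrete instance of how far apart the two can be:

    \[
    P(A{=}2 \mid A{+}B \le 5) = \tfrac{3}{10} = 0.3,
    \qquad
    P(A{+}B \le 5 \mid A{=}2) = \tfrac{3}{6} = 0.5,
    \]

and the ratio P(A = 2) / P(A + B ≤ 5) = (1/6) / (10/36) = 3/5 accounts for the difference, as Bayes' theorem requires.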

Assuming marginal and conditional probabilities are of similar size

In general, it cannot be assumed that P(A) ≈ P(A|B). These probabilities are linked through the law of total probability:

    P(A) = Σ_n P(A ∩ B_n) = Σ_n P(A|B_n) P(B_n),

where the events {B_n} form a countable partition of the sample space Ω.
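As a check using the dice example above, partitioning on the value of die 1:

    \[
    P(A{+}B \le 5) = \sum_{a=1}^{6} P(A{+}B \le 5 \mid A{=}a)\,P(A{=}a)
    = \tfrac{1}{6}\left(\tfrac{4}{6} + \tfrac{3}{6} + \tfrac{2}{6} + \tfrac{1}{6} + 0 + 0\right)
    = \tfrac{10}{36}.
    \]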

This fallacy may arise through selection bias.[6] For example, in the context of a medical claim, let S_C be the event that a sequela (chronic disease) S occurs as a consequence of circumstance (acute condition) C. Let H be the event that an individual seeks medical help. Suppose that in most cases, C does not cause S, so P(S_C) is low. Suppose also that medical attention is only sought if S has occurred due to C. From experience of patients, a doctor may therefore erroneously conclude that P(S_C) is high. The actual probability observed by the doctor is P(S_C|H).

Over- or under-weighting priors

Failing to take prior probability into account, partially or completely, is called base rate neglect. The reverse, insufficient adjustment from the prior probability, is conservatism.

Formal derivation

Formally, P(A|B) is defined as the probability of A according to a new probability function on the sample space, such that outcomes not in B have probability 0 and that it is consistent with all original probability measures.[7][8]

Let Ω be a sample space with elementary events {ω}. Suppose we are told the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. For events in B, it is reasonable to assume that the relative magnitudes of the probabilities will be preserved. For some constant scale factor α, the new distribution will therefore satisfy:

    1.  ω ∈ B : P(ω|B) = α P(ω)
    2.  ω ∉ B : P(ω|B) = 0
    3.  Σ_{ω ∈ Ω} P(ω|B) = 1.

Substituting 1 and 2 into 3 to select α:

    1 = Σ_{ω ∈ Ω} P(ω|B) = Σ_{ω ∈ B} α P(ω) = α P(B),   so   α = 1/P(B).

So the new probability distribution is

    1.  ω ∈ B : P(ω|B) = P(ω)/P(B)
    2.  ω ∉ B : P(ω|B) = 0.

Now for a general event A,

    P(A|B) = Σ_{ω ∈ A ∩ B} P(ω|B) = Σ_{ω ∈ A ∩ B} P(ω)/P(B) = P(A ∩ B)/P(B).
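A minimal Python sketch of this construction (the discrete distribution and events here are hypothetical): zero out the probabilities outside B and rescale the rest by 1/P(B).

    from fractions import Fraction

    # Hypothetical distribution over a small sample space Omega = {w1, w2, w3}.
    p = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}

    def conditioned_on(p, B):
        # The new distribution P(.|B): 0 outside B, original probabilities rescaled by 1/P(B) inside.
        p_b = sum(p[w] for w in B)
        if p_b == 0:
            raise ValueError("P(B) = 0: cannot condition on B")
        return {w: (p[w] / p_b if w in B else Fraction(0)) for w in p}

    B = {"w1", "w2"}
    p_given_b = conditioned_on(p, B)

    # For a general event A, summing the new distribution over A gives P(A ∩ B) / P(B).
    A = {"w2", "w3"}
    assert sum(p_given_b[w] for w in A) == Fraction(1, 4) / Fraction(3, 4)   # = 1/3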

References

  1. ^ Gut, Allan (2013). Probability: A Graduate Course (2nd ed.). New York, NY: Springer. ISBN 978-1-4614-4707-8.
  2. ^ Ross, Sheldon (2010). A First Course in Probability (8th ed.). Pearson Prentice Hall. ISBN 978-0-13-603313-4.
  3. ^ Casella, George; Berger, Roger L. (2002). Statistical Inference. Duxbury Press. ISBN 978-0-534-24312-8.
  4. ^ Gillies, Donald (2000). Philosophical Theories of Probability. Routledge. Chapter 4, "The subjective theory".
  5. ^ Paulos, J. A. (1988). Innumeracy: Mathematical Illiteracy and its Consequences. Hill and Wang. ISBN 0-8090-7447-8. (p. 63 et seq.)
  6. ^ Bruss, F. Thomas (March 2007). "Der Wyatt Earp Effekt". Spektrum der Wissenschaft.
  7. ^ Casella, George; Berger, Roger L. (1990). Statistical Inference. Duxbury Press. ISBN 0-534-11958-1. (p. 18 et seq.)
  8. ^ Grinstead and Snell's Introduction to Probability, p. 134.