Conditional probability
The conditional probability of an event is the probability that the event will happen given that (by assumption, presumption, assertion or evidence) some other event has also occurred.[1]
For example, the conditional probability of having a cold given that you are coughing might be 75%, meaning you probably have a cold if you are coughing. But the non-conditional probability (normally called just "probability") of having a cold may be only 5%, meaning that only 5% of the population as a whole has a cold, including people who are coughing and people who are not. Conditional probability is one of the most fundamental concepts in probability theory[2] and provides the language in which Bayes' theorem is written.
The expression P(A|B) is read "the probability of A given B" and denotes the probability of event A occurring given that event B has also occurred. It is also sometimes written $P_B(A)$.
Note that, in general, it is not necessary that B occur before A.
Conditional probabilities are a basic tool that is widely used in most types of statistics. But they can be quite slippery and require careful interpretation.[3]
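As a minimal numeric sketch of the cold example above (all population fractions here are invented for illustration), the conditional probability falls out of joint and marginal proportions, anticipating the formal definition below:

```python
# Illustrative (invented) population fractions for the cold/cough example.
p_cold_and_cough = 0.03   # 3% of people have a cold AND are coughing
p_cough = 0.04            # 4% of people are coughing (any cause)
p_cold = 0.05             # 5% of people have a cold

# P(cold | cough) = P(cold and cough) / P(cough)
p_cold_given_cough = p_cold_and_cough / p_cough
print(p_cold_given_cough)  # 0.75, versus the unconditional P(cold) = 0.05
```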
Definition
Conditioning on an event
Kolmogorov definition
Given two events A and B from the sigma-field of a probability space, with P(B) > 0, the conditional probability of A given B is defined as the quotient of the probability of the joint occurrence of A and B and the probability of B:

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$
This may be visualized as restricting the sample space to B. The logic behind this equation is that if the outcomes are restricted to B, this set serves as the new sample space.
Note that this is a definition, not a theoretical result: we simply denote the quantity $P(A \cap B)/P(B)$ as $P(A \mid B)$ and call it the conditional probability of A given B.
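On a finite sample space with equally likely outcomes, the definition can be applied by direct counting. The following sketch (the helper `conditional_probability` is our own naming) does exactly that:

```python
from fractions import Fraction

def conditional_probability(outcomes, A, B):
    """P(A|B) = P(A and B) / P(B) over equally likely outcomes."""
    in_B = [w for w in outcomes if B(w)]
    if not in_B:
        raise ValueError("P(B) = 0: conditional probability undefined")
    in_A_and_B = [w for w in in_B if A(w)]
    return Fraction(len(in_A_and_B), len(in_B))

# Example: one fair die; A = "roll is even", B = "roll is at least 4".
outcomes = range(1, 7)
p = conditional_probability(outcomes, lambda w: w % 2 == 0, lambda w: w >= 4)
print(p)  # 2/3, since B = {4, 5, 6} restricted to evens is {4, 6}
```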
As an axiom of probability
Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability:

$$P(A \cap B) = P(A \mid B)\,P(B).$$
Although mathematically equivalent, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Further, this "multiplication axiom" introduces a symmetry with the summation axiom for mutually exclusive events:[4]

$$P(A \cup B) = P(A) + P(B).$$
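As a standard worked instance of the multiplication form (our example, not one from the cited text): when drawing two cards from a well-shuffled 52-card deck without replacement,

$$P(\text{both aces}) = P(\text{first ace}) \cdot P(\text{second ace} \mid \text{first ace}) = \frac{4}{52} \cdot \frac{3}{51} = \frac{1}{221}.$$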
Definition with σ-algebra
If P(B) = 0, then the simple definition of P(A|B) is undefined. However, it is possible to define a conditional probability with respect to a σ-algebra of such events (such as those arising from a continuous random variable).
For example, if X and Y are non-degenerate and jointly continuous random variables with density $f_{X,Y}(x, y)$, then, if B has positive measure,

$$P(X \in A \mid Y \in B) = \frac{\int_{y \in B} \int_{x \in A} f_{X,Y}(x, y)\,dx\,dy}{\int_{y \in B} \int_{x \in \mathbb{R}} f_{X,Y}(x, y)\,dx\,dy}.$$
The case where B has zero measure can be dealt with directly only if $B = \{y_0\}$, representing a single point, in which case

$$P(X \in A \mid Y = y_0) = \frac{\int_{x \in A} f_{X,Y}(x, y_0)\,dx}{\int_{x \in \mathbb{R}} f_{X,Y}(x, y_0)\,dx}.$$
If A has measure zero, then the conditional probability is zero. An indication of why the more general case of zero measure cannot be dealt with in a similar way can be seen by noting that the limit, as all $\delta y_i$ approach zero, of

$$P\!\left(X \in A \;\middle|\; Y \in \bigcup_i [y_i, y_i + \delta y_i]\right) \approx \frac{\sum_i \int_{x \in A} f_{X,Y}(x, y_i)\,dx\,\delta y_i}{\sum_i \int_{x \in \mathbb{R}} f_{X,Y}(x, y_i)\,dx\,\delta y_i}$$

depends on their relationship as they approach zero. See conditional expectation for more information.
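For a concrete check of the single-point case, one can pick a simple density, say $f_{X,Y}(x, y) = x + y$ on the unit square (our choice for illustration; it integrates to 1 there), and evaluate the ratio of integrals numerically:

```python
# Numerical check of P(X <= 1/2 | Y = 1/2) for f(x,y) = x + y on [0,1]^2.

def f(x, y):
    return x + y

def integrate(g, a, b, n=10_000):
    """Simple midpoint-rule integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

y0 = 0.5
numerator = integrate(lambda x: f(x, y0), 0.0, 0.5)    # integral over x in A
denominator = integrate(lambda x: f(x, y0), 0.0, 1.0)  # integral over all x
print(numerator / denominator)  # ~0.375 = 3/8
```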
Conditioning on a random variable
Conditioning on an event may be generalized to conditioning on a random variable. Let X be a random variable taking values in a countable set $\{x_n\}$, and let A be an event. The conditional probability of A given X is defined as the random variable, written P(A|X), that takes the value $P(A \mid X = x_n)$ whenever $X = x_n$.
More formally:

$$P(A \mid X)(\omega) = P(A \mid X = X(\omega)).$$
The conditional probability P(A|X) is a function of X: if the function g is defined as

$$g(x) = P(A \mid X = x),$$

then

$$P(A \mid X) = g \circ X.$$
Note that P(A|X) and X are now both random variables. From the law of total probability, the expected value of P(A|X) is equal to the unconditional probability of A.
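This identity, E[P(A|X)] = P(A), can be verified by enumeration. In the sketch below the setup is ours: X is the value of the first of two fair dice and A is the event that the dice sum to at least 10:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely rolls
A = lambda w: w[0] + w[1] >= 10                  # event: sum is at least 10

# g(x) = P(A | X = x), where X is the first die
def g(x):
    matching = [w for w in outcomes if w[0] == x]
    return Fraction(sum(A(w) for w in matching), len(matching))

# E[P(A|X)] = sum over x of g(x) * P(X = x)
expected = sum(g(x) * Fraction(1, 6) for x in range(1, 7))
unconditional = Fraction(sum(A(w) for w in outcomes), len(outcomes))
print(expected, unconditional)  # both 1/6
```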
Example
Suppose that somebody secretly rolls two fair six-sided dice, and we must predict the outcome. Let A be the value rolled on die 1 and B the value rolled on die 2.
What is the probability that A = 2?
Table 1 shows the sample space of 36 outcomes; each cell gives the sum A + B.
Clearly, A = 2 in exactly 6 of the 36 outcomes, thus P(A=2) = 6⁄36 = 1⁄6.
Table 1 (row A = 2 in bold):

| + | B=1 | B=2 | B=3 | B=4 | B=5 | B=6 |
|---|---|---|---|---|---|---|
| A=1 | 2 | 3 | 4 | 5 | 6 | 7 |
| **A=2** | **3** | **4** | **5** | **6** | **7** | **8** |
| A=3 | 4 | 5 | 6 | 7 | 8 | 9 |
| A=4 | 5 | 6 | 7 | 8 | 9 | 10 |
| A=5 | 6 | 7 | 8 | 9 | 10 | 11 |
| A=6 | 7 | 8 | 9 | 10 | 11 | 12 |
Suppose it is revealed that A + B ≤ 5.
What is the probability that A + B ≤ 5?
Table 2 shows that A + B ≤ 5 for exactly 10 of the same 36 outcomes, thus P(A+B ≤ 5) = 10⁄36.
Table 2 (outcomes with A + B ≤ 5 in bold):

| + | B=1 | B=2 | B=3 | B=4 | B=5 | B=6 |
|---|---|---|---|---|---|---|
| A=1 | **2** | **3** | **4** | **5** | 6 | 7 |
| A=2 | **3** | **4** | **5** | 6 | 7 | 8 |
| A=3 | **4** | **5** | 6 | 7 | 8 | 9 |
| A=4 | **5** | 6 | 7 | 8 | 9 | 10 |
| A=5 | 6 | 7 | 8 | 9 | 10 | 11 |
| A=6 | 7 | 8 | 9 | 10 | 11 | 12 |
What is the probability that A = 2 given that A+B ≤ 5 ?
Table 3 shows that for 3 of these 10 outcomes, A = 2.
Thus, the conditional probability P(A=2 | A+B ≤ 5) = 3⁄10 = 0.3.
Table 3 (outcomes with A = 2 and A + B ≤ 5 in bold):

| + | B=1 | B=2 | B=3 | B=4 | B=5 | B=6 |
|---|---|---|---|---|---|---|
| A=1 | 2 | 3 | 4 | 5 | 6 | 7 |
| A=2 | **3** | **4** | **5** | 6 | 7 | 8 |
| A=3 | 4 | 5 | 6 | 7 | 8 | 9 |
| A=4 | 5 | 6 | 7 | 8 | 9 | 10 |
| A=5 | 6 | 7 | 8 | 9 | 10 | 11 |
| A=6 | 7 | 8 | 9 | 10 | 11 | 12 |
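The counts behind Tables 1–3 can be reproduced by enumerating all 36 outcomes, for example:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))                # all 36 (A, B) pairs
n_A2 = sum(1 for a, b in rolls if a == 2)                   # Table 1: 6 outcomes
n_low = sum(1 for a, b in rolls if a + b <= 5)              # Table 2: 10 outcomes
n_both = sum(1 for a, b in rolls if a == 2 and a + b <= 5)  # Table 3: 3 outcomes

print(Fraction(n_A2, 36))       # 1/6
print(Fraction(n_low, 36))      # 5/18 (= 10/36)
print(Fraction(n_both, n_low))  # 3/10
```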
Statistical independence
Events A and B are defined to be statistically independent if

$$P(A \cap B) = P(A)\,P(B),$$

or equivalently,

$$P(A \mid B) = P(A) \quad\text{and}\quad P(B \mid A) = P(B).$$
That is, the occurrence of A does not affect the probability of B, and vice versa. Although the derived forms may seem more intuitive, they are not the preferred definition as the conditional probabilities may be undefined if P(A) or P(B) are 0, and the preferred definition is symmetrical in A and B.
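A quick check with two fair dice (our example): let A be the event "the first die is even" and B the event "the sum is 7". The product definition holds, so A and B are independent:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))
A = lambda w: w[0] % 2 == 0     # first die is even
B = lambda w: w[0] + w[1] == 7  # sum is seven

P = lambda E: Fraction(sum(E(w) for w in rolls), len(rolls))
P_AB = Fraction(sum(A(w) and B(w) for w in rolls), len(rolls))

print(P_AB == P(A) * P(B))  # True: 1/12 == 1/2 * 1/6
```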
Common fallacies
- These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question.
Assuming conditional probability is of similar size to its inverse
In general, it cannot be assumed that P(A|B) ≈ P(B|A). This can be an insidious error, even for those who are highly conversant with statistics.[5] The relationship between P(A|B) and P(B|A) is given by Bayes' theorem:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$
That is, P(A|B) ≈ P(B|A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B).
Alternatively, noting that A ∩ B = B ∩ A and applying the definition of conditional probability twice:

$$P(A \mid B)\,P(B) = P(A \cap B) = P(B \cap A) = P(B \mid A)\,P(A).$$

Rearranging gives the result.
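A hypothetical screening test (all numbers invented for illustration) makes the asymmetry concrete: even with P(positive | disease) = 0.99, a rare disease keeps P(disease | positive) small:

```python
# Invented numbers for illustration only.
p_disease = 0.001           # P(A): 1 in 1000 has the disease
p_pos_given_disease = 0.99  # P(B|A): test sensitivity
p_pos_given_healthy = 0.05  # false positive rate

# P(B) by the law of total probability
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(p_disease_given_pos)  # ~0.019, nowhere near 0.99
```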
Assuming marginal and conditional probabilities are of similar size
In general, it cannot be assumed that P(A) ≈ P(A|B). These probabilities are linked through the law of total probability:
$$P(A) = \sum_n P(A \mid B_n)\,P(B_n),$$

where the events $\{B_n\}$ form a countable partition of $\Omega$.
This fallacy may arise through selection bias.[6] For example, in the context of a medical claim, let SC be the event that a sequela (chronic disease) S occurs as a consequence of circumstance (acute condition) C. Let H be the event that an individual seeks medical help. Suppose that in most cases, C does not cause S so P(SC) is low. Suppose also that medical attention is only sought if S has occurred due to C. From experience of patients, a doctor may therefore erroneously conclude that P(SC) is high. The actual probability observed by the doctor is P(SC|H).
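A small simulation (all rates invented) makes the gap between P(SC) and P(SC|H) concrete:

```python
import random

random.seed(0)
n = 100_000                  # simulated occurrences of circumstance C
sequelae = 0                 # times the sequela S followed C
helped = 0                   # patients who sought help (event H)
sequelae_among_helped = 0

for _ in range(n):
    s = random.random() < 0.02  # P(SC) is low: 2% (invented rate)
    h = s                       # help is sought only when S has occurred
    sequelae += s
    if h:
        helped += 1
        sequelae_among_helped += s

print(sequelae / n)                    # ~0.02: the true P(SC)
print(sequelae_among_helped / helped)  # 1.0: what the doctor sees, P(SC|H)
```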
Over- or under-weighting priors
Not taking prior probability into account, partially or completely, is called base rate neglect. The reverse, insufficient adjustment from the prior probability, is called conservatism.
Formal derivation
Formally, P(A|B) is defined as the probability of A according to a new probability function on the sample space, such that outcomes not in B have probability 0 and that it is consistent with all original probability measures.[7][8]
Let Ω be a sample space with elementary events {ω}. Suppose we are told that the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. For events in B, it is reasonable to assume that the relative magnitudes of the probabilities will be preserved. For some constant scale factor α, the new distribution will therefore satisfy:

1. $\omega \in B \implies P(\omega \mid B) = \alpha P(\omega)$
2. $\omega \notin B \implies P(\omega \mid B) = 0$
3. $\sum_{\omega \in \Omega} P(\omega \mid B) = 1.$
Substituting 1 and 2 into 3 to select α:

$$1 = \sum_{\omega \in \Omega} P(\omega \mid B) = \sum_{\omega \in B} \alpha P(\omega) = \alpha P(B) \quad\Rightarrow\quad \alpha = \frac{1}{P(B)}.$$
So the new probability distribution is

$$P(\omega \mid B) = \begin{cases} \dfrac{P(\omega)}{P(B)}, & \omega \in B \\[4pt] 0, & \omega \notin B. \end{cases}$$
Now for a general event A,

$$P(A \mid B) = \sum_{\omega \in A \cap B} P(\omega \mid B) = \sum_{\omega \in A \cap B} \frac{P(\omega)}{P(B)} = \frac{P(A \cap B)}{P(B)}.$$
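As a sanity check of this derivation on a non-uniform space (the weights below are invented), renormalizing the outcomes in B reproduces P(A ∩ B)/P(B):

```python
from fractions import Fraction

# A loaded six-sided die (weights invented for illustration; they sum to 1).
P = {1: Fraction(1, 12), 2: Fraction(1, 12), 3: Fraction(1, 6),
     4: Fraction(1, 6), 5: Fraction(1, 4), 6: Fraction(1, 4)}

A = {2, 4, 6}  # event: even roll
B = {4, 5, 6}  # conditioning event: roll at least 4

P_B = sum(P[w] for w in B)
# New distribution: alpha * P(w) on B, zero elsewhere, with alpha = 1/P(B)
P_cond = {w: (P[w] / P_B if w in B else Fraction(0)) for w in P}

lhs = sum(P_cond[w] for w in A)       # P(A|B) via the renormalized distribution
rhs = sum(P[w] for w in A & B) / P_B  # P(A and B) / P(B)
print(lhs == rhs, lhs)  # True 5/8
```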
See also
- Borel–Kolmogorov paradox
- Chain rule (probability)
- Class membership probabilities
- Conditional probability distribution
- Conditioning (probability)
- Joint probability distribution
- Monty Hall problem
- Posterior probability
References
- ^ Gut, Allan (2013). Probability: A Graduate Course (2 ed.). New York, NY: Springer. ISBN 978-1-4614-4707-8.
- ^ Sheldon Ross, A First Course in Probability, 8th Edition (2010), Pearson Prentice Hall, ISBN 978-0-13-603313-4
- ^ George Casella and Roger L. Berger, Statistical Inference (2002), Duxbury Press, ISBN 978-0-534-24312-8
- ^ Gillies, Donald (2000); "Philosophical Theories of Probability"; Routledge; Chapter 4 "The subjective theory"
- ^ Paulos, J.A. (1988) Innumeracy: Mathematical Illiteracy and its Consequences, Hill and Wang. ISBN 0-8090-7447-8 (p. 63 et seq.)
- ^ F. Thomas Bruss, "Der Wyatt Earp Effekt", Spektrum der Wissenschaft, March 2007
- ^ George Casella and Roger L. Berger (1990), Statistical Inference, Duxbury Press, ISBN 0-534-11958-1 (p. 18 et seq.)
- ^ Grinstead and Snell's Introduction to Probability, p. 134
External links
- Weisstein, Eric W. "Conditional Probability". MathWorld.
- F. Thomas Bruss, Der Wyatt-Earp-Effekt oder die betörende Macht kleiner Wahrscheinlichkeiten (in German), Spektrum der Wissenschaft (German edition of Scientific American), Vol. 2, 110–113, (2007).
- Conditional Probability Problems with Solutions
- Visual explanation of conditional probability