Talk:Pre- and post-test probability
WikiProject Statistics (B-class, Mid-importance)
Motivation for having own article
I forked this section from Likelihood ratios in diagnostic testing, partly to provide a common fork for that one and positive predictive value, and partly because so many incoming links (such as positive pre-test probability, negative post-test probability, negative post-test odds, etc.) cannot feasibly be redirected to a section. Mikael Häggström (talk) 08:30, 30 January 2011 (UTC)
Confusion about example
I'm confused about why we go through all the rigmarole with odds, likelihood ratios, etc., in the given example.
What I take it we're after is the post-test probability. I.e., what we want to know is P(Disease = True | Test = Positive).

But that can be read directly off of the chart given in the article, in one calculation, by the definition of conditional probability:

P(Disease = True | Test = Positive) = P(Disease = True and Test = Positive) / P(Test = Positive)
That seems way easier than the complicated multi-step process described in the example. So why would you ever do it that way?
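As a minimal sketch of that one-step calculation (Python, with made-up counts rather than the numbers from the article's chart):

```python
# Made-up counts read off a hypothetical 2x2 chart
true_positives = 20    # disease present, test positive
false_positives = 180  # disease absent, test positive

# Post-test probability after a positive result, straight from the definition
# of conditional probability (this is just the positive predictive value)
post_test_probability = true_positives / (true_positives + false_positives)
print(post_test_probability)  # 0.1 with these made-up counts
```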
In fact, it's easy to prove mathematically. Let's let the following table be true:
|  | Disease = True | Disease = False |
|---|---|---|
| Test = Positive | a | b |
| Test = Negative | c | d |
Where a,b,c,d are probabilities. I.e. a+b+c+d=1. (This is without loss of generality.) Then, the definitions as given in the article are:
- Sensitivity = P(Test = Positive | Disease = True) = a / (a + c)
- Specificity = P(Test = Negative | Disease = False) = d / (b + d)
And now we can just follow the algorithm of the article:
- Likelihood ratio positive = sensitivity / (1 − specificity) = (a / (a + c)) / (b / (b + d)) = a(b + d) / (b(a + c))
- Pretest probability = P(Disease = True) = a + c
- Pretest odds = pretest prob / (1 − pretest prob) = (a + c) / (b + d)
- Positive posttest odds = pretest odds × likelihood ratio positive = ((a + c) / (b + d)) × (a(b + d) / (b(a + c))) = a / b
- Positive posttest probability = positive posttest odds / (1 + positive posttest odds) = (a / b) / (1 + a / b) = a / (a + b)
Thus the multi-step process just reproduces a / (a + b), and we see that it would be way easier to calculate the positive predictive value directly.
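To double-check that algebra, here is a small symbolic sketch (Python with sympy; the symbols a, b, c, d are the table cells above, and the only added assumption is a + b + c + d = 1):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', positive=True)

sensitivity = a / (a + c)
specificity = d / (b + d)
pretest_prob = a + c   # assumes the patient's prior equals the prevalence
pretest_odds = pretest_prob / (1 - pretest_prob)
lr_positive = sensitivity / (1 - specificity)
posttest_odds = pretest_odds * lr_positive
posttest_prob = posttest_odds / (1 + posttest_odds)

# Impose a + b + c + d = 1 (so 1 - (a + c) = b + d), then compare with a/(a + b)
posttest_prob = posttest_prob.subs(d, 1 - a - b - c)
print(sp.simplify(posttest_prob - a / (a + b)))  # prints 0
```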
Now, all of this is assuming that the pretest probability for the patient in question is the same as the population probability. However, if that is not the case, then the entire chart is invalid. By using the chart you are assuming that the properties of the diagnostic test (i.e. the predictive values, sensitivity, specificity, etc.) are the same for the population (or the sample group) as they are for the patient in question. There's no reason to think that has to be the case. If we're willing to assume that a + c for our patient is different from that of the sample group, why are we willing to assume that a / (a + c) is the same?
I just think that perhaps the article should point out some of this.
- I agree your example is simpler. The calculation from the likelihood ratio is better only if the pre-test probability differs from the prevalence in the population, but, as you pointed out, that was not the case in the example, and therefore the example is a bit of overkill (the reason I took it was that it was easy to copy-paste from Positive predictive value). I'm now doing a reorganization of the article to hopefully make it simpler. Mikael Häggström (talk) 19:19, 24 February 2011 (UTC)
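To illustrate the case mentioned above, where the likelihood-ratio route actually earns its keep (sensitivity and specificity taken from a study population, but a patient-specific pre-test probability), here is a hedged sketch with made-up numbers:

```python
# Made-up test characteristics estimated from a study population
sensitivity = 0.90
specificity = 0.95
lr_positive = sensitivity / (1 - specificity)  # about 18 (up to floating-point noise)

# Patient-specific pre-test probability, deliberately different from the prevalence
pretest_prob = 0.30
pretest_odds = pretest_prob / (1 - pretest_prob)

# After a positive test result
posttest_odds = pretest_odds * lr_positive
posttest_prob = posttest_odds / (1 + posttest_odds)
print(round(posttest_prob, 3))  # ~0.885 with these made-up numbers
```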
Footnotes added
I added more footnotes from a reference, as requested by the tag [1]. The article surely needs more referenced entries, but I don't think it specifically lacks in-line citations of existing references. Mikael Häggström (talk) 09:09, 27 August 2011 (UTC)
Disadvantage of Likelihood ratios
I deleted the disadvantage of LR in the table because it is possible to calculate likelihood ratios for tests with continuous values or more than two outcomes in a way similar to the calculation for dichotomous outcomes: a separate likelihood ratio is simply calculated for every level of test result; these are called interval or stratum-specific likelihood ratios (a small sketch of this calculation follows the reference below).[1] Gcastellanos (talk) 10:56, 16 February 2015 (UTC)
- ^ Brown MD, Reeves MJ. (2003). "Evidence-based emergency medicine/skills for evidence-based emergency care. Interval likelihood ratios: another advantage for the evidence-based diagnostician". Ann Emerg Med. 42 (2): 292–297. doi:10.1067/mem.2003.274. PMID 12883521.
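A minimal sketch of the interval (stratum-specific) calculation described above, with made-up counts per test-result level; each level's likelihood ratio is P(result in that level | disease) / P(result in that level | no disease):

```python
# Hypothetical counts of patients per test-result level
#           (with disease, without disease)
levels = {
    "low":    (5, 400),
    "medium": (30, 80),
    "high":   (65, 20),
}

total_diseased = sum(diseased for diseased, _ in levels.values())  # 100
total_healthy = sum(healthy for _, healthy in levels.values())     # 500

# Interval (stratum-specific) likelihood ratio for each level:
# P(result in level | disease) / P(result in level | no disease)
for level, (diseased, healthy) in levels.items():
    lr = (diseased / total_diseased) / (healthy / total_healthy)
    print(level, round(lr, 2))
# low 0.06, medium 1.88, high 16.25 with these made-up counts
```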