Talk:Positive and negative predictive values
Table and edits
See Talk:Sensitivity (tests) re the past wish list for a simpler description, setting out what it is before launching into mathematical jargon. I have also added a table, and in Sensitivity (tests) a worked example. The table is now consistent across Sensitivity, Specificity, PPV & NPV, with the relevant row or column for each calculation highlighted. David Ruben Talk 02:45, 11 October 2006 (UTC)
"Physician's Gold Standard" Remove?
"Physician's gold standard" seems to be an unhelpful phrase as it is used in this article.
My experience has been that when "gold standard" is used in this context it refers to the reference test against which the accuracy of a test is measured. As we all know, sensitivity, specificity, PPV, etc., require a "gold standard" test for reference -- otherwise we don't have a basis for claims about % true positives and % true negatives.
Here it seems that "physician's gold standard" means something like "it is the statistical property of a test that is most useful to physicians".
It seems that either the author was confused about the use of "gold standard" in biostatistics or there's another (unfortunate) use of the phrase that I'm not familiar with. Since I don't know which, I'm not editing the page. If others agree, perhaps this phrase should be replaced.
--will 02:19, 24 July 2007 (UTC)
The need for an unequivocal definition of positive predictive value
Let's consider the following table (Grant Innes, 2006, CJEM, Clinical utility of novel cardiac markers: let the buyer beware).
Table 3. Diagnostic performance of ischemia modified albumin (IMA) in a low (5%) prevalence population.
 ACS       Yes    No    Total
 IMA +      35    722     757    Sensitivity (true-positive rate) = 35/50 = 70%
 IMA –      15    228     243    Specificity (true-negative rate) = 228/950 = 24%
 Total      50    950    1000    Positive predictive value = 35/757 = 4.6%
                                 Negative predictive value = 228/243 = 94%
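For reference, a minimal Python sketch (purely illustrative; the variable names are mine) that reproduces these figures from the four raw counts:

 tp, fp = 35, 722   # IMA+ with and without ACS
 fn, tn = 15, 228   # IMA- with and without ACS
 n = tp + fp + fn + tn                      # 1000
 sensitivity = tp / (tp + fn)               # 35/50   = 0.70
 specificity = tn / (fp + tn)               # 228/950 = 0.24
 ppv = tp / (tp + fp)                       # 35/757  ≈ 0.046
 npv = tn / (fn + tn)                       # 228/243 ≈ 0.94
 prevalence = (tp + fn) / n                 # 50/1000 = 0.05
 print(sensitivity, specificity, ppv, npv, prevalence)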
The positive predictive value is smaller than the prevalence. We must conclude that a positive test result decreases the probability of disease, or in other words that the post-test probability of disease, given a positive result, is smaller than the pre-test probability (prevalence): a very strange and unusual conclusion.
From a statistical point of view this very strange conclusion can be avoided by interchanging the rows of the table: IMA– then becomes the positive test result. This operation results in a positive predictive value of 15/243 = 6.17%. The conclusion is then that a positive test result, if the test is of any value at all, increases the post-test probability as it is expected to do and in no case decreases it.
This example illustrates the need for an unequivocal definition of a positive test result. If a positive test result is unequivocally defined, the positive predictive value is also mathematically unequivocally defined. A text providing such an unequivocal definition was removed by someone who called it 'garble'. I intend to put the text back; any objections? —Preceding unsigned comment added by Michel soete (talk • contribs) 18:57, 22 September 2007 (UTC)
Yes - makes no sense, 'garble' indeed. I've removed it and placed it here on the talk page where we can work on this.
And, alternatively:
PPV = PR * LR+ / (PR * (LR+ - 1) + 1)
wherein PR = the prevalence (pre-test probability) of the disease, * = the multiplication sign and LR+ = the positive likelihood ratio, with LR+ = sensitivity / (1 - specificity). The prevalence, the sensitivity and the specificity must be expressed as proportions (per one), not as percentages or per mille and so on. The frequency of the true positives must exceed or equal its expected value, mathematically expressed: True Positives >= (True Positives + False Positives)(True Positives + False Negatives) / N, wherein N = True Positives + False Positives + True Negatives + False Negatives. If this condition is not met and the sensitivity differs from 0.50 (50%), then two different results after the calculation of sensitivity are possible, since the rows of a two-by-two table can be interchanged and then a former positive result can be called a negative and a former negative result a positive (Michel Soete, Wikipedia, Dutch version, Sensitiviteit en specificiteit, 2006, December 16th).
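As an illustration only, here is how the quoted formula could be written out in Python (a sketch under the quoted definitions; the function name is mine):

 def ppv_from_lr(prevalence, sensitivity, specificity):
     # PPV = PR * LR+ / (PR * (LR+ - 1) + 1), all inputs as proportions (per one)
     lr_pos = sensitivity / (1 - specificity)
     return prevalence * lr_pos / (prevalence * (lr_pos - 1) + 1)

 # IMA example above: prevalence 5%, sensitivity 70%, specificity 24%
 print(ppv_from_lr(0.05, 0.70, 0.24))   # ≈ 0.046, i.e. the 4.6% from Table 3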
As a start, let's use the same terminology as the rest of the article, i.e. call PR just Prevalence; no need to explain maths symbols. If LR+ is "sensitivity / (1 - specificity)", then I get:
        Prevalence * sensitivity / (1 - specificity)
 PPV = ---------------------------------------------------------
        Prevalence * ((sensitivity / (1 - specificity)) - 1) + 1
Let's multiply through by (1 - specificity):
                     Prevalence * sensitivity
 PPV = ------------------------------------------------------------------
        Prevalence * (sensitivity - (1 - specificity)) + (1 - specificity)
Which is:
                            Prevalence * sensitivity
 PPV = ------------------------------------------------------------------------------------
        Prevalence * sensitivity - Prevalence + Prevalence * specificity + 1 - specificity
and so to:
                    Prevalence * sensitivity
 PPV = ---------------------------------------------------------
        Prevalence * sensitivity + (1 - specificity)(1 - Prevalence)
i.e. exactly the same as the last formula already given in the article! This therefore adds no new insight into its derivation or meaning.
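A quick numeric spot-check of that algebraic equivalence (illustrative sketch, my own function names):

 def ppv_lr_form(prev, sens, spec):
     lr = sens / (1 - spec)
     return prev * lr / (prev * (lr - 1) + 1)

 def ppv_standard_form(prev, sens, spec):
     return prev * sens / (prev * sens + (1 - spec) * (1 - prev))

 for prev, sens, spec in [(0.05, 0.70, 0.24), (0.20, 0.90, 0.80), (0.01, 0.99, 0.95)]:
     assert abs(ppv_lr_form(prev, sens, spec) - ppv_standard_form(prev, sens, spec)) < 1e-12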
As for "The frequency of the True Positives must be this frequency that exceeds or equals the expected value, mathematically expressed: True Positives >= (True positives + False Positives) (True Positives + False Negatives) / N wherein N = True Positives + False Positives + True Negatives + False Negatives. If this condition is not met and if the sensitivity differs from .50 (50%) then two different results after the calculation of sensitivity are possible since the rows of two by two tables can be interchanged and then a former positive result can be called a negative, a former negative result can be called a positive" - sorry can't even begin to get my head around this.
- Why must TP be larger than or equal to its expected value?
- The condition you seek reduces to requiring that PPV >= prevalence (see the sketch after this list), but what is this expressing in everyday words?
- How can two different results be possible?
- Surely it is just needless convolution to start supposing what happens if we switch rows about? One might as well switch a "test result that excluded a disease" to a "test result that confirmed a disease" - one can't start switching values. One defines at the start what a positive or negative result means (i.e. what the null hypothesis is) and then should stick to it throughout the analysis. David Ruben Talk 11:46, 27 September 2007 (UTC)
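To make the second point concrete, a small illustrative check (my own naming) that the quoted count condition is the same thing as asking PPV >= prevalence, using the IMA table and the same table with its rows swapped:

 def count_condition(tp, fp, fn, tn):
     # True Positives >= (TP + FP)(TP + FN) / N
     n = tp + fp + fn + tn
     return tp >= (tp + fp) * (tp + fn) / n

 def ppv_at_least_prevalence(tp, fp, fn, tn):
     n = tp + fp + fn + tn
     return tp / (tp + fp) >= (tp + fn) / n

 # IMA table (PPV 4.6% < prevalence 5%) and the row-swapped table (PPV 6.17% > 5%)
 for counts in [(35, 722, 15, 228), (15, 228, 35, 722)]:
     print(counts, count_condition(*counts), ppv_at_least_prevalence(*counts))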
Allowing ambiguity
My mother tongue is Dutch. Initially I did not quite understand what 'garble' is, but now I think it means the same as nonsense.
Let us assume that allowing ambiguity is a good option. The following tables can then be constructed:
             D+       D-                        D+      D-
 blue (P)    99 (a)    1 (b)       red (P)       1      99
 red (N)      1 (c)   99 (d)       blue (N)     99       1
In constructing these tables I respected some conventions: the frequencies of diseased people are in the first column, the frequencies of positive test results in the first row, the frequency of the true positives in cell a, and so on.
Now we can write that sensitivity is a / (a + c). For those for whom blue is positive the sensitivity is 99%; for those for whom red is positive the sensitivity is 1%. The positive predictive value (a / (a + b)) is 99% (blue is positive) or 1% (red is positive).
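A minimal illustrative sketch (Python, my own naming) of the two labellings above:

 def summarise(a, b, c, d):                 # a = TP, b = FP, c = FN, d = TN
     return {"sensitivity": a / (a + c), "ppv": a / (a + b)}

 print("blue counted as positive:", summarise(99, 1, 1, 99))    # both 99%
 print("red counted as positive: ", summarise(1, 99, 99, 1))    # both 1%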
Such a possibility for ambiguity is not in line with traditional medical thinking; it therefore leads to (at least seemingly) contradictory statements and thus to confusion.
Megan Davidson writes (2002, The interpretation of diagnostic tests: A primer for physiotherapists): 'Where sensitivity or specificity is extremely high (98-100%), interpretation of test results is simple. If the sensitivity is extremely high, we can be sure that a negative test result will rule the disease out.' If ambiguity is allowed we have to add 'or extremely low (0-2%)' and 'If the sensitivity is extremely low, we can be sure that a positive test result will rule the disease out'. Moreover, the relatively new concepts SpPIn and SnNOut are described in the article. These are acronyms: a SpPIn is a test with such an extremely high Specificity that if a test result is Positive the disease can be ruled In; a SnNOut is a test with such an extremely high Sensitivity that if the test result is Negative the disease can be ruled Out.
Thus our demand that a exceed or equal the expected value in cell a is a solid basis for these concepts and their names, and for the classical ideas that they incorporate. Also the strongly held idea that a positive test result always points towards disease finds a firm basis in this demand.
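Applied to the tables above, the demand can be checked mechanically (illustrative sketch, my own naming):

 def meets_demand(a, b, c, d):
     # observed cell a must be at least its expected value (a + b)(a + c) / N
     n = a + b + c + d
     return a >= (a + b) * (a + c) / n

 print(meets_demand(99, 1, 1, 99))   # blue positive: 99 >= 100*100/200 = 50 -> True
 print(meets_demand(1, 99, 99, 1))   # red positive:   1 >= 50 -> False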
I hope that the argumentation above is convincing enough and that the removed text will be put back by the person who removed it.