Score test
In statistics, the Lagrange multiplier test (LM test) is one of three classical principles of hypothesis testing, together with the Wald test and the likelihood-ratio test, for testing a null hypothesis about a parameter of interest.[1] The basic idea of the LM test is that if the restricted estimator is near the maximum of the likelihood function, the gradient of the likelihood function, known as the score, evaluated at the restricted estimator should be close to zero and in fact asymptotically follows a normal distribution with mean zero. This result was first proved by C. R. Rao in 1948,[2] leading to the alternative name score test. An alternative and numerically identical version of the test was derived by S. D. Silvey in 1959,[3] using the vector of Lagrange multipliers associated with the constraint in the Lagrangian expression of the constrained likelihood function, which led to the test's more commonly used name.
The main advantage of the LM test over the Wald test and likelihood ratio test is that the LM test only requires the computation of the restricted estimator. This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space.[citation needed]
Single parameter test
The statistic
Let $L(\theta \mid x)$ be the likelihood function, which depends on a univariate parameter $\theta$, and let $x$ be the data. The score $U(\theta)$ is defined as

$$U(\theta) = \frac{\partial \log L(\theta \mid x)}{\partial \theta}.$$
The Fisher information is[4]

$$I(\theta) = -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}} \log L(\theta \mid x)\right],$$

where the expectation is taken over the data.
The statistic to test $\mathcal{H}_0 : \theta = \theta_0$ is

$$S(\theta_0) = \frac{U(\theta_0)^{2}}{I(\theta_0)},$$

which has an asymptotic $\chi^{2}_{1}$ distribution when $\mathcal{H}_0$ is true.
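As a concrete illustration, the following sketch evaluates $U$, $I$, and $S$ for a binomial proportion under $\mathcal{H}_0 : \theta = \theta_0$; the counts and the hypothesized value are made-up example numbers, not data from the article.

```python
# Minimal sketch of the single-parameter score test for a binomial
# proportion, H0: theta = theta0.  The sample (n, x) and theta0 are
# illustrative assumptions only.
from scipy.stats import chi2

n, x, theta0 = 100, 62, 0.5        # trials, successes, hypothesized probability

# Score U(theta0): derivative of the binomial log-likelihood at theta0
U = x / theta0 - (n - x) / (1.0 - theta0)

# Fisher information I(theta0) of a binomial sample
info = n / (theta0 * (1.0 - theta0))

# Score statistic S = U^2 / I, asymptotically chi-squared with 1 df under H0
S = U ** 2 / info
p_value = chi2.sf(S, df=1)

print(S, p_value)                  # 5.76, about 0.016
```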
Note on notation
Note that some texts use an alternative notation, in which the statistic $S^{*}(\theta) = \sqrt{S(\theta)}$ is tested against a normal distribution. This approach is equivalent and gives identical results.
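For the illustrative binomial numbers above, $S^{*} = \sqrt{5.76} = 2.4$, and a two-sided test of 2.4 against the standard normal distribution gives the same p-value of about 0.016 as referring $S = 5.76$ to a $\chi^{2}_{1}$ distribution.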
Justification
The case of a likelihood with nuisance parameters
As most powerful test for small deviations
The score test rejects the null hypothesis when

$$\left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta = \theta_0} \geq C,$$

where $L$ is the likelihood function, $\theta_0$ is the value of the parameter of interest under the null hypothesis, and $C$ is a constant set depending on the size of the test desired (i.e. the probability of rejecting $H_0$ when $H_0$ is true; see Type I error).
The score test is the most powerful test for small deviations from $\theta_0$. To see this, consider testing $\theta = \theta_0$ versus $\theta = \theta_0 + h$. By the Neyman–Pearson lemma, the most powerful test has the form

$$\frac{L(\theta_0 + h \mid x)}{L(\theta_0 \mid x)} \geq K.$$
Taking the log of both sides yields

$$\log L(\theta_0 + h \mid x) - \log L(\theta_0 \mid x) \geq \log K.$$
The score test follows on making the substitution (by Taylor series expansion)

$$\log L(\theta_0 + h \mid x) \approx \log L(\theta_0 \mid x) + h \left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta = \theta_0}$$
and identifying the $C$ above with $\log K$.
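The first-order substitution can be checked numerically. The sketch below compares the exact log-likelihood ratio with $h \cdot U(\theta_0)$ for a binomial sample; the counts, $\theta_0$, and the step $h$ are illustrative assumptions.

```python
# Numeric check of the Taylor-expansion step for a binomial log-likelihood.
# All numbers are illustrative assumptions.
from scipy.stats import binom

n, x, theta0, h = 100, 58, 0.5, 0.01

# Exact log-likelihood ratio between theta0 + h and theta0
exact = binom.logpmf(x, n, theta0 + h) - binom.logpmf(x, n, theta0)

# First-order approximation h * U(theta0), with the binomial score
U = x / theta0 - (n - x) / (1.0 - theta0)
approx = h * U

print(exact, approx)   # about 0.300 versus 0.320: equal to first order in h
```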
Relationship with other hypothesis tests
The likelihood-ratio test, the Wald test, and the score test are asymptotically equivalent tests of hypotheses.[5][6] When testing nested models, the statistics of the three tests converge to a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom between the two models.
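As an illustration of this asymptotic equivalence, the sketch below computes all three statistics for the same hypothetical binomial sample used earlier; in a finite sample the three values are close but not identical.

```python
# Likelihood-ratio, Wald, and score statistics for one binomial sample,
# H0: theta = theta0.  The counts are illustrative assumptions.
from scipy.stats import binom

n, x, theta0 = 100, 62, 0.5
theta_hat = x / n                                   # unrestricted MLE

def loglik(t):
    return binom.logpmf(x, n, t)

lr    = 2.0 * (loglik(theta_hat) - loglik(theta0))                       # likelihood ratio
wald  = (theta_hat - theta0) ** 2 * n / (theta_hat * (1 - theta_hat))    # Wald
score = (x - n * theta0) ** 2 / (n * theta0 * (1 - theta0))              # score / LM

print(lr, wald, score)   # roughly 5.82, 6.11, 5.76; each referred to chi2(1)
```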
Multiple parameters
A more general score test can be derived when there is more than one parameter. Suppose that $\widehat{\theta}_0$ is the maximum likelihood estimate of $\theta$ under the null hypothesis $H_0$, while $U$ and $I$ are, respectively, the score vector and the Fisher information matrix of the unrestricted model, both evaluated at $\widehat{\theta}_0$. Then

$$U^{\mathsf T}(\widehat{\theta}_0)\, I^{-1}(\widehat{\theta}_0)\, U(\widehat{\theta}_0) \sim \chi^{2}_{k}$$

asymptotically under $H_0$, where $k$ is the number of constraints imposed by the null hypothesis,

$$U(\widehat{\theta}_0) = \frac{\partial \log L(\widehat{\theta}_0 \mid x)}{\partial \theta},$$

and

$$I(\widehat{\theta}_0) = -\operatorname{E}\!\left(\frac{\partial^{2} \log L(\widehat{\theta}_0 \mid x)}{\partial \theta \, \partial \theta'}\right).$$

This can be used to test $H_0$.
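A hedged sketch of the quadratic form $U^{\mathsf T} I^{-1} U$ for a trinomial sample with two free cell probabilities follows; the counts and the hypothesized probabilities are made-up values, and the score and information are written out analytically for this particular model.

```python
# Multi-parameter score statistic U^T I^{-1} U for a trinomial sample,
# testing H0: p = p0.  Counts and p0 are illustrative assumptions; the
# first two cell probabilities are the free parameters (p3 = 1 - p1 - p2).
import numpy as np
from scipy.stats import chi2

counts = np.array([30, 50, 20])          # observed cell counts (assumed)
p0     = np.array([0.25, 0.50, 0.25])    # hypothesized cell probabilities
N      = counts.sum()

# Score vector at p0: d(loglik)/dp_i = n_i/p_i - n_3/p_3 for i = 1, 2
U = counts[:2] / p0[:2] - counts[2] / p0[2]

# Fisher information at p0: I_ij = N * (delta_ij / p_i + 1 / p_3)
info = N * (np.diag(1.0 / p0[:2]) + 1.0 / p0[2])

S = U @ np.linalg.solve(info, U)         # chi-squared with 2 df under H0
p_value = chi2.sf(S, df=2)

print(S, p_value)                        # 2.0, about 0.37 for these counts
```

For the multinomial model this quadratic form coincides with Pearson's chi-squared statistic, one of the special cases noted below.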
Special cases
In many situations, the score statistic reduces to another commonly used statistic.[7]
When the data follows a normal distribution, the score statistic is the same as the t statistic.[clarification needed]
When the data consists of binary observations, the score statistic is the same as the chi-squared statistic in Pearson's chi-squared test (see the sketch below).
When the data consists of failure time data in two groups, the score statistic for the Cox partial likelihood is the same as the log-rank statistic in the log-rank test. Hence the log-rank test for difference in survival between two groups is most powerful when the proportional hazards assumption holds.
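As a hedged illustration of the binary-data case, the sketch below compares the score statistic for testing equality of two proportions with Pearson's chi-squared statistic (without continuity correction) on the corresponding 2×2 table; the counts are made-up values.

```python
# Score test for H0: p1 = p2 in two groups versus Pearson's chi-squared
# on the 2x2 table (no continuity correction).  Counts are illustrative.
from scipy.stats import chi2_contingency

x1, n1 = 30, 100      # successes / trials in group 1 (assumed)
x2, n2 = 45, 100      # successes / trials in group 2 (assumed)

# Score statistic, using the pooled proportion estimated under H0
p_pool = (x1 + x2) / (n1 + n2)
score = (x1 / n1 - x2 / n2) ** 2 / (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

# Pearson's chi-squared statistic on the corresponding 2x2 table
table = [[x1, n1 - x1], [x2, n2 - x2]]
pearson, p, dof, expected = chi2_contingency(table, correction=False)

print(score, pearson)   # both 4.8 for these counts
```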
References
- ^ Breusch, T. S.; Pagan, A. R. (1980). "The Lagrange Multiplier Test and its Applications to Model Specification in Econometrics". Review of Economic Studies. 47 (1): 239–253. JSTOR 2297111.
- ^ Rao, C. Radhakrishna (1948). "Large sample tests of statistical hypotheses concerning several parameters with applications to problems of estimation". Mathematical Proceedings of the Cambridge Philosophical Society. 44 (1): 50–57. doi:10.1017/S0305004100023987.
- ^ Silvey, S. D. (1959). "The Lagrangian Multiplier Test". Annals of Mathematical Statistics. 30 (2): 389–407. JSTOR 2237089.
- ^ Lehmann and Casella, eq. (2.5.16).
- ^ Engle, Robert F. (1983). "Wald, Likelihood Ratio, and Lagrange Multiplier Tests in Econometrics". In Intriligator, M. D.; Griliches, Z. (eds.). Handbook of Econometrics. Vol. II. Elsevier. pp. 796–801. ISBN 978-0-444-86185-6.
- ^ Gałecki, Andrzej; Burzykowski, Tomasz (2013). Linear Mixed-Effects Models Using R: A Step-by-Step Approach. New York, NY: Springer. ISBN 1461438993.
- ^ Cook, T. D.; DeMets, D. L., eds. (2007). Introduction to Statistical Methods for Clinical Trials. Chapman and Hall. pp. 296–297. ISBN 1-58488-027-9.
Further reading
- Buse, A. (1982). "The Likelihood Ratio, Wald, and Lagrange Multiplier Tests: An Expository Note". The American Statistician. 36 (3a): 153–157. doi:10.1080/00031305.1982.10482817.
- Gallant, A. Ronald (1997). An Introduction to Econometric Theory. Princeton: Princeton University Press. pp. 179–181. ISBN 0-691-01645-3.