Error analysis
Error analysis is the study of the kind and quantity of error that occurs, particularly in the fields of applied mathematics (especially numerical analysis), applied linguistics, and statistics.
Error analysis in numerical modeling
In numerical simulation or modeling of real systems, error analysis is concerned with the changes in the output of the model as the parameters to the model vary about a mean.
For instance, in a system modeled as a function of two variables $z = f(x, y)$, error analysis deals with the propagation of the numerical errors in $x$ and $y$ (around mean values $\bar{x}$ and $\bar{y}$) to error in $z$ (around a mean $\bar{z}$).[1]
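As a minimal sketch of this propagation, first-order error propagation through $z = f(x, y)$ can be computed with numerical partial derivatives; the function and the uncertainties below are illustrative assumptions, not taken from any specific model:

```python
import math

def propagate_error(f, x, y, dx, dy, h=1e-6):
    """First-order propagation of independent errors dx, dy through
    z = f(x, y), using central-difference partial derivatives."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    # Independent errors add in quadrature.
    return math.sqrt((dfdx * dx) ** 2 + (dfdy * dy) ** 2)

# Illustrative example: z = x * y with x = 2.0 +/- 0.1, y = 3.0 +/- 0.2
dz = propagate_error(lambda x, y: x * y, 2.0, 3.0, 0.1, 0.2)
```

Here the propagated error is $\sqrt{(y\,\Delta x)^2 + (x\,\Delta y)^2} = 0.5$ for the product example.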
In numerical analysis, error analysis comprises both forward error analysis and backward error analysis. Forward error analysis involves the analysis of a function $z' = f'(a_0, a_1, \dots, a_n)$ which is an approximation (usually a finite polynomial) to a function $z = f(a_0, a_1, \dots, a_n)$, to determine the bounds on the error in the approximation; i.e., to find $\epsilon$ such that $0 \le |z - z'| \le \epsilon$. Backward error analysis involves the analysis of the approximation function $z' = f'(a_0, a_1, \dots, a_n)$, to determine the bounds on the parameters $a_i = \bar{a}_i \pm \epsilon_i$ such that the result $z' = z$.[2]
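A forward error analysis can be sketched for a concrete case: a truncated Taylor polynomial approximating $e^x$, with the error bound taken from the Lagrange remainder. The choice of function, degree, and bound here is an illustrative assumption, not the only way to carry out such an analysis:

```python
import math

def exp_poly(x, n):
    """Finite polynomial approximation z' to z = e^x (Taylor series of degree n)."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x, n = 0.5, 4
z = math.exp(x)            # exact value of the function
z_approx = exp_poly(x, n)  # value of the approximating polynomial
# Forward error bound from the Lagrange remainder (valid for x > 0):
# |z - z'| <= e^x * x^(n+1) / (n+1)!
bound = math.exp(x) * x ** (n + 1) / math.factorial(n + 1)
```

For these values the actual forward error $|z - z'|$ is roughly $2.8 \times 10^{-4}$, comfortably inside the bound of roughly $4.3 \times 10^{-4}$.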
Error analysis in second language acquisition
In second language acquisition, error analysis studies the types and causes of language errors. Errors are classified[3] according to:
- modality (i.e., level of proficiency in speaking, writing, reading, listening)
- linguistic levels (i.e., pronunciation, grammar, vocabulary, style)
- form (e.g., omission, insertion, substitution)
- type (systematic errors/errors in competence vs. occasional errors/errors in performance)
- cause (e.g., interference, interlanguage)
- norm vs. system
Error analysis in SLA was established in the 1960s by Stephen Pit Corder and colleagues.[4] Error analysis was an alternative to contrastive analysis, an approach influenced by behaviorism through which applied linguists sought to use the formal distinctions between the learners' first and second languages to predict errors. Error analysis showed that contrastive analysis was unable to predict a great majority of errors, although its more valuable aspects have been incorporated into the study of language transfer. A key finding of error analysis has been that many learner errors are produced by learners making faulty inferences about the rules of the new language.
Error analysts distinguish between errors, which are systematic, and mistakes, which are not. They often seek to develop a typology of errors. Errors can be classified according to basic type: omissive, additive, substitutive or related to word order. They can be classified by how apparent they are: overt errors such as "I angry" are obvious even out of context, whereas covert errors are evident only in context. Closely related to this is the classification according to domain, the breadth of context which the analyst must examine, and extent, the breadth of the utterance which must be changed in order to fix the error. Errors may also be classified according to the level of language: phonological errors, vocabulary or lexical errors, syntactic errors, and so on. They may be assessed according to the degree to which they interfere with communication: global errors make an utterance difficult to understand, while local errors do not. In the above example, "I angry" would be a local error, since the meaning is apparent.
From the beginning, error analysis was beset with methodological problems. In particular, the above typologies are problematic: from linguistic data alone, it is often impossible to reliably determine what kind of error a learner is making. Also, error analysis can deal effectively only with learner production (speaking and writing) and not with learner reception (listening and reading). Furthermore, it cannot account for learner use of communicative strategies such as avoidance, in which learners simply do not use a form with which they are uncomfortable. For these reasons, although error analysis is still used to investigate specific questions in SLA, the quest for an overarching theory of learner errors has largely been abandoned. In the mid-1970s, Corder and others moved on to a more wide-ranging approach to learner language, known as interlanguage.
Error analysis is closely related to the study of error treatment in language teaching. Today, the study of errors is particularly relevant for focus on form teaching methodology.
Error analysis in molecular dynamics simulation
In molecular dynamics (MD) simulations, errors arise from inadequate sampling of the phase space or from infrequently occurring events; these lead to statistical error due to random fluctuation in the measurements.
For a series of $M$ measurements of a fluctuating property $A$, the mean value is:

$$\langle A \rangle = \frac{1}{M} \sum_{\mu=1}^{M} A_\mu$$

When these $M$ measurements are independent, the variance of the mean $\langle A \rangle$ is:

$$\sigma^2(\langle A \rangle) = \frac{1}{M} \sigma^2(A)$$
but in most MD simulations the values of $A$ at different times are correlated, so the variance of the mean $\langle A \rangle$ is underestimated, because the effective number of independent measurements is actually less than $M$. In such situations the variance is rewritten as:

$$\sigma^2(\langle A \rangle) = \frac{1}{M} \sigma^2(A) \left[ 1 + 2 \sum_{\mu} \left( 1 - \frac{\mu}{M} \right) \phi_\mu \right]$$

where $\phi_\mu$ is the autocorrelation function defined by

$$\phi_\mu = \frac{\langle A_\mu A_0 \rangle - \langle A \rangle^2}{\langle A^2 \rangle - \langle A \rangle^2}$$
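A corrected variance of the mean that accounts for time correlation can be sketched as follows; truncating the sum once the autocorrelation decays below zero is an assumption made here to suppress noise in the tail, not part of the definition itself:

```python
def autocorr_variance_of_mean(a):
    """Variance of the mean of a time-correlated series, correcting the
    naive sigma^2(A)/M estimate with the normalized autocorrelation."""
    M = len(a)
    mean = sum(a) / M
    var = sum((x - mean) ** 2 for x in a) / M
    if var == 0:
        return 0.0
    correction = 1.0
    for mu in range(1, M):
        # phi_mu = (<A_mu A_0> - <A>^2) / (<A^2> - <A>^2)
        cov = sum(a[i] * a[i + mu] for i in range(M - mu)) / (M - mu) - mean * mean
        phi = cov / var
        if phi <= 0:  # truncate once correlations have decayed into noise
            break
        correction += 2.0 * (1.0 - mu / M) * phi
    return var / M * correction
```

For positively correlated data the corrected estimate exceeds the naive $\sigma^2(A)/M$, reflecting the reduced number of effectively independent samples.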
The autocorrelation function can then be used to estimate the error bar. In practice, a much simpler method based on block averaging is often used instead.[5]
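The block-averaging idea can be sketched in a few lines: the series is partitioned into non-overlapping blocks, and for block sizes longer than the correlation time the block means are approximately independent, so the ordinary standard error formula applies to them:

```python
def block_average_error(a, block_size):
    """Estimate the error bar on <A> from the scatter of non-overlapping
    block means; valid once blocks are longer than the correlation time."""
    n_blocks = len(a) // block_size
    blocks = [sum(a[i * block_size:(i + 1) * block_size]) / block_size
              for i in range(n_blocks)]
    mean = sum(blocks) / n_blocks
    var = sum((b - mean) ** 2 for b in blocks) / (n_blocks - 1)
    return (var / n_blocks) ** 0.5  # standard error of the mean
```

In use, the estimate is computed for increasing block sizes; it grows while blocks remain correlated and plateaus once they become effectively independent, and the plateau value is taken as the error bar.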
See also
- Error bar
- Errors and residuals in statistics
- For error analysis in applied linguistics, see contrastive analysis
- Propagation of uncertainty
References
- ^ James W. Haefner (1996). Modeling Biological Systems: Principles and Applications. Springer. pp. 186–189. ISBN 0412042010.
- ^ Francis J. Scheid (1988). Schaum's Outline of Theory and Problems of Numerical Analysis. McGraw-Hill Professional. p. 11. ISBN 0070552215.
- ^ Cf. Bussmann, Hadumod (1996), Routledge Dictionary of Language and Linguistics, London: Routledge, s.v. error analysis. A comprehensive bibliography was published by Bernd Spillner (1991), Error Analysis, Amsterdam/Philadelphia: Benjamins.
- ^ Corder, S. P. (1967). "The significance of learners' errors". International Review of Applied Linguistics. 5: 160–170.
- ^ D. C. Rapaport, The Art of Molecular Dynamics Simulation, Cambridge University Press.