Computer experiment

In a computer experiment a computer model is used to make inferences about some underlying system. The idea is that the computer model takes the place of an experiment we cannot do: the phrase in silico experiment is also used. At the moment, for example, the debate on climate change is being informed largely by evaluations of climate simulators (running on some of the largest computers in the world), which are being used to investigate the impact of a substantial and prolonged increase in the atmospheric concentration of greenhouse gases such as carbon dioxide. These kinds of experiments are at the cutting edge of E-Science.

Computer experiments are fundamentally a branch of Applied Statistics, because the user must account for three different sources of uncertainty. First, the models often contain parameters whose values are not certain; second, the models themselves are imperfect representations of the underlying system; and third, data collected from the system that might be used to calibrate the models are imperfectly measured. However, it is fair to say that most practitioners of computer experiments do not see themselves as statisticians.
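These three sources of uncertainty are often combined in a single statistical formulation. As an illustrative sketch (the notation is not from this article and varies between authors), an imperfectly measured observation z_i of the system at input x_i can be written as

    z_i = \eta(x_i, \theta) + \delta(x_i) + \varepsilon_i,

where \eta(\cdot, \theta) is the computer model evaluated with uncertain parameters \theta, \delta(\cdot) is the model discrepancy accounting for the model being an imperfect representation of the system, and \varepsilon_i is the measurement error in the system data. Calibration then amounts to learning about \theta (and \delta) from the observations.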

The first computer experiments were probably conducted at Los Alamos, to study the behaviour of nuclear weapons. Since then, the use of computer models has branched out into large parts of the physical and environmental sciences (where they are sometimes referred to as process models) and into medicine. Because computer experiments have developed in such a wide range of applications, there is little standardisation of the terminology. As a general guide, in this article learning about the model parameters using data from the system is referred to as (model) calibration, while learning about the system behaviour itself is referred to as (system) prediction. Combining both of these, i.e. using the model and system data together to make predictions about the system, is referred to as calibrated prediction; a minimal sketch of this distinction is given below. Other terminology is discussed in the section The "traditional" approach.
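The following sketch illustrates the distinction between calibration and calibrated prediction on a deliberately simple, hypothetical example. The toy simulator, the data values and the least-squares fit are assumptions made purely for illustration; real computer experiments involve far more expensive models and a fuller treatment of the uncertainties listed above.

```python
# Illustrative sketch only: a hypothetical toy simulator and made-up data.
import numpy as np
from scipy.optimize import minimize

def simulator(x, theta):
    """Toy computer model: response of the system at input x for parameter theta."""
    return theta * np.sin(x) + 0.1 * x

# Imperfectly measured observations of the real system (hypothetical values).
x_obs = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
z_obs = np.array([0.62, 1.02, 1.18, 1.08, 0.85])

# (Model) calibration: learn the model parameter from the system data.
def misfit(theta):
    return np.sum((z_obs - simulator(x_obs, theta[0])) ** 2)

theta_hat = minimize(misfit, x0=[1.0]).x[0]

# Calibrated (system) prediction: use the calibrated model at new inputs.
x_new = np.linspace(0.0, 3.0, 7)
z_pred = simulator(x_new, theta_hat)

print("calibrated parameter:", theta_hat)
print("calibrated predictions:", z_pred)
```

A fully probabilistic treatment would also carry the parameter uncertainty, the model discrepancy and the measurement error through to the predictions, rather than relying on a single fitted parameter value.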

The model, the simulator, and the system

Sources of uncertainty

The "traditional" approach

The probabilistic or Bayesian approach

Challenges in large computer experiments