User:Kdabug/sandbox

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Kdabug (talk | contribs) at 20:10, 15 December 2018. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

EDITING FROM ARTICLE "FIELD EXPERIMENTS"


A field experiment applies the scientific method to examine researcher-designed interventions in real-world environments rather than in a laboratory setting. Field experiments, like lab experiments, generally randomize subjects (or other sampling units) into treatment and control groups and use the information gathered from both groups to test claims of causal relationships. Field experiments allow researchers to test context-specific hypotheses while minimizing reliance on assumptions. Field experiments are based on the notion that empirical research will show whether exogenous variables (also called independent variables) have an effect on a different (dependent) variable; this hypothesis comes from an a priori judgment (Harrison and List, https://www.jstor.org/stable/3594915).


The term 'field' in 'field experiment' is a defining feature: it allows researchers to see how subjects respond to interventions in environments that accurately reflect the real-world distribution of treatment outcomes. This is in contrast to laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Field experiments also differ contextually from naturally occurring experiments and quasi-experiments. Naturally occurring experiments arise when an external force (e.g. a government, nonprofit, etc.) decides the randomization that sorts subjects into the treatment group, whereas field experiments require researchers to control the randomization and implementation of the treatment. Quasi-experiments, in turn, occur when no particular entity has the authority to separate subjects into treated and control groups. Examples include U.S. Congressional districts where candidates win with slim margins (seemingly random behavior separates near-winners from near-losers) (Lee 2008), weather patterns, natural disasters, and other near-random phenomena.

Field experiments encompass a broad array of experimental designs, each with varying degrees of generality. Some criteria of generality (e.g. authenticity of treatments, participants, contexts, and outcome measures) refer to the contextual similarities between the subjects in the experimental sample and the rest of the population. This flexibility allows field experiments to be used in a number of settings; however, they are most often used in the social sciences, especially in economic analyses of education and health interventions.

Examples of diverse field experiments include:


Characteristics of Field Experiments

In a randomized field experiment, researchers separate participants into two or more groups: one or more treatment groups and a control group. Members of the treatment group(s) receive the intervention being evaluated, while the control group does not. Field experiment researchers assume that each subject has potential outcomes under both treatment and control, and that the causal effect of the treatment for a subject is the difference between these two outcomes. Since only one outcome can be observed for each subject, researchers use field experiments to construct an unbiased estimator of the average treatment effect for a random sample of the population. Often, researchers estimate the difference between the mean observed outcomes of the treated and control groups (called the difference-in-means estimator); however, determining whether this (or any) estimator is unbiased requires examining the design, randomization, and implementation of the intervention. This means that not only should the subjects be a random subset of the population, but also that the subjects need to be randomly assigned to the treated or control groups.
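The randomization and estimation steps above can be sketched in a few lines of Python. This is an illustrative toy, not a production analysis pipeline: the outcome numbers are hypothetical, and the `randomize` and `difference_in_means` helpers are names invented here for illustration.

```python
import random
import statistics

def randomize(subjects, seed=0):
    """Randomly assign subjects to treatment or control (complete randomization)."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment group, control group)

def difference_in_means(treated_outcomes, control_outcomes):
    """Difference-in-means estimator of the average treatment effect."""
    return statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)

# Hypothetical observed outcomes after the intervention
treated = [5.1, 6.0, 5.8, 6.3]
control = [4.9, 5.2, 5.0, 5.3]
effect = difference_in_means(treated, control)
```

The estimator is unbiased only because assignment is random; with self-selected groups the same arithmetic would conflate the treatment effect with pre-existing differences.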

Along with randomization of subjects into treated and untreated groups, two other core assumptions underlie the researcher's ability to collect unbiased potential outcomes: excludability and non-interference. The excludability assumption provides that the only relevant causal agent is the receipt of the treatment; asymmetries in the assignment, administration, or measurement of treatment and control groups violate this assumption. The non-interference assumption, or Stable Unit Treatment Value Assumption (SUTVA), holds that the value of the outcome depends only on whether the subject is assigned the treatment, not on whether other subjects are assigned to it. When these three core assumptions are met, researchers are more likely to provide unbiased estimates through field experiments.

After designing the field experiment and gathering the data, researchers can use statistical inference tests to determine the size and strength of the intervention's effect on the subjects. Field experiments allow researchers to collect data of diverse types and quantities. For example, a researcher could design an experiment that uses pre- and post-trial measurements in an appropriate statistical inference method to see whether an intervention affects subject-level changes in outcomes.
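As a minimal sketch of the pre/post idea, one common approach compares subject-level change scores across groups (a difference-in-differences style contrast). All numbers below are hypothetical, and `change_scores` is a helper named here for illustration only.

```python
import statistics

def change_scores(pre, post):
    """Subject-level change in outcome between pre- and post-trial measurements."""
    return [after - before for before, after in zip(pre, post)]

# Hypothetical pre/post measurements for treated and control subjects
treated_change = change_scores(pre=[10, 12, 11], post=[14, 15, 13])
control_change = change_scores(pre=[10, 11, 12], post=[11, 12, 12])

# Contrast of mean changes: how much more the treated group moved
# than the control group did over the same period
estimate = statistics.mean(treated_change) - statistics.mean(control_change)
```

Using each subject's own baseline absorbs stable subject-level differences, which typically tightens the estimate relative to a single post-trial comparison.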


Practical Uses

Field experiments are seen by some academics as a rigorous way of testing general theories about economic and political behavior. They have gained popularity because they allow researchers to guard against selection bias: in non-experimental settings, the group receiving a development intervention is likely to differ from a group that is not receiving it, whether because characteristics make some people more likely to opt into a program or because of program targeting.

Development economists have used field experiments to measure the effectiveness of poverty and health programs in developing countries. Organizations such as the Abdul Latif Jameel Poverty Action Lab (J-PAL) at the Massachusetts Institute of Technology, the Center of Evaluation for Global Action at the University of California, Berkeley, and Innovations for Poverty Action (IPA) have received particular attention for their use of randomized field experiments to evaluate development programs. The aim of field experiments in development research is to find causal relationships between policy interventions and development outcomes.

  • benchmarking
  • generalizing outside of the subject pool
  • generalizing the complier average causal effect (LATE)
  • adaptive design
  • machine learning


Limitations

There are limitations of, and arguments against, using field experiments in place of other research designs (e.g. lab experiments, survey experiments, observational studies). Some academics dispute the claim that findings from field experiments are sufficient for establishing and testing theories about behavior. In particular, a hotly contested issue is the external validity of field experiments.[1] Given that field experiments necessarily take place in a specific geographic and political setting, the extent to which their findings can be extrapolated into a general theory of economic behavior is a concern.

There are some logistical limitations to data collection and unbiased statistical estimation in field experiments, and some scholars argue that these limitations make lab experiments a superior option for finding causal relationships. One difficulty, one-sided noncompliance, occurs when subjects who are assigned to treatment never receive it, either because they are hard to reach or because they refuse treatment. In two-sided noncompliance, subjects assigned to the control group mistakenly receive the treatment as well. Another data-collection problem is attrition (where treated subjects do not provide outcome data), which, under certain conditions, can also bias the collected data. These problems can lead to imprecise data analysis; however, researchers who use field experiments can apply statistical methods to calculate useful information even when these difficulties occur.
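One standard statistical response to noncompliance can be sketched as follows: compare groups as assigned (the intention-to-treat effect) and then rescale by the difference in treatment take-up to recover the complier average causal effect. The data below are hypothetical, and `itt_effect` and `wald_estimator` are helper names invented for this sketch.

```python
import statistics

def itt_effect(assigned_treat_outcomes, assigned_control_outcomes):
    """Intention-to-treat effect: compare groups as assigned, ignoring compliance."""
    return (statistics.mean(assigned_treat_outcomes)
            - statistics.mean(assigned_control_outcomes))

def wald_estimator(itt, take_up_assigned, take_up_control):
    """Complier average causal effect: ITT scaled by the difference in take-up."""
    return itt / (take_up_assigned - take_up_control)

# Hypothetical one-sided noncompliance: 80% of the assigned-to-treatment group
# actually received treatment; no one in the control group did.
itt = itt_effect([6.0, 5.5, 6.5, 5.0], [5.0, 5.5, 4.5, 5.0])
cace = wald_estimator(itt, take_up_assigned=0.8, take_up_control=0.0)
```

Comparing groups as assigned preserves the randomization, while naively comparing only those who actually took the treatment would reintroduce selection bias.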

Similarly, conducting a field experiment can raise concerns about interference between subjects. When a treated subject or group affects the outcomes of untreated groups (through contagion, displacement, communication, social comparison, deterrence, etc.), the untreated groups might not exhibit the true untreated outcome. A subset of interference is the spillover effect, which occurs when the treatment of treated groups affects neighboring untreated groups. There is also the concern of heterogeneous treatment effects, which occur when different types of subjects react to treatment differently; this creates treatment variability and can present a problem when subjects who respond similarly are clustered together. While different methods are used to address these concerns, a popular one is a block design, which stratifies subjects based on observable traits and randomizes treatment and control within those strata.
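Block (stratified) randomization as described above can be sketched as follows: group subjects by an observable trait, then randomize within each group so every stratum contributes to both arms. The subjects and the 'urban'/'rural' trait are hypothetical, and `block_randomize` is a name invented for this sketch.

```python
import random
from collections import defaultdict

def block_randomize(subjects, stratum_of, seed=0):
    """Stratify subjects by an observable trait, then randomize within each stratum."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for subject in subjects:
        blocks[stratum_of(subject)].append(subject)
    treatment, control = [], []
    for members in blocks.values():
        rng.shuffle(members)           # randomize within the stratum
        half = len(members) // 2
        treatment.extend(members[:half])
        control.extend(members[half:])
    return treatment, control

# Hypothetical subjects tagged with an observable trait
subjects = [("a", "urban"), ("b", "urban"), ("c", "rural"), ("d", "rural")]
treat, ctrl = block_randomize(subjects, stratum_of=lambda s: s[1])
```

Because treatment is balanced within each stratum, the trait used for blocking cannot confound the comparison, and estimates are typically more precise than under complete randomization.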

Some limitations also apply to the design and implementation of field experiments. Field experiments are expensive and time-consuming to conduct, and if not planned carefully beforehand they might miss crucial data or return biased estimates. At any point in the implementation, the researcher can face problems that (if left uncorrected) can ruin the entire process. The researcher must attend not only to the randomization of subjects into groups, but also to how subjects will perceive the fairness of that randomization (e.g. in 'negative income tax' experiments, communities may lobby for their community to receive a cash transfer, so the assignment is not purely random). Similarly, those who implement the randomization could contaminate the randomization scheme. Ethical considerations can also factor into experimental designs: a field experiment can adopt a "stepped-wedge" design that eventually gives the entire sample access to the intervention on different timing schedules. Researchers can also design a blinded field experiment to remove possibilities of manipulation.

Replicability is also difficult: field experiments often require special access or permission, or technical detail (e.g., instructions for precisely how to replicate a field experiment are rarely, if ever, available in economics). There is also a limit on a researcher's ability to obtain the informed consent of all participants. Field testing is always less controlled than laboratory testing, which increases the variability of the types and magnitudes of stress in field testing. The resulting data are therefore more varied: larger standard deviation, less precision and accuracy, etc. This leads to the use of larger sample sizes for field testing. However, others argue that, even though replicability is difficult, if the results of an experiment are important, there is a larger chance that it will be replicated.

Noteworthy Field Experiments

The history of experiments in the lab and the field has had a lasting impact on the physical, natural, and life sciences.

Field experiments in the physical sciences and clinical research: Geology has a long history of field experiments, since the time of Avicenna,[citation needed] while field experiments in anthropology date back to Biruni's study of India.[2] Social psychology also has a history of field experiments, including work by pioneering figures Philip Zimbardo, Kurt Lewin and Stanley Milgram. In the 1700s, James Lind utilized a controlled field experiment to identify a treatment for scurvy using different interventions.[3]


Field experiments in economics: In economics, Peter Bohm of Stockholm University was one of the first economists to take the tools of experimental economic methods and apply them to field subjects. In development economics, the pioneering work of Hans Binswanger, who conducted experiments on risk behavior in India in the late 1970s, should also be noted.[1] The use of field experiments in economics has grown recently with the work of John A. List, Jeff Carpenter, Juan-Camilo Cardenas, Abigail Barr, Catherine Eckel, Michael Kremer, Paul Gertler, Glenn Harrison, Colin Camerer, Bradley Ruffle, Abhijit Banerjee, Esther Duflo, Dean Karlan, Edward "Ted" Miguel, Sendhil Mullainathan, and David H. Reiley, among others.

Field experiments in political science:



See also


References

  1. ^ Duflo, Esther. "Field Experiments in Development Economics." http://econ-www.mit.edu/files/800
  2. ^ Ahmed, Akbar S. (2009). The First Anthropologist. Rain: RAI.
  3. ^ Tröhler, U. (2003). "James Lind and scurvy: 1747 to 1795." JLL Bulletin: Commentaries on the history of treatment evaluation. http://www.jameslindlibrary.org/articles/james-lind-and-scurvy-1747-to-1795/


Category:Design of experiments Category:Tests Category:Causal inference Category:Mathematical and quantitative methods (economics)