
User:Kdabug/sandbox


EDITING FROM ARTICLE "FIELD EXPERIMENTS"


A field experiment applies the scientific method to experimentally examine researcher-designed interventions in real-world environments rather than in a laboratory setting. Like laboratory experiments, field experiments generally randomize subjects (or other sampling units) into treatment and control groups and use information gathered from both groups to test claims about causal relationships. Field experiments allow for context-specific hypotheses and minimize reliance on assumptions. They rest on the notion that empirical research can show whether exogenous (independent) variables have an effect on a dependent variable of interest, with the hypothesis specified a priori, before the data are examined (Harrison and List, https://www.jstor.org/stable/3594915).
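The basic logic of randomization and difference-in-means estimation can be illustrated with a short simulation. The sketch below uses entirely hypothetical, simulated data (it is not drawn from any study discussed in this article): it randomly assigns subjects to treatment or control and estimates the treatment effect as the difference in mean outcomes between the two groups.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Randomly assign half of the hypothetical subjects to treatment
    treat = rng.permutation(np.repeat([0, 1], n // 2))

    # Simulated outcomes: a noisy baseline plus a true treatment effect of 2.0
    baseline = rng.normal(loc=10.0, scale=3.0, size=n)
    outcome = baseline + 2.0 * treat

    # Difference-in-means estimate of the average treatment effect
    ate_hat = outcome[treat == 1].mean() - outcome[treat == 0].mean()
    print(f"Estimated average treatment effect: {ate_hat:.2f}")  # close to 2.0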


The term 'field' in 'field experiment' is a defining feature: it allows researchers to see how subjects respond to interventions in environments that accurately reflect the real-world distribution of treatment outcomes. This is in contrast to laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Field experiments also differ from naturally occurring experiments and quasi-experiments. Naturally occurring experiments arise when an external force (e.g. a government or nonprofit) determines the randomization that sorts subjects into the treatment group, whereas field experiments require researchers to control both the randomization and the implementation of the treatment. Quasi-experiments, by contrast, occur when no particular entity has the authority to separate groups into treated and control subjects; examples include U.S. Congressional districts where candidates win by slim margins (seemingly random behavior separates near-winners from near-losers) (Lee 2008), weather patterns, natural disasters, and other near-random phenomena.

Field experiments encompass a broad array of experimental designs, each with varying degrees of generality. The criteria of generality (i.e. the authenticity of treatments, participants, contexts, and outcome measures) refer to the contextual similarities between the subjects in the experimental sample and the rest of the population. This flexibility allows field experiments to be used in a wide range of settings; they are especially common in the social sciences, notably in economic analyses of education and health interventions.

Examples of diverse field experiments include:

History

The use of experiments in the lab and the field has a long history in the physical, natural, and life sciences. Geology has a long history of field experiments, since the time of Avicenna,[citation needed] while field experiments in anthropology date back to Biruni's study of India.[1] Social psychology also has a history of field experiments, including work by pioneering figures Philip Zimbardo, Kurt Lewin and Stanley Milgram. In economics, Peter Bohm of the University of Stockholm was one of the first economists to take the tools of experimental economic methods and apply them with field subjects. In development economics, the pioneering work of Hans Binswanger, who conducted experiments in India on risk behavior in the late 1970s, should also be noted.[1] The use of field experiments in economics has grown recently with the work of John A. List, Jeff Carpenter, Juan-Camilo Cardenas, Abigail Barr, Catherine Eckel, Michael Kremer, Paul Gertler, Glenn Harrison, Colin Camerer, Bradley Ruffle, Abhijit Banerjee, Esther Duflo, Dean Karlan, Edward "Ted" Miguel, Sendhil Mullainathan, and David H. Reiley, among others.


TO EDIT

  • change intro to include social science
  • -- nature of experiments (fair tests, naturally occurring vs. quasi-experiments, and real-world vs. lab settings)
  • add sections (to give brief and referenced overviews of sections of field experiments)
  • -- causal inference (assumptions, random assignment, random sampling, average treatment effects, potential outcomes)
  • -- sampling distributions and statistical methods of inference used in experiments, random assignment methods (cluster and block designs)
  • -- covariate relations in field experiments (Covariate adjustment: Saturation and automated covariate selection)
  • -- mediators and designing experiments (designs to assess causal mechanisms)
  • Practical uses
  • -- benchmarking
  • -- generalizing outside of the subject pool
  • -- generalizing the complier average causal effect (LATE)
  • -- adaptive design
  • change caveats to "disadvantages and common problems"
  • --selection bias (leveraging panel data to improve precision)
  • --one sided noncompliance
  • --two sided noncompliance
  • --attrition
  • --spillover effects
  • --interference (designs for detection of interference)
  • --heterogeneous treatment effects
  • ---Machine learning to detect heterogeneous treatment effects
  • --ethics


Causal Inference
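The notions listed in the editing notes above (potential outcomes, random assignment, average treatment effects) can be summarized in the standard potential-outcomes notation; this is a general textbook formulation rather than a result from any study cited here. Each unit i has two potential outcomes, Y_i(1) if treated and Y_i(0) if not, and the estimand is the average treatment effect:

    \mathrm{ATE} = E\left[\, Y_i(1) - Y_i(0) \,\right]

    \widehat{\mathrm{ATE}} = \frac{1}{n_1}\sum_{i\,:\,D_i=1} Y_i \;-\; \frac{1}{n_0}\sum_{i\,:\,D_i=0} Y_i

where D_i is the randomly assigned treatment indicator and n_1 and n_0 are the treatment and control group sizes; random assignment makes the difference in group means an unbiased estimator of the ATE.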

sampling distributions and statistical methods of inference used in experiments, random assignment methods
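For illustration, the random assignment methods flagged in the editing notes above (complete randomization and block, or stratified, randomization) can be sketched as follows; the data, block labels, and function names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(42)

    def complete_randomization(n, n_treated):
        """Randomly assign exactly n_treated of n units to treatment."""
        assignment = np.zeros(n, dtype=int)
        assignment[:n_treated] = 1
        return rng.permutation(assignment)

    def block_randomization(blocks, share_treated=0.5):
        """Randomize separately within each block (e.g. village or school)."""
        blocks = np.asarray(blocks)
        assignment = np.zeros(len(blocks), dtype=int)
        for b in np.unique(blocks):
            idx = np.where(blocks == b)[0]
            n_treated = int(round(share_treated * len(idx)))
            assignment[idx] = complete_randomization(len(idx), n_treated)
        return assignment

    # Hypothetical example: 6 units spread across 2 blocks
    print(block_randomization(["A", "A", "A", "B", "B", "B"]))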

covariate relations in field experiments
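For illustration, the simplest form of covariate adjustment, regressing the outcome on the treatment indicator together with a pre-treatment covariate to improve precision, can be sketched as follows; all variables are simulated and hypothetical.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 5_000

    # Simulated pre-treatment covariate, random assignment, and outcome
    covariate = rng.normal(size=n)
    treat = rng.integers(0, 2, size=n)
    outcome = 1.5 * treat + 2.0 * covariate + rng.normal(size=n)  # true effect = 1.5

    # Unadjusted difference in means
    unadjusted = outcome[treat == 1].mean() - outcome[treat == 0].mean()

    # Covariate-adjusted estimate via ordinary least squares
    X = np.column_stack([np.ones(n), treat, covariate])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    print(f"Unadjusted estimate: {unadjusted:.2f}")
    print(f"Adjusted estimate:   {beta[1]:.2f}")  # coefficient on the treatment indicator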

mediators and designing experiments

Caveats

  • Fairness of randomization (e.g. in 'negative income tax' experiments communities may lobby for their community to get a cash transfer so the assignment is not purely random)
  • Contamination of the randomization
  • General equilibrium and "scaling-up"
  • Difficulty of replicability (field experiments often require special access or permission, or technical detail—e.g., the instructions for precisely how to replicate a field experiment are rarely if ever available in economics)
  • Limits on ability to obtain informed consent of participants
  • Field testing is always less controlled than laboratory testing, which increases the variability of the types and magnitudes of stress in field testing. The resulting data are therefore more varied (larger standard deviation, less precision and accuracy), which leads to the use of larger sample sizes for field testing (see the sketch below).
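To make the last point concrete, a standard two-sample power calculation (with a hypothetical effect size and standard deviations, not drawn from any study discussed here) shows how a noisier field environment drives up the required sample size:

    import math
    from scipy.stats import norm

    def sample_size_per_group(effect, sd, alpha=0.05, power=0.8):
        """Approximate n per group needed to detect a mean difference of `effect`
        between two equal-sized groups whose outcome standard deviation is `sd`."""
        z_alpha = norm.ppf(1 - alpha / 2)  # two-sided test
        z_beta = norm.ppf(power)
        return math.ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

    # Hypothetical numbers: doubling the outcome standard deviation
    # roughly quadruples the required sample size per group.
    print(sample_size_per_group(effect=1.0, sd=2.0))  # 63
    print(sample_size_per_group(effect=1.0, sd=4.0))  # 252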

Practical Uses of Experimental Design

  • -- benchmarking
  • -- generalizing outside of the subject pool
  • -- generalizing the complier average causal effect (LATE)
  • -- adaptive design

Field Experiments in International Development Research

Development economists have used field experiments to measure the effectiveness of poverty and health programs in developing countries. Organizations such as the Abdul Latif Jameel Poverty Action Lab (J-PAL) at the Massachusetts Institute of Technology, the Center of Evaluation for Global Action at the University of California, and Innovations for Poverty Action (IPA) have received particular attention for their use of randomized field experiments to evaluate development programs. The aim of field experiments in development research is to identify causal relationships between policy interventions and development outcomes. Some academics see field experiments as a rigorous way of testing general theories about economic and political behavior, and more recently political scientists have used them to study political behavior, institutional dynamics, and conflict in the developing world.[2]

In a randomized field experiment on an international development intervention, researchers would separate participants into two or more groups: a treatment group (or groups) and a control group. Members of the treatment group(s) then receive a particular development intervention being evaluated while the control group does not. (Often the control group receives the intervention later in the roll out of the study.) Field experiments have gained popularity in the field because they allow researchers to guard against selection bias, a problem present in many current studies of development interventions. Selection bias refers to the fact that, in non-experimental settings, the group receiving a development intervention is likely different from a group that is not receiving the intervention. This may occur because of characteristics that make some people more likely to opt into a program, or because of program targeting. Some academics dispute the claim that findings from field experiments are sufficient for establishing and testing theories about behavior. In particular, a hotly contested issue with regards to field experiments is their external validity.[3] Given that field experiments necessarily take place in a specific geographic and political setting, the extent to which findings can be extrapolated to formulate a general theory regarding economic behavior is a concern.
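The selection-bias problem described above can be illustrated with a small simulation (entirely hypothetical numbers, not from any study cited here): when people with better underlying prospects are more likely to opt into a program, a naive comparison of participants and non-participants overstates the program's effect, while randomized assignment recovers it.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    true_effect = 1.0

    # Hypothetical underlying prospects (unobserved by the researcher)
    prospects = rng.normal(size=n)

    # Self-selection: people with better prospects are more likely to opt in
    opt_in = rng.random(n) < 1 / (1 + np.exp(-prospects))
    outcome_obs = prospects + true_effect * opt_in + rng.normal(size=n)
    naive = outcome_obs[opt_in].mean() - outcome_obs[~opt_in].mean()

    # Randomized assignment breaks the link between prospects and treatment
    assigned = rng.random(n) < 0.5
    outcome_rct = prospects + true_effect * assigned + rng.normal(size=n)
    experimental = outcome_rct[assigned].mean() - outcome_rct[~assigned].mean()

    print(f"Naive comparison:      {naive:.2f}")        # biased well above 1.0
    print(f"Experimental estimate: {experimental:.2f}")  # close to the true effect of 1.0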


See also


References

  1. ^ Ahmed, Akbar S. (2009). "The First Anthropologist". RAIN: RAI.
  2. ^ Humphreys, Macartan; Weinstein, Jeremy M. (June 2009). "Field Experiments and the Political Economy of Development". Annual Review of Political Science 12: 367–378. http://www.columbia.edu/~mh2245/papers1/HW_ARPS09.pdf
  3. ^ Duflo, Esther. "Field Experiments in Development Economics". http://econ-www.mit.edu/files/800


Category:Design of experiments Category:Tests Category:Causal inference Category:Mathematical and quantitative methods (economics)