
Effect size

From Wikipedia, the free encyclopedia

In statistics, effect size is a measure of the strength of the relationship between two variables. In scientific experiments, it is often useful to know not only whether an experiment has a statistically significant effect, but also the size of any observed effects. In practical situations, effect sizes are helpful for making decisions. Effect size measures are the common currency of meta-analysis studies that summarize the findings from a specific area of research.

Summary

The concept of effect size appears in everyday language. For example, a weight loss program may boast that it leads to an average weight loss of 30 pounds. In this case, 30 pounds is an indicator of the claimed effect size. Another example is that a tutoring program may claim that it raises school performance by one letter grade. This grade increase is the claimed effect size of the program.

An effect size is best explained through an example: if you had no previous exposure to humans and one day visited England, how many people would you need to see before you could conclude that, on average, men are taller than women there? The answer relates to the effect size of the difference in average height between men and women. The larger the effect size, the easier it is to see that men are taller. If the height difference were small, you would need to know the heights of many men and women to notice that (on average) men are taller than women. This example is developed further below.

In inferential statistics, an effect size helps to determine whether a statistically significant difference is also a difference of practical concern. Given a sufficiently large sample size, it is always possible to show a statistically significant difference between two means, however small that difference is. The effect size indicates whether an observed difference is large enough to matter. Effect size, sample size, critical significance level (α), and power in statistical hypothesis testing are related: any one of these values can be determined given the others. In meta-analysis, effect sizes are used as a common measure that can be calculated for different studies and then combined into an overall analysis.
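As an illustrative sketch of this relationship, any three of the four quantities determine the fourth. The example below assumes the statsmodels library, which the article does not mention, to solve for the sample size needed to detect a given effect:

```python
# A sketch of the effect size / sample size / alpha / power relationship,
# assuming the statsmodels library (an assumption, not from the article).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Solve for the sample size per group needed to detect a medium effect
# (d = 0.5) at alpha = 0.05 with 80% power, two-sided.
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group: {n:.0f}")  # ~64
```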

The term effect size most often refers to a statistic computed from a sample. However, like the term variance, it can denote either a population parameter or a sample statistic, depending on context. In inferential statistics, the population parameter is fixed across replications or experiments and is estimated with a confidence interval, whereas the sample statistic varies from replication to replication and typically converges to the corresponding population parameter as the sample size increases. Conventionally, Greek letters denote population parameters and Latin letters denote sample statistics, but most named effect sizes do not make this distinction explicit. Accordingly, Cumming & Finch (2001) advised using Cohen's δ to denote the population parameter corresponding to Cohen's d.

The term effect size is most commonly used to describe standardized measures of effect (e.g., r, Cohen's d, the odds ratio). However, unstandardized measures (e.g., the raw difference between group means, or unstandardized regression coefficients) can equally serve as effect size measures. Standardized effect size measures are typically used when the metrics of the variables being studied have no intrinsic meaning to the reader (e.g., a score on a personality test measured on an arbitrary scale), or when results from multiple studies using different scales are being combined. The recommendation of Wilkinson & the APA Task Force on Statistical Inference (1999, p. 599) to "always present effect sizes for primary outcomes" is sometimes misread as requiring standardized measures such as Cohen's d. In fact, immediately after that sentence the authors added: "If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d)."

Presentation of effect sizes with confidence intervals is strongly recommended in biological journals.[1] Biologists are ultimately interested in biological importance, which is assessed by the magnitude of an effect rather than by its statistical significance. Combined use of an effect size and its confidence interval allows the relationships within data to be assessed more effectively than p values alone, regardless of statistical significance. Routine presentation of effect sizes also encourages researchers to view their results in the context of previous research and facilitates the incorporation of results into future meta-analyses. However, publication bias towards statistically significant results, coupled with inadequate statistical power, leads to overestimation of effect sizes, which in turn affects meta-analyses and power analyses.[2]

Types

Pearson r correlation

Pearson's r correlation, introduced by Karl Pearson, is one of the most widely used effect sizes. It can be used when the data are continuous or binary; thus the Pearson r is arguably the most versatile effect size. This was the first important effect size to be developed in statistics. Pearson's r can vary in magnitude from -1 to 1, with -1 indicating a perfect negative linear relation, 1 indicating a perfect positive linear relation, and 0 indicating no linear relation between two variables. Cohen (1988, 1992) gives the following guidelines for the social sciences: small effect size, r = 0.1; medium, r = 0.3; large, r = 0.5.

Another often-used measure of the strength of the relationship between two variables is the coefficient of determination (the square of r, referred to as "r-squared"). This is a measure of the proportion of variance shared by the two variables, and varies from 0 to 1. An r² of 0.21 means that 21% of the total variance is shared by the two variables.
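A minimal sketch of this calculation (the paired data below are hypothetical, chosen only for illustration):

```python
# A minimal sketch of computing Pearson's r and r-squared from paired data.
# The observations here are hypothetical, purely for illustration.
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Covariance numerator and the two sum-of-squares terms.
cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
ss_x = sum((xi - mean_x) ** 2 for xi in x)
ss_y = sum((yi - mean_y) ** 2 for yi in y)

r = cov / math.sqrt(ss_x * ss_y)
print(f"r = {r:.3f}, r-squared = {r**2:.3f}")
```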

Cohen's d

Cohen's d may be an appropriate effect size measure to use in the context of a t-test on means. d is defined as the difference between two means divided by the pooled standard deviation for those means. Thus, in the case where both samples are the same size,

$$d = \frac{\mathrm{mean}_1 - \mathrm{mean}_2}{\sigma_{\mathrm{pooled}}}, \qquad \sigma_{\mathrm{pooled}} = \sqrt{\frac{SD_1^2 + SD_2^2}{2}}$$

where mean_i and SD_i are the mean and standard deviation for group i, for i = 1, 2.

Different people offer different advice on interpreting the resulting effect size, but the most widely accepted guideline is Cohen's (1992): 0.2 indicates a small effect, 0.5 a medium effect, and 0.8 a large effect.

So, in the example above of visiting England and observing men's and women's heights, the data (Aaron, Kromrey, & Ferron, 1998, November; from a 2004 UK representative sample of 2436 men and 3311 women) are:

  • Men: Mean Height = 1750 mm; Standard Deviation = 89.93 mm
  • Women: Mean Height = 1612 mm; Standard Deviation = 69.05 mm

The effect size (using Cohen's d) would equal 1.72 (95% confidence interval: 1.66 to 1.78). This is very large, and you should have no problem detecting a consistent average height difference between men and women.
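A minimal sketch of this calculation, applying the equal-sample-size formula above to the height data from this example (the function name is illustrative):

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d using the equal-sample-size pooled SD from the text."""
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)
    return (mean1 - mean2) / pooled_sd

# Height data from the example above (in mm).
d = cohens_d(1750, 89.93, 1612, 69.05)
print(f"Cohen's d = {d:.2f}")  # ~1.72, matching the article's figure
```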

One point worth noting is that in some cases it may be wise to use just one of the standard deviations (e.g., the pre-treatment standard deviation in a therapeutic trial). Either way, note that sample size plays no part in the calculation, a point noted by Hedges.

Another way of calculating effect size is to subtract one mean from the other (ignoring the sign) and then divide the result by the mean of the two standard deviations.

Hedges' ĝ

Hedges and Olkin (1985) noted that effect size estimates can be adjusted to take sample size into account. A limitation of Cohen's d is that the outcome is heavily influenced by the denominator: if one standard deviation is larger than the other, the denominator is weighted in that direction, while unequal sample sizes are ignored. Hedges' ĝ incorporates sample size by computing a denominator that weights each standard deviation by the size of its sample, and by applying a small-sample correction to the overall estimate. The formula for Hedges' ĝ (as used by software such as the Effect Size Generator) is:

$$\hat{g} = \frac{\mathrm{mean}_1 - \mathrm{mean}_2}{\sqrt{\dfrac{(n_1 - 1)SD_1^2 + (n_2 - 1)SD_2^2}{n_1 + n_2 - 2}}}\left(1 - \frac{3}{4(n_1 + n_2) - 9}\right)$$
In the above 'height' example, Hedges' ĝ equals 1.76 (95% confidence interval: 1.70 to 1.82). Note that ĝ comes out slightly larger than Cohen's d here because the pooled standard deviation is weighted by the two unequal sample sizes. If, instead, the available data were from only 90 men and 80 women, Hedges' ĝ would provide a more conservative estimate: 1.70 (with wider 95% confidence intervals: 1.35 to 2.05).
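A sketch reproducing the figures above; the correction factor used is the standard small-sample approximation, which is an assumption about what the cited software implements:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: sample-size-weighted pooled SD plus small-sample correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample bias correction
    return (mean1 - mean2) / pooled_sd * correction

print(f"{hedges_g(1750, 89.93, 2436, 1612, 69.05, 3311):.2f}")  # ~1.76
print(f"{hedges_g(1750, 89.93, 90, 1612, 69.05, 80):.2f}")      # ~1.70
```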

Cohen's f²

Cohen's f² is the appropriate effect size measure to use in the context of an F-test for ANOVA or multiple regression. The f² effect size measure for multiple regression is defined as:

$$f^2 = \frac{R^2}{1 - R^2}$$

where R² is the squared multiple correlation.

The f² effect size measure for hierarchical multiple regression is defined as:

$$f^2 = \frac{R^2_{AB} - R^2_A}{1 - R^2_{AB}}$$

where R²_A is the variance accounted for by a set of one or more independent variables A, and R²_AB is the combined variance accounted for by A and another set of one or more independent variables B.

By convention, effect sizes of 0.02, 0.15, and 0.35 are termed small, medium, and large, respectively (Cohen, 1988).
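A minimal sketch of both definitions above (the R² inputs are hypothetical, chosen only for illustration):

```python
def cohens_f2(r2):
    """Cohen's f-squared for multiple regression."""
    return r2 / (1 - r2)

def cohens_f2_hierarchical(r2_a, r2_ab):
    """Cohen's f-squared for a set B added over a set A."""
    return (r2_ab - r2_a) / (1 - r2_ab)

# Hypothetical R-squared values, purely for illustration.
print(f"{cohens_f2(0.26):.2f}")                     # ~0.35: a 'large' effect
print(f"{cohens_f2_hierarchical(0.20, 0.30):.2f}")  # ~0.14: roughly 'medium'
```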

Cohen's f² can also be found for factorial analysis of variance (ANOVA, aka the F-test) working backwards, using:

$$f^2_{\mathrm{effect}} = \frac{F_{\mathrm{effect}} \, df_{\mathrm{effect}}}{N}$$

In a balanced design (equivalent sample sizes across groups) of ANOVA, the corresponding population parameter of f² is

$$\frac{SS(\mu_1, \mu_2, \dots, \mu_K)}{K \times \sigma^2},$$

wherein μ_j denotes the population mean within the jth group of the total K groups, and σ the equivalent population standard deviation within each group. SS is the sum of squares in ANOVA.

φ, Cramer's φ, or Cramer's V

Phi (φ): $\phi = \sqrt{\dfrac{\chi^2}{N}}$

Cramer's Phi (φc): $\phi_c = \sqrt{\dfrac{\chi^2}{N(k - 1)}}$

The best measure of association for the chi-square test is phi (or Cramer's phi or V). Phi is related to the point-biserial correlation coefficient and Cohen's d, and estimates the extent of the relationship between two variables in a 2 × 2 table.[3] Cramer's phi may be used with variables having more than two levels.

Phi can be computed by finding the square root of the chi-square statistic divided by the sample size.

Similarly, Cramer's phi is found through a slightly more complex formula that takes into account the number of rows or columns (k, the smaller of the two).
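A sketch of both calculations from a contingency table; the table below is hypothetical, and the chi-square statistic is computed from scratch so the example stays self-contained:

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic for a contingency table."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def phi(table):
    return np.sqrt(chi_square(table) / np.sum(table))

def cramers_phi(table):
    table = np.asarray(table)
    k = min(table.shape)  # smaller of the number of rows and columns
    return np.sqrt(chi_square(table) / (np.sum(table) * (k - 1)))

# Hypothetical 2 x 2 table; for a 2 x 2, phi and Cramer's phi coincide.
table = [[30, 10], [15, 25]]
print(f"phi = {phi(table):.3f}, Cramer's phi = {cramers_phi(table):.3f}")
```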

Odds ratio

The odds ratio is another useful effect size. It is appropriate when both variables are binary. For example, consider a study on spelling. In a control group, two students pass the class for every one who fails, so the odds of passing are two to one (or 2/1 = 2). In the treatment group, six students pass for every one who fails, so the odds of passing are six to one (or 6/1 = 6). The effect size can be computed by noting that the odds of passing in the treatment group are three times higher than in the control group (because 6 divided by 2 is 3). Therefore, the odds ratio is 3. However, odds ratio statistics are on a different scale from Cohen's d, so this '3' is not comparable to a Cohen's d of 3.
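A minimal sketch of this arithmetic, with counts matching the example (the function name is illustrative):

```python
def odds_ratio(pass_treat, fail_treat, pass_ctrl, fail_ctrl):
    """Odds ratio: odds of passing in treatment over odds in control."""
    return (pass_treat / fail_treat) / (pass_ctrl / fail_ctrl)

# Treatment: 6 pass per 1 fail; control: 2 pass per 1 fail.
print(odds_ratio(6, 1, 2, 1))  # 3.0
```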

References

  1. ^ Nakagawa, S., & Cuthill, I. C. (2007). Effect size, confidence interval and statistical significance: a practical guide for biologists. Biological Reviews, 82, 591-605.
  2. ^ Brand, A., Bradley, M. T., Best, L. A., & Stoica, G. (2008). Accuracy of effect size estimates from published psychological research. Perceptual and Motor Skills, 106(2), 645-649.
  3. ^ Aaron, B., Kromrey, J. D., & Ferron, J. M. (1998, November). Equating r-based and d-based effect-size indices: Problems with a commonly recommended formula. Paper presented at the annual meeting of the Florida Educational Research Association, Orlando, FL. (ERIC Document Reproduction Service No. ED433353)
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum
  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.
  • Cumming, G. and Finch, S. (2001). A primer on the understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions. Educational and Psychological Measurement, 61, 530–572.
  • Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. San Diego, CA: Academic Press.
  • Lipsey, M.W., & Wilson, D.B. (2001). Practical meta-analysis. Sage: Thousand Oaks, CA.
  • Wilkinson, L., & APA Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.
