In statistics, the explained sum of squares (ESS) is the sum of the squared deviations of the predicted values from the mean of the response variable in a standard regression model (for example, $y_i = a + bx_i + \varepsilon_i$), where $y_i$ is the response variable, $x_i$ is the explanatory variable, $a$ and $b$ are coefficients, $i$ indexes the observations from 1 to $n$, and $\varepsilon_i$ is the error term. In general, the larger the ESS relative to the total sum of squares, the more of the observed variation the model explains.
If $\hat{a}$ and $\hat{b}$ are the estimated coefficients, then
$$\hat{y}_i = \hat{a} + \hat{b} x_i$$
is the $i$-th predicted value of the response variable. The ESS is the sum of the squares of the differences between the predicted values and the mean of the response variable:
$$\mathrm{ESS} = \sum_{i=1}^{n} \left(\hat{y}_i - \bar{y}\right)^2 .$$
In general: total sum of squares = explained sum of squares + residual sum of squares. This partition holds for ordinary least squares regression with an intercept term; the simple linear regression case is derived below.
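The partition can be illustrated numerically. The following Python sketch (the data and variable names are illustrative, not taken from the article) fits a simple linear regression by ordinary least squares and checks that the total sum of squares equals ESS plus RSS:

```python
import numpy as np

# A minimal sketch (illustrative data, not from the article): fit a simple
# linear regression by ordinary least squares and check that
# total SS = explained SS + residual SS.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=x.size)

# Estimated coefficients from the usual least-squares formulas
b_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a_hat = y.mean() - b_hat * x.mean()
y_hat = a_hat + b_hat * x                 # predicted values

ess = np.sum((y_hat - y.mean()) ** 2)     # explained sum of squares
rss = np.sum((y - y_hat) ** 2)            # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)         # total sum of squares

print(ess, rss, tss)
print(np.isclose(tss, ess + rss))         # True: the partition holds
```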
Type I SS
Type I (sequential) sums of squares are obtained by adding the terms of the model one at a time in a specified order (e.g., with the model Y = aX1 + bX2 + cX3): the sum of squares for X1 is calculated from the model Y = aX1, the sum of squares for X2 from the model Y = aX1 + bX2, and the sum of squares for X3 from the full model Y = aX1 + bX2 + cX3. Each term's sum of squares is therefore the additional variability explained when that term enters the model.
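A sketch of this sequential calculation is given below; it fits nested ordinary least squares models (with an intercept, which the article's example model omits) and takes each Type I sum of squares as the drop in residual sum of squares when the next term enters. The data and names are illustrative, not from the article.

```python
import numpy as np

# Type I (sequential) sums of squares for a model with regressors X1, X2, X3,
# fitted with an intercept; illustrative data only.
rng = np.random.default_rng(1)
n = 100
X1, X2, X3 = rng.normal(size=(3, n))
y = 1.0 + 0.8 * X1 + 0.5 * X2 + rng.normal(size=n)

def rss(columns):
    """Residual sum of squares of an OLS fit of y on an intercept plus columns."""
    X = np.column_stack([np.ones(n)] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# Each sequential SS is the drop in residual SS when the next term enters.
ss_x1 = rss([]) - rss([X1])
ss_x2 = rss([X1]) - rss([X1, X2])
ss_x3 = rss([X1, X2]) - rss([X1, X2, X3])
print(ss_x1, ss_x2, ss_x3)
```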
Type II SS
Type II sums of squares are calculated for a variable after adjusting for every other term in the model that does not contain that variable. For example, in the model Y = A + B + C + AB + BC + AC + ABC, the Type II sum of squares for A is obtained by adjusting for B, C, and BC, but not for the interactions AB, AC, and ABC, which contain A.
Type III SS
The Type III sum of squares is calculated by comparing the full model to the full model without the variable of interest, so it is the additional variability explained by adding that variable. It is the same as the Type I sum of squares when the variable is the last term entered into the model.
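The following sketch contrasts the Type II and Type III calculations for a variable A in an illustrative model Y ~ A + B + A:B with continuous regressors; the data and names are ours, not from the article. For categorical factors, the Type III sum of squares additionally depends on the contrast coding, which is omitted here.

```python
import numpy as np

# Type II vs. Type III sums of squares for A in the illustrative model
# Y ~ A + B + A:B with continuous regressors (illustrative data only).
rng = np.random.default_rng(2)
n = 200
A, B = rng.normal(size=(2, n))
y = 1.0 + 0.7 * A + 0.3 * B + 0.2 * A * B + rng.normal(size=n)

def rss(*columns):
    """Residual sum of squares of an OLS fit of y on an intercept plus columns."""
    X = np.column_stack([np.ones(n), *columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# Type II: adjust only for terms that do not contain A (here, just B).
type2_ss_A = rss(B) - rss(A, B)

# Type III: full model without A versus the full model.
type3_ss_A = rss(B, A * B) - rss(A, B, A * B)

print(type2_ss_A, type3_ss_A)
```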
Partitioning in simple linear regression
The following equality is generally true in simple linear regression:
$$\sum_{i=1}^{n} \left(y_i - \bar{y}\right)^2 = \sum_{i=1}^{n} \left(\hat{y}_i - \bar{y}\right)^2 + \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2 ,$$
i.e., total sum of squares = explained sum of squares + residual sum of squares.
Simple derivation
Each deviation of an observation from the mean can be split into a part explained by the fit and a residual:
$$y_i - \bar{y} = \left(\hat{y}_i - \bar{y}\right) + \left(y_i - \hat{y}_i\right).$$
Square both sides and sum over all $i$:
$$\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2 = \sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)^2 + \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 + 2\sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)\left(y_i - \hat{y}_i\right).$$
In general the squared sums would not separate this way, but in this case the cross term vanishes. Simple linear regression gives us
$$\hat{b} = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}$$
[1]. What follows depends on this. Again, simple linear regression gives us
$$\hat{a} = \bar{y} - \hat{b}\bar{x}$$
[2], so that
$$\hat{y}_i - \bar{y} = \hat{b}\left(x_i - \bar{x}\right) \quad\text{and}\quad y_i - \hat{y}_i = \left(y_i - \bar{y}\right) - \hat{b}\left(x_i - \bar{x}\right).$$
Substituting these into the cross term,
$$2\sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)\left(y_i - \hat{y}_i\right) = 2\hat{b}\left[\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right) - \hat{b}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2\right] = 0,$$
because the bracketed expression vanishes by the formula for $\hat{b}$. This establishes the partition.
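The vanishing cross term can also be checked numerically. The short sketch below (illustrative data, not from the article) computes the cross term for a least-squares fit with an intercept:

```python
import numpy as np

# Numerical check (illustrative data) that the cross term in the derivation
# above vanishes for a least-squares fit with an intercept.
rng = np.random.default_rng(3)
x = rng.normal(size=40)
y = 1.0 + 2.0 * x + rng.normal(size=40)

b_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a_hat = y.mean() - b_hat * x.mean()
y_hat = a_hat + b_hat * x

cross_term = 2.0 * np.sum((y_hat - y.mean()) * (y - y_hat))
print(np.isclose(cross_term, 0.0))   # True, up to floating-point error
```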
Notes
- ^ Mendenhall, William. Introduction to Probability and Statistics. Brooks/Cole, 2009, p. 507.
- ^ Mendenhall, William. Introduction to Probability and Statistics. Brooks/Cole, 2009, p. 507.