Linear regression

From Wikipedia, the free encyclopedia

Linear regression is a method of data analysis intended for use with a set of paired observations on two variables measured on the same set of Statistical Units. Conventionally, we refer to one of the variables as independent (usually labeled X) and the other as dependent (labeled Y). The notion of an independent variable often (but not always) implies the ability to choose the levels of the independent variable and that the dependent variable will respond naturally, as in the Stimulus Response Model.


The two variables are assumed to be related, at least approximately, by the straight-line model

  • Y = alpha + beta*X

where alpha (the intercept) and beta (the slope) are the parameters to be estimated from the data.


The analysis has several steps:

  1. Summarize the data by computing the sums of the observations, their squares, and their cross products.
  2. Estimate the parameters: first b, the estimate of beta, and then a, the estimate of alpha.
  3. Calculate and display the residuals, {Y - a - b*X}, the differences between the observations and the values predicted by the fitted equation.
  4. Calculate several ancillary statistics which permit evaluation of the success of the experiment.


Note: A useful alternative to linear regression is robust regression in which mean absolute error is minimized instead of mean squared error as in linear regression. Robust regression is computationally much more intensive than linear regression and is somewhat more difficult to implement as well.
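As a rough illustration of the idea, a robust fit of this kind can be obtained by minimizing the mean absolute error numerically. The sketch below uses NumPy and SciPy's general-purpose optimizer; the data, the starting point, and the choice of the Nelder-Mead method are illustrative assumptions rather than part of the method described in this article.

  import numpy as np
  from scipy.optimize import minimize

  # Illustrative data; the last point is a deliberate outlier.
  xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
  ys = np.array([2.1, 3.9, 6.2, 8.1, 30.0])

  # Mean absolute error of the line y = a + b*x for parameters p = (a, b).
  def mae(p):
      a, b = p
      return np.mean(np.abs(ys - a - b * xs))

  # Nelder-Mead is chosen here because the absolute-error objective is not smooth.
  result = minimize(mae, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
  a_robust, b_robust = result.x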

Summarizing the data
We sum the observations, the squares of the X's and Y's, and the products X*Y to obtain the following quantities:
  • SX = X1 + X2 +...+ Xn and SY similarly
  • SXX = X1^2 + X2^2 +...+ Xn^2 and SYY similarly
  • SXY = X1Y1 + X2Y2 +...+ XnYn
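As a small illustration, these running sums might be computed as follows in Python; the data values are invented for the example.

  # Paired observations (illustrative values only).
  xs = [1.0, 2.0, 3.0, 4.0, 5.0]
  ys = [2.1, 3.9, 6.2, 8.1, 9.8]

  n = len(xs)
  SX = sum(xs)                              # X1 + X2 + ... + Xn
  SY = sum(ys)                              # Y1 + Y2 + ... + Yn
  SXX = sum(x * x for x in xs)              # X1^2 + ... + Xn^2
  SYY = sum(y * y for y in ys)              # Y1^2 + ... + Yn^2
  SXY = sum(x * y for x, y in zip(xs, ys))  # X1*Y1 + ... + Xn*Yn
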
Estimating beta
We use the summary statistics above to calculate b, the estimate of beta.
  • b = (n*SXY - SX*SY) / (n*SXX - SX*SX)
Estimating alpha
We use the estimate of beta and the other statistics to estimate alpha by:
  • a = (SY - b*SX)/n
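
Putting the two estimation formulas together, a minimal self-contained sketch in Python might look like the following; the helper name fit_line and the sample data are assumptions made for the illustration.

  def fit_line(xs, ys):
      """Least-squares estimates (a, b) for the model Y = a + b*X."""
      n = len(xs)
      SX, SY = sum(xs), sum(ys)
      SXX = sum(x * x for x in xs)
      SXY = sum(x * y for x, y in zip(xs, ys))
      b = (n * SXY - SX * SY) / (n * SXX - SX * SX)  # estimate of beta
      a = (SY - b * SX) / n                          # estimate of alpha
      return a, b

  a, b = fit_line([1.0, 2.0, 3.0, 4.0, 5.0], [2.1, 3.9, 6.2, 8.1, 9.8])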

Displaying the residuals
The first method of displaying the residuals uses the Histogram or cumulative distribution to depict the similarity (or lack thereof) to a Normal distribution. Non-normality suggests that the model may not be a good summary description of the data.


We plot the residuals, (Y - a - b*X), against the independent variable, X. There should be no discernible trend or pattern if the model is satisfactory for these data; a short sketch of the residual computation follows the list below. Some of the possible problems are:
  • Residuals increase (or decrease) as the independent variable increases -- indicates mistakes in the calculations -- find the mistakes and correct them.
  • Residuals first rise and then fall (or first fall and then rise) -- indicates that the appropriate model is (at least) quadratic. See Polynomial Regression.
  • One residual is much larger than the others and opposite in sign -- suggests that there is one unusual observation which is distorting the fit --
    • Verify its value before publishing or
    • Eliminate it, document your decision to do so, and recalculate the statistics.
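
The residual computation itself is simple; the following Python sketch uses the same illustrative data as above, with a and b taken from a least-squares fit of those values.

  xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # illustrative data
  ys = [2.1, 3.9, 6.2, 8.1, 9.8]
  a, b = 0.14, 1.96                 # estimates from a least-squares fit of the data above

  # Residuals Y - a - b*X; plotted against X they should show no trend or pattern.
  residuals = [y - a - b * x for x, y in zip(xs, ys)]
  for x, r in zip(xs, residuals):
      print(x, round(r, 3))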

Ancillary statistics
The sum of squared deviations can be partitioned as in ANOVA to indicate what part of the dispersion of the dependent variable is explained by the independent variable.


The correlation coefficient, r, can be calculated by
  • r = (n*SXY - SX*SY) / sqrt[(n*SXX - SX^2) * (n*SYY - SY^2)]
This statistic is a measure of how well a straight line describes the data. Values near zero suggest that the model is ineffective. r^2 is frequently interpreted as the fraction of the variability explained by the independent variable, X.
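
For completeness, a small Python sketch of the correlation coefficient built from the same summary sums (the data are again illustrative):

  from math import sqrt

  def correlation(xs, ys):
      """Correlation coefficient r computed from the summary sums."""
      n = len(xs)
      SX, SY = sum(xs), sum(ys)
      SXX = sum(x * x for x in xs)
      SYY = sum(y * y for y in ys)
      SXY = sum(x * y for x, y in zip(xs, ys))
      return (n * SXY - SX * SY) / sqrt((n * SXX - SX * SX) * (n * SYY - SY * SY))

  r = correlation([1.0, 2.0, 3.0, 4.0, 5.0], [2.1, 3.9, 6.2, 8.1, 9.8])
  print(r, r * r)   # r and r^2, the fraction of variability explained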