
Bayesian multivariate linear regression


In statistics, Bayesian multivariate linear regression is a Bayesian approach to multivariate linear regression, i.e. linear regression in which the predicted outcome is a vector of correlated random variables rather than a single scalar random variable.

Details

Consider a collection of m linear regression problems for n observations, related through a set of k common predictor variables, collected in the $n \times k$ design matrix $\mathbf{X}$, and with jointly normal errors $\{\boldsymbol\epsilon_c\}$:

    $\mathbf{y}_c = \mathbf{X}\boldsymbol\beta_c + \boldsymbol\epsilon_c, \qquad c = 1, \ldots, m,$

where the subscript c denotes a column vector of n observations on the c-th measurement (dependent variable), and $\boldsymbol\beta_c$ is the corresponding k-length vector of regression coefficients.

The noise terms are jointly normal across the m regressions within each observation. That is, each row vector $\boldsymbol\epsilon_{i,\cdot}$ of the error matrix represents an m-length vector of correlated errors, one for each of the dependent variables:

    $\boldsymbol\epsilon_{i,\cdot} \sim N(0, \boldsymbol\Sigma_{\epsilon}),$

where the noise is i.i.d. and normally distributed across all rows $i = 1, \ldots, n$, with a common $m \times m$ covariance matrix $\boldsymbol\Sigma_{\epsilon}$.

The coefficient vectors of the m regressions can be stacked side by side,

    $\mathbf{B} = [\boldsymbol\beta_1 \;\; \boldsymbol\beta_2 \;\; \cdots \;\; \boldsymbol\beta_m],$

where $\mathbf{B}$ is a $k \times m$ matrix.

We can write the entire regression problem in matrix form as:

    $\mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{E},$

where $\mathbf{Y} = [\mathbf{y}_1 \;\cdots\; \mathbf{y}_m]$ and $\mathbf{E} = [\boldsymbol\epsilon_1 \;\cdots\; \boldsymbol\epsilon_m]$ are $n \times m$ matrices.
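As a minimal numerical sketch of this setup (using NumPy; the dimensions n, k, m and the parameter values below are arbitrary illustrative choices, not part of the model itself), data from the model can be simulated as:

    import numpy as np

    rng = np.random.default_rng(0)

    n, k, m = 50, 3, 2                        # observations, predictors, dependent variables
    X = rng.normal(size=(n, k))               # common n x k design matrix
    B = rng.normal(size=(k, m))               # true k x m coefficient matrix
    Sigma_eps = np.array([[1.0, 0.6],
                          [0.6, 2.0]])        # m x m error covariance

    # Each row of E is an m-length vector of correlated errors, i.i.d. across rows.
    E = rng.multivariate_normal(np.zeros(m), Sigma_eps, size=n)

    Y = X @ B + E                             # n x m response matrix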

The classical, frequentist linear least squares solution is simply to estimate the matrix of regression coefficients $\hat{\mathbf{B}}$ using the Moore–Penrose pseudoinverse:

    $\hat{\mathbf{B}} = (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{Y}.$
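Continuing the simulated example above, this estimate can be computed directly (a sketch only):

    # Least squares estimate B_hat = (X^T X)^{-1} X^T Y.
    B_hat = np.linalg.solve(X.T @ X, X.T @ Y)

    # Equivalent, via the Moore-Penrose pseudoinverse (numerically more robust).
    assert np.allclose(B_hat, np.linalg.pinv(X) @ Y)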

To obtain the Bayesian solution, we need to specify the conditional likelihood and then find the appropriate conjugate prior. As in the univariate case of Bayesian linear regression, we will find that we can specify a natural conditional conjugate prior (which is scale dependent).

Let us write our conditional likelihood as

    $\rho(\mathbf{E}|\boldsymbol\Sigma_{\epsilon}) \propto |\boldsymbol\Sigma_{\epsilon}|^{-n/2} \exp\left(-\tfrac{1}{2}\operatorname{tr}\left(\mathbf{E}^{\rm T}\mathbf{E}\,\boldsymbol\Sigma_{\epsilon}^{-1}\right)\right);$

writing the error $\mathbf{E}$ in terms of $\mathbf{Y}$, $\mathbf{X}$, and $\mathbf{B}$ yields

    $\rho(\mathbf{Y}|\mathbf{X},\mathbf{B},\boldsymbol\Sigma_{\epsilon}) \propto |\boldsymbol\Sigma_{\epsilon}|^{-n/2} \exp\left(-\tfrac{1}{2}\operatorname{tr}\left((\mathbf{Y}-\mathbf{X}\mathbf{B})^{\rm T}(\mathbf{Y}-\mathbf{X}\mathbf{B})\,\boldsymbol\Sigma_{\epsilon}^{-1}\right)\right).$
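As a numerical check of this form (continuing the example above; the helper name log_lik_trace is an arbitrary choice for this sketch), the trace expression agrees with the product of row-wise multivariate normal densities:

    from scipy.stats import multivariate_normal

    def log_lik_trace(Y, X, B, Sigma):
        """Log-likelihood from the matrix (trace) form of the density."""
        n, m = Y.shape
        R = Y - X @ B
        _, logdet = np.linalg.slogdet(Sigma)
        return (-0.5 * n * m * np.log(2 * np.pi)
                - 0.5 * n * logdet
                - 0.5 * np.trace(R.T @ R @ np.linalg.inv(Sigma)))

    # Row-wise: each observation y_i ~ N(B^T x_i, Sigma_eps), independent across rows.
    log_lik_rows = sum(multivariate_normal.logpdf(Y[i], mean=X[i] @ B, cov=Sigma_eps)
                       for i in range(len(Y)))

    assert np.isclose(log_lik_trace(Y, X, B, Sigma_eps), log_lik_rows)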

We seek a natural conjugate prior, that is, a joint density $\rho(\mathbf{B},\boldsymbol\Sigma_{\epsilon})$ of the same functional form as the likelihood. Since the likelihood is quadratic in $\mathbf{B}$, we re-write the likelihood so it is normal in $(\mathbf{B}-\hat{\mathbf{B}})$, the deviation of $\mathbf{B}$ from the classical sample estimate $\hat{\mathbf{B}}$.

Using the same technique as with Bayesian linear regression, we decompose the exponential term using a matrix form of the sum-of-squares technique. Here, however, we will also need to use matrix differential calculus (the Kronecker product and vectorization transformations).

First, let us apply the sum-of-squares decomposition to obtain a new expression for the likelihood:

    $\rho(\mathbf{Y}|\mathbf{X},\mathbf{B},\boldsymbol\Sigma_{\epsilon}) \propto |\boldsymbol\Sigma_{\epsilon}|^{-(n-k)/2} \exp\left(-\tfrac{1}{2}\operatorname{tr}\left(\hat{\mathbf{E}}^{\rm T}\hat{\mathbf{E}}\,\boldsymbol\Sigma_{\epsilon}^{-1}\right)\right) \, |\boldsymbol\Sigma_{\epsilon}|^{-k/2} \exp\left(-\tfrac{1}{2}\operatorname{tr}\left((\mathbf{B}-\hat{\mathbf{B}})^{\rm T}\mathbf{X}^{\rm T}\mathbf{X}(\mathbf{B}-\hat{\mathbf{B}})\,\boldsymbol\Sigma_{\epsilon}^{-1}\right)\right),$

where $\hat{\mathbf{E}} = \mathbf{Y} - \mathbf{X}\hat{\mathbf{B}}$ is the matrix of least squares residuals.
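The decomposition relies on the residuals $\hat{\mathbf{E}}$ being orthogonal to the columns of $\mathbf{X}$, so the cross terms vanish; it is easy to check numerically (continuing the example above, with an arbitrary trial matrix B_test introduced only for this sketch):

    # (Y - XB)^T (Y - XB) = E_hat^T E_hat + (B - B_hat)^T X^T X (B - B_hat)
    B_test = rng.normal(size=(k, m))     # arbitrary coefficient matrix
    E_hat = Y - X @ B_hat                # least squares residuals

    lhs = (Y - X @ B_test).T @ (Y - X @ B_test)
    rhs = E_hat.T @ E_hat + (B_test - B_hat).T @ X.T @ X @ (B_test - B_hat)
    assert np.allclose(lhs, rhs)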

We would like to develop a conditional form for the priors:

    $\rho(\mathbf{B},\boldsymbol\Sigma_{\epsilon}) = \rho(\boldsymbol\Sigma_{\epsilon})\,\rho(\mathbf{B}|\boldsymbol\Sigma_{\epsilon}),$

where $\rho(\boldsymbol\Sigma_{\epsilon})$ is an inverse-Wishart distribution and $\rho(\mathbf{B}|\boldsymbol\Sigma_{\epsilon})$ is some form of normal distribution in the matrix $\mathbf{B}$. This is accomplished using the vectorization transformation, which converts the likelihood from a function of the matrices $\mathbf{B}, \hat{\mathbf{B}}$ to a function of the vectors $\boldsymbol\beta = \operatorname{vec}(\mathbf{B}), \hat{\boldsymbol\beta} = \operatorname{vec}(\hat{\mathbf{B}})$.
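For concreteness, a prior of this conditional form can be sampled as follows (a sketch using SciPy; the hyperparameters B0, Lambda0, V0 and nu0 are illustrative placeholders, not quantities defined in this section):

    from scipy.stats import invwishart, matrix_normal

    # Illustrative hyperparameters (assumed for this sketch).
    B0      = np.zeros((k, m))     # prior mean of B
    Lambda0 = np.eye(k)            # prior precision for the rows of B
    V0      = np.eye(m)            # inverse-Wishart scale matrix
    nu0     = m + 2                # inverse-Wishart degrees of freedom

    # rho(Sigma_eps): inverse-Wishart.
    Sigma_draw = invwishart.rvs(df=nu0, scale=V0)

    # rho(B | Sigma_eps): matrix normal with column covariance Sigma_eps,
    # making the prior on B scale dependent, as noted above.
    B_draw = matrix_normal.rvs(mean=B0,
                               rowcov=np.linalg.inv(Lambda0),
                               colcov=Sigma_draw)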

Write, using $\operatorname{tr}(\mathbf{A}^{\rm T}\mathbf{C}) = \operatorname{vec}(\mathbf{A})^{\rm T}\operatorname{vec}(\mathbf{C})$,

    $\operatorname{tr}\left((\mathbf{B}-\hat{\mathbf{B}})^{\rm T}\mathbf{X}^{\rm T}\mathbf{X}(\mathbf{B}-\hat{\mathbf{B}})\,\boldsymbol\Sigma_{\epsilon}^{-1}\right) = \operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}})^{\rm T}\operatorname{vec}\left(\mathbf{X}^{\rm T}\mathbf{X}(\mathbf{B}-\hat{\mathbf{B}})\,\boldsymbol\Sigma_{\epsilon}^{-1}\right).$

Using the identity $\operatorname{vec}(\mathbf{A}\mathbf{B}\mathbf{C}) = (\mathbf{C}^{\rm T}\otimes\mathbf{A})\operatorname{vec}(\mathbf{B})$, we have

    $\operatorname{vec}\left(\mathbf{X}^{\rm T}\mathbf{X}(\mathbf{B}-\hat{\mathbf{B}})\,\boldsymbol\Sigma_{\epsilon}^{-1}\right) = \left(\boldsymbol\Sigma_{\epsilon}^{-1}\otimes\mathbf{X}^{\rm T}\mathbf{X}\right)\operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}}).$

Then

    $\operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}})^{\rm T}\operatorname{vec}\left(\mathbf{X}^{\rm T}\mathbf{X}(\mathbf{B}-\hat{\mathbf{B}})\,\boldsymbol\Sigma_{\epsilon}^{-1}\right) = (\boldsymbol\beta-\hat{\boldsymbol\beta})^{\rm T}\left(\boldsymbol\Sigma_{\epsilon}^{-1}\otimes\mathbf{X}^{\rm T}\mathbf{X}\right)(\boldsymbol\beta-\hat{\boldsymbol\beta}),$

which will lead to a likelihood which is normal in $(\boldsymbol\beta-\hat{\boldsymbol\beta})$.
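These identities are straightforward to verify numerically (continuing the example above; vec() is taken column-wise, i.e. Fortran order):

    # tr((B - B_hat)^T X^T X (B - B_hat) Sigma^{-1})
    #   = vec(B - B_hat)^T (Sigma^{-1} kron X^T X) vec(B - B_hat)
    D = B_test - B_hat
    Sigma_inv = np.linalg.inv(Sigma_eps)

    lhs = np.trace(D.T @ X.T @ X @ D @ Sigma_inv)

    vec_D = D.flatten(order='F')                      # vec() stacks the columns
    rhs = vec_D @ np.kron(Sigma_inv, X.T @ X) @ vec_D

    assert np.isclose(lhs, rhs)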

With the likelihood in a more tractable form, we can now find a natural (conditional) conjugate prior.

