Bayesian linear regression

In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters.

Model setup

Consider a standard linear regression problem, in which for i = 1, ..., n we specify the conditional distribution of y_i given a k×1 predictor vector x_i:

y_i = x_i^T \beta + \epsilon_i,

where β is a k×1 vector, and the ϵ_i are independent and identically distributed normal random variables:

\epsilon_i \sim N(0, \sigma^2).

This corresponds to the following likelihood function:

\rho(y \mid X, \beta, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(y - X\beta)^T (y - X\beta)\right).

The ordinary least squares solution is to estimate the coefficient vector using the Moore-Penrose pseudoinverse:

\hat\beta = (X^T X)^{-1} X^T y,

where X is the n×k design matrix, each row of which is a predictor vector x_i^T, and y is the column n-vector [y_1, ..., y_n]^T.
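
For illustration, the least squares estimate can be computed in a few lines of NumPy. The following sketch is not part of the original article, and the data X and y are simulated purely to make the snippet self-contained.

    import numpy as np

    # Simulated data for illustration only.
    rng = np.random.default_rng(0)
    n, k = 50, 3
    X = rng.normal(size=(n, k))                 # n-by-k design matrix
    beta_true = np.array([1.0, -2.0, 0.5])
    y = X @ beta_true + rng.normal(scale=0.3, size=n)

    # Ordinary least squares: beta_hat = (X^T X)^{-1} X^T y.
    # lstsq solves the problem via the pseudoinverse, which is numerically
    # preferable to forming the inverse of X^T X explicitly.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)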

Ordinary least squares is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about β. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters β and σ. The prior can take different functional forms depending on the domain and the information that is available a priori.

With conjugate priors

Conjugate prior distribution

For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically.

A prior ρ(β, σ²) is conjugate to this likelihood function if it has the same functional form with respect to β and σ. Since the log-likelihood is quadratic in β, the log-likelihood is re-written such that the likelihood becomes normal in (β − β̂). Write

(y - X\beta)^T (y - X\beta) = (y - X\hat\beta)^T (y - X\hat\beta) + (\beta - \hat\beta)^T (X^T X)(\beta - \hat\beta),

where the cross term vanishes because X^T(y − Xβ̂) = 0 for the least squares estimate β̂.

The likelihood is now re-written as

\rho(y \mid X, \beta, \sigma^2) \propto (\sigma^2)^{-v/2} \exp\left(-\frac{v s^2}{2\sigma^2}\right) (\sigma^2)^{-(n - v)/2} \exp\left(-\frac{1}{2\sigma^2}(\beta - \hat\beta)^T (X^T X)(\beta - \hat\beta)\right),

where

v s^2 = (y - X\hat\beta)^T (y - X\hat\beta) \quad \text{and} \quad v = n - k,

where k is the number of regression coefficients.

This suggests a form for the prior:

\rho(\beta, \sigma^2) = \rho(\sigma^2)\,\rho(\beta \mid \sigma^2),

where ρ(σ²) is an inverse-gamma distribution

\rho(\sigma^2) \propto (\sigma^2)^{-(v_0/2 + 1)} \exp\left(-\frac{v_0 s_0^2}{2\sigma^2}\right).

In the notation introduced in the inverse-gamma distribution article, this is the density of an Inv-Gamma(a_0, b_0) distribution with a_0 = v_0/2 and b_0 = v_0 s_0²/2, with v_0 and s_0² as the prior values of v and s², respectively. Equivalently, it can also be described as a scaled inverse chi-squared distribution, Scale-inv-χ²(v_0, s_0²).

Further, the conditional prior density ρ(β|σ²) is a normal distribution,

\rho(\beta \mid \sigma^2) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\beta - \mu_0)^T \Lambda_0 (\beta - \mu_0)\right).

In the notation of the normal distribution, the conditional prior distribution is 𝒩(μ_0, σ²Λ_0^{-1}).
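
As a side illustration (not part of the original article), a draw of (β, σ²) from this Normal-inverse-gamma prior proceeds in two steps: first sample σ², then sample β conditional on it. The sketch below assumes SciPy's parametrization of the inverse-gamma distribution (shape a_0, scale b_0).

    import numpy as np
    from scipy.stats import invgamma

    def sample_prior(mu_0, Lambda_0, a_0, b_0, rng=None):
        """Draw (beta, sigma^2) from the Normal-inverse-gamma prior.

        sigma^2 ~ Inv-Gamma(a_0, b_0), then beta | sigma^2 ~ N(mu_0, sigma^2 Lambda_0^{-1}).
        """
        rng = np.random.default_rng() if rng is None else rng
        sigma2 = invgamma.rvs(a=a_0, scale=b_0, random_state=rng)
        beta = rng.multivariate_normal(mu_0, sigma2 * np.linalg.inv(Lambda_0))
        return beta, sigma2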

Posterior distribution

With the prior now specified, the posterior distribution can be expressed as

\rho(\beta, \sigma^2 \mid y, X) \propto \rho(y \mid X, \beta, \sigma^2)\,\rho(\beta \mid \sigma^2)\,\rho(\sigma^2)
\propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(y - X\beta)^T (y - X\beta)\right)
\times (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\beta - \mu_0)^T \Lambda_0 (\beta - \mu_0)\right)
\times (\sigma^2)^{-(a_0 + 1)} \exp\left(-\frac{b_0}{\sigma^2}\right).

With some re-arrangement, the posterior can be re-written so that the posterior mean μ_n of the parameter vector β can be expressed in terms of the least squares estimator β̂ and the prior mean μ_0, with the strength of the prior indicated by the prior precision matrix Λ_0:

\mu_n = (X^T X + \Lambda_0)^{-1}(X^T X \hat\beta + \Lambda_0 \mu_0).

To justify that μ_n is indeed the posterior mean, the quadratic terms in the exponential can be re-arranged as a quadratic form in β − μ_n.[1]

(y - X\beta)^T (y - X\beta) + (\beta - \mu_0)^T \Lambda_0 (\beta - \mu_0) = (\beta - \mu_n)^T (X^T X + \Lambda_0)(\beta - \mu_n) + y^T y - \mu_n^T (X^T X + \Lambda_0)\mu_n + \mu_0^T \Lambda_0 \mu_0.

Now the posterior can be expressed as a normal distribution times an inverse-gamma distribution:

\rho(\beta, \sigma^2 \mid y, X) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\beta - \mu_n)^T (X^T X + \Lambda_0)(\beta - \mu_n)\right)
\times (\sigma^2)^{-(n + v_0)/2 - 1} \exp\left(-\frac{2 b_0 + y^T y - \mu_n^T (X^T X + \Lambda_0)\mu_n + \mu_0^T \Lambda_0 \mu_0}{2\sigma^2}\right).

Therefore the posterior distribution can be parametrized as follows.

\rho(\beta, \sigma^2 \mid y, X) \propto \rho(\beta \mid \sigma^2, y, X)\,\rho(\sigma^2 \mid y, X),

where the two factors correspond to the densities of 𝒩(μ_n, σ²Λ_n^{-1}) and Inv-Gamma(a_n, b_n) distributions, with the parameters of these given by

\Lambda_n = X^T X + \Lambda_0, \qquad \mu_n = \Lambda_n^{-1}(X^T X \hat\beta + \Lambda_0 \mu_0),
a_n = \frac{n + v_0}{2}, \qquad b_n = b_0 + \frac{1}{2}\left(y^T y + \mu_0^T \Lambda_0 \mu_0 - \mu_n^T \Lambda_n \mu_n\right).

This can be interpreted as Bayesian learning where the parameters are updated according to the following equations.

\mu_n = (X^T X + \Lambda_0)^{-1}(\Lambda_0 \mu_0 + X^T X \hat\beta) = (X^T X + \Lambda_0)^{-1}(\Lambda_0 \mu_0 + X^T y),
\Lambda_n = X^T X + \Lambda_0,
a_n = a_0 + \frac{n}{2},
b_n = b_0 + \frac{1}{2}\left(y^T y + \mu_0^T \Lambda_0 \mu_0 - \mu_n^T \Lambda_n \mu_n\right).
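
These update equations map directly onto a few lines of linear algebra. The following NumPy sketch is an illustration added here, not part of the original article; it computes the posterior hyperparameters from the prior hyperparameters and the data.

    import numpy as np

    def posterior_update(X, y, mu_0, Lambda_0, a_0, b_0):
        """Conjugate update for Bayesian linear regression.

        Prior: beta | sigma^2 ~ N(mu_0, sigma^2 Lambda_0^{-1}),
               sigma^2 ~ Inv-Gamma(a_0, b_0).
        Returns the posterior hyperparameters (mu_n, Lambda_n, a_n, b_n).
        """
        Lambda_n = X.T @ X + Lambda_0
        mu_n = np.linalg.solve(Lambda_n, Lambda_0 @ mu_0 + X.T @ y)
        a_n = a_0 + X.shape[0] / 2.0
        b_n = b_0 + 0.5 * (y @ y + mu_0 @ Lambda_0 @ mu_0 - mu_n @ Lambda_n @ mu_n)
        return mu_n, Lambda_n, a_n, b_n

Note that μ_n is obtained here by solving the linear system Λ_n μ_n = Λ_0 μ_0 + X^T y rather than by explicitly inverting Λ_n, which is the usual numerically stable choice.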

Model evidence

The model evidence p(y|m) is the probability of the data given the model m. It is also known as the marginal likelihood, and as the prior predictive density. Here, the model is defined by the likelihood function p(y|X,β,σ) and the prior distribution on the parameters, i.e. p(β,σ). The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayesian model comparison. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating p(y,β,σ|X) over all possible values of β and σ.

p(y \mid m) = \int p(y \mid X, \beta, \sigma)\, p(\beta, \sigma)\, d\beta\, d\sigma

This integral can be computed analytically and the solution is given in the following equation.[2]

p(y \mid m) = \frac{1}{(2\pi)^{n/2}} \sqrt{\frac{\det(\Lambda_0)}{\det(\Lambda_n)}} \cdot \frac{b_0^{a_0}}{b_n^{a_n}} \cdot \frac{\Gamma(a_n)}{\Gamma(a_0)}

Here Γ denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of β and σ.

p(y \mid m) = \frac{p(\beta, \sigma \mid m)\, p(y \mid X, \beta, \sigma, m)}{p(\beta, \sigma \mid y, X, m)}

Note that this equation is nothing but a re-arrangement of Bayes' theorem. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above.
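
In practice the evidence is usually evaluated on the log scale to avoid overflow in the gamma functions and determinants. A minimal sketch of the closed-form expression above, assuming NumPy and SciPy, could look as follows; it is an illustration, not part of the original article.

    import numpy as np
    from scipy.special import gammaln

    def log_evidence(X, y, mu_0, Lambda_0, a_0, b_0):
        """Log marginal likelihood log p(y | m) for the conjugate model above."""
        n = X.shape[0]
        Lambda_n = X.T @ X + Lambda_0
        mu_n = np.linalg.solve(Lambda_n, Lambda_0 @ mu_0 + X.T @ y)
        a_n = a_0 + n / 2.0
        b_n = b_0 + 0.5 * (y @ y + mu_0 @ Lambda_0 @ mu_0 - mu_n @ Lambda_n @ mu_n)
        _, logdet_0 = np.linalg.slogdet(Lambda_0)
        _, logdet_n = np.linalg.slogdet(Lambda_n)
        return (-0.5 * n * np.log(2.0 * np.pi)
                + 0.5 * (logdet_0 - logdet_n)
                + a_0 * np.log(b_0) - a_n * np.log(b_n)
                + gammaln(a_n) - gammaln(a_0))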

Other cases

In general, it may be impossible or impractical to derive the posterior distribution analytically. However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling[3] or variational Bayes.

The special case μ_0 = 0, Λ_0 = cI is called ridge regression.
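
Indeed, substituting μ_0 = 0 and Λ_0 = cI into the posterior mean derived above gives

\mu_n = (X^T X + c I)^{-1} X^T y,

which is the classical ridge regression estimate of β with regularization parameter c.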

A similar analysis can be performed for the general case of multivariate regression, part of which provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression.


Notes

  1. The intermediate steps are in Fahrmeir et al. (2009) on page 188.
  2. The intermediate steps of this computation can be found in O'Hagan (1994) on page 257.
  3. Carlin and Louis (2008) and Gelman et al. (2003) explain how to use sampling methods for Bayesian linear regression.
