In [[mathematics]], [[probability theory|probability]], and [[statistics]], a '''multivariate random variable''' or '''random vector''' is a list of mathematical [[Variable (mathematics)|variable]]s each of whose values is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because there may be [[correlation]]s among them — often they represent different properties of an individual [[statistical unit]] (e.g. a particular person, event, etc.). Normally each element of a random vector is a [[real number]].
 
Random vectors are often used as the underlying implementation of various types of aggregate [[random variable]]s, e.g. a [[random matrix]], [[random tree]], [[random sequence]], [[random process]], etc.
 
More formally, a multivariate random variable is a [[Column vector|column]] [[vector space|vector]] <math>\mathbf{X}=(X_1,...,X_n)^T </math> (or its [[transpose]], which is a [[row vector]]) whose components are [[scalar (mathematics)|scalar]]-valued [[random variable]]s on the same [[probability space]] <math>(\Omega, \mathcal{F}, P)</math>, where <math>\Omega</math> is the [[sample space]], <math>\mathcal{F}</math> is the [[sigma-algebra]] (the collection of all events), and <math>P</math> is the [[probability measure]] (a function returning each event's [[probability]]).
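
As a minimal illustrative sketch (not part of the formal definition), the following Python/NumPy snippet draws realisations of a hypothetical 3-dimensional random vector whose components are scalar random variables driven by the same underlying source of randomness; the particular component distributions are arbitrary choices made only for this example.

<syntaxhighlight lang="python">
import numpy as np

# One shared generator stands in for the common probability space (Omega, F, P).
rng = np.random.default_rng(seed=0)

def draw_X():
    """Return one realisation of the random vector X = (X1, X2, X3)^T."""
    x1 = rng.normal(loc=0.0, scale=1.0)   # X1 ~ Normal(0, 1)
    x2 = rng.exponential(scale=2.0)       # X2 ~ Exponential with mean 2
    x3 = x1 + rng.normal(scale=0.5)       # X3 is deliberately correlated with X1
    return np.array([x1, x2, x3])         # one realisation, stored as a length-3 array

print(draw_X())   # a different numeric vector on every call
</syntaxhighlight>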
 
==Probability distribution==
Every random vector gives rise to a probability measure on <math>\mathbb{R}^n</math> with the [[Borel algebra]] as the underlying sigma-algebra. This measure is also known as the [[joint probability distribution]], the joint distribution, or the multivariate distribution of the random vector.
 
The [[Probability distribution|distributions]] of each of the component random variables <math>X_i</math> are called [[marginal distribution]]s. The [[conditional probability distribution]] of <math>X_i</math> given <math>X_j</math> is the probability distribution of <math>X_i</math> when <math>X_j</math> is known to be a particular value.
 
==Operations on random vectors==
Random vectors can be subjected to the same kinds of [[Euclidean vector#Basic properties|algebraic operations]] as can non-random vectors: addition, subtraction, multiplication by a [[Scalar (mathematics)|scalar]], and the taking of [[Dot product|inner products]].
 
Similarly, a new random vector <math>\mathbf{Y}</math> can be defined by applying an affine transformation <math>g\colon \mathbb{R}^n \to \mathbb{R}^n</math> to a random vector <math>\mathbf{X}</math>:
 
:<math>\mathbf{Y}=\mathcal{A}\mathbf{X}+b</math>, where <math>\mathcal{A}</math> is an <math>n \times n</math> matrix and <math>b</math> is an <math>n \times  1</math> column vector.
 
If <math>\mathcal{A}</math> is invertible and the probability density of <math>\textstyle\mathbf{X}</math> is <math>f_{\mathbf{X}}</math>, then the probability density of <math>\mathbf{Y}</math> is
 
:<math>f_{\mathbf{Y}}(y)=\frac{f_{\mathbf{X}}(\mathcal{A}^{-1}(y-b))}{|\det\mathcal{A}|}</math>.
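
As a hedged numerical sketch of this change-of-variables formula (Python with NumPy and SciPy; the particular matrix <math>\mathcal{A}</math>, shift <math>b</math> and the choice of a standard bivariate normal <math>\mathbf{X}</math> are illustrative assumptions, not part of the result):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative choices: X standard bivariate normal, A invertible, b a shift.
A = np.array([[2.0, 1.0],
              [0.0, 1.5]])
b = np.array([1.0, -2.0])

f_X = multivariate_normal(mean=np.zeros(2), cov=np.eye(2)).pdf

def f_Y(y):
    """Density of Y = A X + b via the change-of-variables formula."""
    x = np.linalg.solve(A, y - b)            # A^{-1} (y - b)
    return f_X(x) / abs(np.linalg.det(A))

# For this particular X, Y is exactly N(b, A A^T), so we can cross-check:
f_Y_exact = multivariate_normal(mean=b, cov=A @ A.T).pdf
y = np.array([0.5, 0.3])
print(f_Y(y), f_Y_exact(y))                  # the two values coincide
</syntaxhighlight>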
 
==Expected value, covariance, and cross-covariance==
The [[expected value]] or mean of a random vector <math>\mathbf{X}</math> is a fixed vector <math>\operatorname{E}[\mathbf{X}]</math> whose elements are the expected values of the respective random variables.
 
The [[covariance matrix]] (also called the variance-covariance matrix) of an <math>n \times 1</math> random vector is an <math>n \times n</math> [[Matrix (mathematics)|matrix]] whose <math>(i,j)^{th}</math> element is the [[covariance]] between the <math>i^{th}</math> and the <math>j^{th}</math> random variables. The covariance matrix is the expected value, element by element, of the <math>n \times n</math> matrix [[matrix multiplication|computed as]] <math>[\mathbf{X}-\operatorname{E}[\mathbf{X}]][\mathbf{X}-\operatorname{E}[\mathbf{X}]]^T</math>, where the superscript T refers to the transpose of the indicated vector:
 
:<math>\operatorname{Var}[\mathbf{X}]=\operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{T}]. </math>
 
By extension, the [[cross-covariance matrix]] between two random vectors <math>\mathbf{X}</math> and <math>\mathbf{Y}</math> (<math>\mathbf{X}</math> having <math>n</math> elements and <math>\mathbf{Y}</math> having <math>p</math> elements) is the <math>n \times p</math> matrix
 
:<math>\operatorname{Cov}[\mathbf{X},\mathbf{Y}]=\operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^{T}], </math>
 
where again the indicated matrix expectation is taken element-by-element in the matrix. The cross-covariance matrix <math>\operatorname{Cov}[\mathbf{Y},\mathbf{X}]</math> is simply the transpose of the matrix <math>\operatorname{Cov}[\mathbf{X},\mathbf{Y}]</math>.
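
The following sketch estimates these quantities from simulated data (the distributions, sample size and the particular construction of <math>\mathbf{Y}</math> are illustrative assumptions; <code>np.cov</code> returns the sample covariance matrix, and the cross-covariance matrix is formed directly from the centred data):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000

# X is a 3-dimensional random vector; Y is a 2-dimensional one built from X plus noise.
X = rng.multivariate_normal([0.0, 1.0, -1.0],
                            [[1.0, 0.3, 0.0],
                             [0.3, 2.0, 0.5],
                             [0.0, 0.5, 1.5]], size=n_samples)
Y = X[:, :2] + rng.normal(size=(n_samples, 2))

mean_X = X.mean(axis=0)                          # estimate of E[X], a 3-vector
cov_X = np.cov(X, rowvar=False)                  # estimate of Var[X], a 3x3 matrix
Xc, Yc = X - mean_X, Y - Y.mean(axis=0)          # centred data
cov_XY = Xc.T @ Yc / (n_samples - 1)             # estimate of Cov[X, Y], a 3x2 matrix
cov_YX = Yc.T @ Xc / (n_samples - 1)             # estimate of Cov[Y, X], a 2x3 matrix

print(np.allclose(cov_YX, cov_XY.T))             # True: Cov[Y, X] = Cov[X, Y]^T
</syntaxhighlight>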
 
==Further properties==
===Expectation of a quadratic form===
One can take the expectation of a quadratic form in the random vector ''X'' as follows:<ref name=Kendrick>Kendrick, David, ''Stochastic Control for Economic Models'', McGraw-Hill, 1981.</ref>{{rp|pp. 170–171}}
 
:<math>\operatorname{E}(X^{T}AX) = [\operatorname{E}(X)]^{T}A[\operatorname{E}(X)] + \operatorname{tr}(AC),</math>
 
where ''C'' is the covariance matrix of ''X'' and tr refers to the [[Trace (linear algebra)|trace]] of a matrix — that is, to the sum of the elements on its main diagonal (from upper left to lower right).  Since the quadratic form is a scalar, so is its expectation.
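
As a hedged Monte Carlo check of this identity (the mean vector, covariance matrix and matrix ''A'' below are arbitrary illustrative choices; the normal distribution is used only as a convenient way to simulate a random vector with the prescribed mean and covariance):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([1.0, -0.5, 2.0])                  # E[X]
C = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.5, 0.3],
              [0.0, 0.3, 0.8]])                  # covariance matrix of X
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 3.0]])

# Monte Carlo estimate of E[X^T A X]
X = rng.multivariate_normal(mu, C, size=1_000_000)
mc = np.mean(np.einsum('ni,ij,nj->n', X, A, X))

# Closed form: E(X)^T A E(X) + tr(A C)
closed = mu @ A @ mu + np.trace(A @ C)
print(mc, closed)                                # agree up to Monte Carlo error
</syntaxhighlight>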
 
'''Proof''': Let <math>\mathbf{z}</math> be an <math>m \times 1</math> random vector with <math>\operatorname{E}[\mathbf{z}] = \mu</math> and <math>\operatorname{Cov}[\mathbf{z}]= V</math> and let <math>A</math> be an <math>m \times m</math> non-stochastic matrix.
 
Using the [[Computational_formula_for_the_variance#Generalization_to_covariance|formula for the covariance]], if we set <math>\mathbf{X} = \mathbf{z}'</math> and <math>\mathbf{Y} = \mathbf{z}'A'</math>, we see that:
 
:<math>\operatorname{Cov}[\mathbf{X},\mathbf{Y}] = \operatorname{E}[\mathbf{X}\mathbf{Y}']-\operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]'</math>
 
Hence
 
:<math>\begin{align}
E(XY')  &= \operatorname{Cov}(X,Y)+E(X)E(Y)' \\
E(z'Az) &= \operatorname{Cov}(z',z'A')+E(z')E(z'A')'  \\
&=\operatorname{Cov}(z', z'A') + \mu' (\mu'A')' \\
&=\operatorname{Cov}(z', z'A') + \mu' A \mu ,
\end{align}</math>
 
which leaves us to show that
 
:<math>\operatorname{Cov}(z', z'A')=\operatorname{trace}(AV).</math>
 
This is shown using the fact that one can [[Trace_(linear_algebra)#Properties|cyclically permute the factors inside a trace]] without changing the result (e.g. trace(''AB'') = trace(''BA'')).
 
From the [[Covariance#Definition|definition of the covariance]], we see that
 
:<math>\begin{align}
\operatorname{Cov}(z',z'A') &= E\left[\left(z' - E(z') \right)\left(z'A' - E\left(z'A'\right) \right)' \right] \\
&= E\left[ (z' - \mu') (z'A' - \mu' A' )' \right]\\
&= E\left[ (z - \mu)' (Az - A\mu) \right].
\end{align}</math>
 
And since
 
:<math>\left( {z - \mu } \right)'\left( {Az - A\mu } \right)</math>
 
is a scalar (a <math>1 \times 1</math> matrix, and hence equal to its own trace), we have
 
:<math>(z - \mu)' ( Az - A\mu)= \operatorname{trace}\left[ {(z - \mu )'(Az - A\mu )} \right] = \operatorname{trace} \left[(z - \mu )'A(z - \mu ) \right]</math>
 
Using the cyclic-permutation property of the trace, we get:

:<math>\operatorname{trace}\left[ {(z - \mu )'A(z - \mu )} \right] = \operatorname{trace}\left[ {A(z - \mu )(z - \mu )'} \right],</math>
 
and by plugging this into the original formula we get:
 
:<math>\begin{align}
\operatorname{Cov} \left( {z',z'A'} \right) &= E\left[ {\left( {z - \mu } \right)' (Az - A\mu)} \right] \\
&= E \left[ \operatorname{trace}\left[ A(z - \mu )(z - \mu )'\right] \right] \\
&= \operatorname{trace} \left[ {A \cdot E \left[(z - \mu )(z - \mu )'\right] } \right] \\
&= \operatorname{trace} [A V].
\end{align}</math>
 
===Expectation of the product of two different quadratic forms===
One can take the expectation of the product of two different quadratic forms in a zero-mean [[Joint normality|Gaussian]] random vector ''X'' as follows:<ref name=Kendrick/>{{rp|pp. 162-176}}
 
:<math>\operatorname{E}\left[(X^{T}AX)(X^{T}BX)\right] = 2\operatorname{trace}(ACBC) + \operatorname{trace}(AC)\operatorname{trace}(BC),</math>
 
where again ''C'' is the covariance matrix of ''X''. Again, since both quadratic forms are scalars and hence their product is a scalar, the expectation of their product is also a scalar.
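
A hedged Monte Carlo check of this identity (the symmetric matrices ''A'' and ''B'' and the covariance matrix ''C'' below are arbitrary illustrative choices; ''X'' is simulated as a zero-mean Gaussian vector, as the statement requires):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

C = np.array([[1.0, 0.4],
              [0.4, 2.0]])                       # covariance matrix of X
A = np.array([[1.0, 0.5],
              [0.5, 3.0]])
B = np.array([[2.0, -0.3],
              [-0.3, 1.0]])

X = rng.multivariate_normal(np.zeros(2), C, size=2_000_000)
qA = np.einsum('ni,ij,nj->n', X, A, X)           # X^T A X for every sample
qB = np.einsum('ni,ij,nj->n', X, B, X)           # X^T B X for every sample
mc = np.mean(qA * qB)

closed = 2 * np.trace(A @ C @ B @ C) + np.trace(A @ C) * np.trace(B @ C)
print(mc, closed)                                # agree up to Monte Carlo error
</syntaxhighlight>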
 
==Applications==
===Portfolio theory===
In [[portfolio theory]] in [[finance]], an objective often is to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value.  Here the random vector is the vector ''r'' of random returns on the individual assets, and the portfolio return ''p'' (a random scalar) is the inner product of the vector of random returns with a vector ''w'' of portfolio weights — the fractions of the portfolio placed in the respective assets. Since ''p'' = ''w''<sup>T</sup>''r'', the expected value of the portfolio return is ''w''<sup>T</sup>E(''r'') and the variance of the portfolio return can be shown to be ''w''<sup>T</sup>C''w'', where C is the covariance matrix of ''r''.
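
A minimal numerical sketch of these portfolio formulas (the expected returns, covariance matrix and weights below are made-up illustrative values):

<syntaxhighlight lang="python">
import numpy as np

mu = np.array([0.08, 0.05, 0.12])       # E[r], expected returns of three assets
C = np.array([[0.10, 0.02, 0.04],
              [0.02, 0.05, 0.01],
              [0.04, 0.01, 0.20]])      # covariance matrix of r
w = np.array([0.5, 0.3, 0.2])           # portfolio weights, summing to 1

expected_return = w @ mu                # w^T E(r)
portfolio_variance = w @ C @ w          # w^T C w
print(expected_return, portfolio_variance)
</syntaxhighlight>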
 
===Regression theory===
In [[linear regression]] theory, we have data on ''n'' observations on a dependent variable ''y'' and ''n'' observations on each of ''k'' independent variables ''x<sub>j</sub>''. The observations on the dependent variable are stacked into a column vector ''y''; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a matrix ''X'' of observations on the independent variables.  Then the following regression equation is postulated as a description of the process that generated the data:
 
:<math>y = X \beta + e,</math>
 
where β is a postulated fixed but unknown vector of ''k'' response coefficients, and ''e'' is an unknown random vector reflecting random influences on the dependent variable.  By some chosen technique such as [[ordinary least squares]], a vector <math>\hat \beta</math> is chosen as an estimate of β, and the estimate of the vector ''e'', denoted <math>\hat e</math>, is computed as
 
:<math>\hat e = y - X \hat \beta.</math>
 
Then the statistician must analyze the properties of <math>\hat \beta</math> and <math>\hat e</math>, which are viewed as random vectors since a randomly different selection of  ''n'' cases to observe would have resulted in different values for them.
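
A hedged sketch of this setup on synthetic data (the sample size, regressors and the "true" coefficient vector are simulation choices, unknown to the estimator; ordinary least squares is computed with NumPy's least-squares routine):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 3

X = rng.normal(size=(n, k))                      # observations on the k regressors
beta_true = np.array([1.5, -2.0, 0.7])           # known only to the simulation
e = rng.normal(scale=0.5, size=n)                # unobserved random disturbances
y = X @ beta_true + e                            # postulated data-generating process

# Ordinary least squares estimate and the implied residual vector
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e_hat = y - X @ beta_hat

print(beta_hat)        # a random vector: it would change with a different sample
print(e_hat[:5])
</syntaxhighlight>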
 
==References==
{{reflist}}
 
[[Category:Probability theory]]
[[Category:Multivariate statistics]]
[[Category:Algebra of random variables]]
 
[[de:Zufallsvariable#Mehrdimensionale Zufallsvariable]]
[[pl:Zmienna losowa#Uogólnienia]]
