In [[linear algebra]] and [[statistics]], the '''pseudo-determinant'''<ref name="minka">{{cite web | author = Minka, T.P. | title = Inferring a Gaussian Distribution | url = http://research.microsoft.com/en-us/um/people/minka/papers/gaussian.html | year = 2001}} [http://research.microsoft.com/en-us/um/people/minka/papers/minka-gaussian.pdf PDF]</ref> is the product of all non-zero [[eigendecomposition (matrix)|eigenvalues]] of a [[square matrix]]. It coincides with the regular [[determinant]] when the matrix is [[invertible matrix|non-singular]].

== Definition ==

The pseudo-determinant of a square ''n''-by-''n'' matrix '''A''' may be defined as:

:<math>|\mathbf{A}|_+ = \lim_{\alpha\to 0} \frac{|\mathbf{A} + \alpha \mathbf{I}|}{\alpha^{n-\operatorname{rank}(\mathbf{A})}}</math>

where |'''A'''| denotes the usual [[determinant]], '''I''' denotes the [[identity matrix]] and rank('''A''') denotes the [[rank (linear algebra)|rank]] of '''A'''.
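
For illustration, the limit above can be checked numerically. The following is a minimal sketch (assuming [[NumPy]] and a tolerance-based test for "non-zero", neither of which is part of the definition itself) that forms the product of the numerically non-zero eigenvalues.

<syntaxhighlight lang="python">
import numpy as np

def pseudo_det(A, tol=1e-12):
    """Product of the eigenvalues of A whose magnitude exceeds tol.

    For an invertible matrix this reduces to the ordinary determinant;
    for a singular matrix the zero eigenvalues are simply left out.
    """
    eigvals = np.linalg.eigvals(A)
    nonzero = eigvals[np.abs(eigvals) > tol]
    # Empty-product convention: a zero matrix has pseudo-determinant 1.
    return np.prod(nonzero) if nonzero.size else 1.0

# Rank-1 example: the eigenvalues of A are 5 and 0, so |A|_+ = 5,
# while the ordinary determinant is 0.  The limit formula agrees:
# det(A + aI) = a*(a + 5), and dividing by a**(n - rank) = a**1
# leaves a + 5 -> 5 as a -> 0.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(pseudo_det(A))      # approximately 5.0
print(np.linalg.det(A))   # approximately 0.0
</syntaxhighlight>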

== Definition of pseudo-determinant using Vahlen matrix ==

The Vahlen matrix of a conformal transformation, the Möbius transformation (i.e. <math>(ax+b)(cx+d)^{-1}</math> for <math>a,b,c,d\in \mathcal{G}(p,q)</math>), is defined as <math>[f]=\begin{bmatrix}a & b \\ c & d \end{bmatrix}</math>. By the pseudo-determinant of the Vahlen matrix for the conformal transformation, we mean

:<math>\operatorname{pdet} \begin{bmatrix}a & b\\ c & d\end{bmatrix} = ad^\dagger - bc^\dagger.</math>

If <math>\operatorname{pdet}[f]>0</math>, the transformation is sense-preserving (a rotation), whereas if <math>\operatorname{pdet}[f]<0</math>, the transformation is sense-reversing (a reflection).
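
For instance, in the special case where <math>a,b,c,d</math> are real scalars (on which the conjugation <math>{}^\dagger</math> acts as the identity), the pseudo-determinant reduces to the ordinary 2&times;2 determinant:

:<math>\operatorname{pdet}\begin{bmatrix}a & b\\ c & d\end{bmatrix} = ad - bc,</math>

so the real Möbius transformation <math>x \mapsto (ax+b)(cx+d)^{-1}</math> is sense-preserving exactly when <math>ad-bc>0</math>.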

== Computation for positive semi-definite case ==

If <math>A</math> is [[positive-definite matrix|positive semi-definite]], then the [[singular value decomposition|singular values]] and [[eigendecomposition (matrix)|eigenvalues]] of <math>A</math> coincide. In this case, if the [[singular value decomposition]] ('''SVD''') is available, then <math>|\mathbf{A}|_+</math> may be computed as the product of the non-zero singular values. If all singular values are zero, then the pseudo-determinant is 1.
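
A minimal sketch of this computation (again assuming NumPy and a numerical tolerance for deciding which singular values count as zero):

<syntaxhighlight lang="python">
import numpy as np

def pseudo_det_psd(A, tol=1e-12):
    """Pseudo-determinant of a positive semi-definite matrix via its SVD."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, in descending order
    s = s[s > tol]                          # keep only the numerically non-zero ones
    # If every singular value is zero, the empty product gives 1.
    return np.prod(s) if s.size else 1.0

# Rank-2 example: a 3x3 positive semi-definite matrix built from two columns.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
A = X @ X.T   # singular, rank 2
print(pseudo_det_psd(A))   # product of the two non-zero eigenvalues of A
</syntaxhighlight>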

== Application in statistics ==

If a statistical procedure ordinarily compares distributions in terms of the determinants of variance-covariance matrices then, in the case of singular matrices, this comparison can be undertaken by using a combination of the ranks of the matrices and their pseudo-determinants, with the matrix of higher rank being counted as "largest" and the pseudo-determinants only being used if the ranks are equal.<ref>[http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_rreg_sect021.htm SAS documentation on "Robust Distance"]</ref> Thus pseudo-determinants are sometimes presented in the outputs of statistical programs in cases where covariance matrices are singular.<ref>Bohling, Geoffrey C. (1997) "GSLIB-style programs for discriminant analysis and regionalized classification", ''Computers & Geosciences'', 23 (7), 739–761. {{DOI|10.1016/S0098-3004(97)00050-2}}</ref>
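
A sketch of this comparison rule (the helper functions and the NumPy-based rank and tolerance choices are illustrative, not taken from any particular statistical package):

<syntaxhighlight lang="python">
import numpy as np

def pseudo_det_psd(cov, tol=1e-12):
    """Product of the numerically non-zero singular values (1 if there are none)."""
    s = np.linalg.svd(cov, compute_uv=False)
    s = s[s > tol]
    return np.prod(s) if s.size else 1.0

def compare_covariances(cov1, cov2, tol=1e-12):
    """Order two (possibly singular) covariance matrices.

    Returns 1 if cov1 counts as "larger", -1 if cov2 does, and 0 on a tie.
    Higher rank wins outright; the pseudo-determinant is consulted only
    when the ranks are equal.
    """
    r1 = np.linalg.matrix_rank(cov1, tol=tol)
    r2 = np.linalg.matrix_rank(cov2, tol=tol)
    if r1 != r2:
        return 1 if r1 > r2 else -1
    d1, d2 = pseudo_det_psd(cov1, tol), pseudo_det_psd(cov2, tol)
    if d1 > d2:
        return 1
    return -1 if d1 < d2 else 0
</syntaxhighlight>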

== See also ==
*[[Matrix determinant]]
*[[Moore-Penrose pseudoinverse]], which can also be obtained in terms of the non-zero singular values.

== References ==
<references/>

[[Category:Multivariate statistics]]
[[Category:Matrices]]

{{statistics-stub}}