In [[linear algebra]], the '''Cauchy–Binet formula''', named after [[Augustin-Louis Cauchy]] and [[Jacques Philippe Marie Binet]], is an identity for the [[determinant]] of the [[matrix multiplication|product]] of two rectangular [[matrix (mathematics)|matrices]] of transpose shapes (so that the product is well-defined and square). It generalizes the statement that the determinant of a product of square matrices is equal to the product of their determinants. The formula is valid for matrices with entries from any [[commutative ring]].
 
== Statement ==
 
Let ''A'' be an ''m''&times;''n'' matrix and ''B'' an ''n''&times;''m'' matrix. Write [''n''] for the set {&nbsp;1,&nbsp;...,&nbsp;''n''&nbsp;}, and <math>\tbinom{[n]}m</math> for the set of ''m''-[[combination]]s of [''n''] (i.e., subsets of size ''m''; there are [[binomial coefficient|<math>\tbinom nm</math>]] of them). For <math>S\in\tbinom{[n]}m</math>, write ''A''<sub>[''m''],''S''</sub> for the ''m''&times;''m'' matrix whose columns are the columns of ''A'' at indices from ''S'', and ''B''<sub>''S'',[''m'']</sub> for the ''m''&times;''m'' matrix whose rows are the rows of ''B'' at indices from ''S''. The Cauchy–Binet formula then states
 
: <math>\det(AB) = \sum_{S\in\tbinom{[n]}m} \det(A_{[m],S})\det(B_{S,[m]}).</math>
 
Example: taking ''m''&nbsp;=&nbsp;2 and ''n''&nbsp;=&nbsp;3, and matrices <math>A = \begin{pmatrix}1&1&2\\
3& 1& -1\\
\end{pmatrix}</math>
and <math>B =
\begin{pmatrix}1&1\\3&1\\0&2\end{pmatrix}</math>, the Cauchy–Binet formula gives the determinant:
 
:<math>
\det(AB)=
\left|\begin{matrix}1&1\\3&1\end{matrix}\right|
\cdot
\left|\begin{matrix}1&1\\3&1\end{matrix}\right|
+
\left|\begin{matrix}1&2\\1&-1\end{matrix}\right|
\cdot
\left|\begin{matrix}3&1\\0&2\end{matrix}\right|
+
\left|\begin{matrix}1&2\\3&-1\end{matrix}\right|
\cdot
\left|\begin{matrix}1&1\\0&2\end{matrix}\right|.
</math>
 
Indeed, <math>AB =\begin{pmatrix}4&6\\6&2\end{pmatrix}</math>, and its determinant is −28, which is also the value <math>(-2)\times(-2)+(-3)\times 6+(-7)\times 2</math> given by the right hand side of the formula.
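
The example can also be checked numerically. The following sketch (not part of the original article; it assumes Python with NumPy and the standard library's <code>itertools</code>) enumerates the 2-element column subsets and compares both sides:

<syntaxhighlight lang="python">
# Illustrative check of the 2x3 example above (a sketch, assuming NumPy).
import numpy as np
from itertools import combinations

A = np.array([[1, 1, 2],
              [3, 1, -1]], dtype=float)
B = np.array([[1, 1],
              [3, 1],
              [0, 2]], dtype=float)

m, n = A.shape
lhs = np.linalg.det(A @ B)  # det of the 2x2 product AB, equals -28
rhs = sum(np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
          for S in combinations(range(n), m))  # sum over the 2-element subsets S

print(round(lhs), round(rhs))  # -28 -28
</syntaxhighlight>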
 
== Special cases ==
If ''n''&nbsp;&lt;&nbsp;''m'' then <math>\tbinom{[n]}m</math> is the empty set, and the formula says that det(''AB'')&nbsp;=&nbsp;0 (its right hand side is an [[empty sum]]); indeed in this case the [[rank (linear algebra)|rank]] of the ''m''×''m'' matrix ''AB'' is at&nbsp;most&nbsp;''n'', which implies that its determinant is zero. If ''n'' = ''m'', the case where ''A'' and ''B'' are square matrices, <math>\tbinom{[n]}m=\{[n]\} </math> (a [[singleton (mathematics)|singleton]] set), so the sum only involves ''S''&nbsp;=&nbsp;[''n''], and the formula states that det(''AB'')&nbsp;=&nbsp;det(''A'')det(''B'').
 
For ''m''&nbsp;=&nbsp;0, ''A'' and ''B'' are [[empty matrix|empty matrices]] (but of different shapes if ''n''&nbsp;&gt;&nbsp;0), as is their product ''AB''; the summation involves a single term ''S''&nbsp;=&nbsp;Ø, and the formula states 1&nbsp;=&nbsp;1, with both sides given by the determinant of the 0×0 matrix. For ''m''&nbsp;=&nbsp;1, the summation ranges over the collection <math>\tbinom{[n]}1</math> of the ''n'' different singletons taken from [''n''], and both sides of the formula give <math>\textstyle\sum_{j=1}^nA_{1,j}B_{j,1}</math>, the [[dot product]] of the pair of [[Tuple|vector]]s represented by the matrices. The smallest value of ''m'' for which the formula states a non-trivial equality is ''m''&nbsp;=&nbsp;2; it is discussed in the article on the [[Binet–Cauchy identity]].
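
Both degenerate cases are easy to check numerically; the sketch below (an illustration assuming Python with NumPy, not part of the original text) verifies that det(''AB'')&nbsp;=&nbsp;0 when ''n''&nbsp;&lt;&nbsp;''m'' and that the ''m''&nbsp;=&nbsp;1 case reduces to a dot product:

<syntaxhighlight lang="python">
# Two special cases of the Cauchy-Binet formula, checked on small examples.
import numpy as np

# n < m: a 3x2 matrix times a 2x3 matrix gives a 3x3 product of rank at most 2,
# so its determinant must vanish (the sum on the right hand side is empty).
A = np.array([[1., 2.], [3., 4.], [5., 6.]])
B = np.array([[1., 0., 2.], [0., 1., 1.]])
print(round(np.linalg.det(A @ B), 10))          # 0.0

# m = 1: the 1x1 determinant det(ab) is the dot product sum_j a_{1,j} b_{j,1}.
a = np.array([[2., -1., 3.]])                   # 1x3 row vector
b = np.array([[4.], [0.], [5.]])                # 3x1 column vector
print(np.linalg.det(a @ b), (a @ b)[0, 0])      # 23.0 23.0
</syntaxhighlight>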
 
== Proof ==
 
There are various kinds of proofs that can be given for the Cauchy−Binet formula. The proof below is based on formal manipulations only, and avoids using any particular interpretation of determinants, which may be taken to be defined by the [[Leibniz formula (determinant)|Leibniz formula]]. Only their multilinearity with respect to rows and columns, and their alternating property (vanishing in the presence of equal rows or columns) are used; in particular the multiplicative property of determinants for square matrices is not used, but is rather established (the case ''n''&nbsp;=&nbsp;''m''). The proof is valid for arbitrary commutative coefficient rings.
 
The formula can be proved in two steps:
 
# use the fact that both sides are [[multilinear]] (more precisely 2''m''-linear) in the ''rows'' of ''A'' and the ''columns'' of ''B'', to reduce to the case that each row of ''A'' and each column of ''B'' has only one non-zero entry, which is&nbsp;1.
# handle that case using the functions [''m'']→[''n''] that map respectively the row numbers of ''A'' to the column number of their nonzero entry, and the column numbers of ''B'' to the row number of their nonzero entry.
 
For step 1, observe that for each row of ''A'' or column of ''B'', and for each ''m''-combination ''S'', the values of det(''AB'') and det(''A''<sub>[''m''],''S''</sub>)det(''B''<sub>''S'',[''m'']</sub>) indeed depend linearly on the row or column. For the latter this is immediate from the multilinear property of the determinant; for the former one must in addition check that taking a linear combination for the row of ''A'' or column of ''B'' while leaving the rest unchanged only affects the corresponding row or column of the product ''AB'', and by the same linear combination. Thus one can work out both sides of the Cauchy−Binet formula by linearity for every row of ''A'' and then also every column of ''B'', writing each of the rows and columns as a linear combination of standard basis vectors. The resulting multiple summations are huge, but they have the same form for both sides: corresponding terms involve the same scalar factor (each is a product of entries of ''A'' and of ''B''), and these terms only differ by involving two different expressions in terms of constant matrices of the kind described above, which expressions should be equal according to the Cauchy−Binet formula. This achieves the reduction of the first step.
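
As an illustration of the linearity used here (a sketch with randomly chosen matrices, assuming Python with NumPy; not part of the original argument), one can check that replacing a row of ''A'' by a linear combination splits det(''AB'') accordingly:

<syntaxhighlight lang="python">
# det(AB) is linear in each row of A: replacing row 0 by u + c*v splits the
# determinant into det(A_u B) + c*det(A_v B), since only row 0 of AB changes.
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A = rng.integers(-3, 4, size=(m, n)).astype(float)
B = rng.integers(-3, 4, size=(n, m)).astype(float)
u, v, c = rng.standard_normal(n), rng.standard_normal(n), 1.5

A_comb, A_u, A_v = A.copy(), A.copy(), A.copy()
A_comb[0], A_u[0], A_v[0] = u + c * v, u, v

lhs = np.linalg.det(A_comb @ B)
rhs = np.linalg.det(A_u @ B) + c * np.linalg.det(A_v @ B)
print(np.isclose(lhs, rhs))   # True
</syntaxhighlight>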
 
Concretely, the multiple summations can be grouped into two summations, one over all functions ''f'':[''m'']&nbsp;→&nbsp;[''n''] that for each row index of ''A'' gives a corresponding column index, and one over all functions ''g'':[''m'']&nbsp;→&nbsp;[''n''] that for each column index of ''B'' gives a corresponding row index. The matrices associated to ''f'' and ''g'' are
 
:<math>L_f=\bigl((\delta_{f(i),j})_{i\in[m],j\in[n]}\bigr) \quad\text{and}\quad R_g=\bigl((\delta_{j,g(k)})_{j\in[n],k\in[m]}\bigr)</math>
 
where "<math>\delta</math>" is the [[Kronecker delta]], and the Cauchy−Binet formula to prove has been rewritten as
 
:<math>\sum_{f:[m]\to[n]}\sum_{g:[m]\to[n]}p(f,g)\det(L_fR_g)
=\sum_{f:[m]\to[n]}\sum_{g:[m]\to[n]}p(f,g)\sum_{S\in\tbinom{[n]}m}\det((L_f)_{[m],S})\det((R_g)_{S,[m]}),</math>
 
where ''p''(''f'',''g'') denotes the scalar factor <math>\textstyle(\prod_{i=1}^mA_{i,f(i)})(\prod_{k=1}^mB_{g(k),k})</math>. It remains to prove the Cauchy−Binet formula for ''A''&nbsp;=&nbsp;''L''<sub>''f''</sub> and ''B''&nbsp;=&nbsp;''R''<sub>''g''</sub>, for all ''f'',''g'':[''m'']&nbsp;→&nbsp;[''n''].
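
For a concrete feel for these matrices, the following sketch (illustrative, assuming Python with NumPy; indices are 0-based rather than the 1-based [''m''], [''n''] of the text, and the helper functions are ad hoc) builds ''L''<sub>''f''</sub> and ''R''<sub>''g''</sub> for one choice of ''f'' and ''g'' and checks the identity that remains to be proved:

<syntaxhighlight lang="python">
# Build L_f and R_g for maps f, g : [m] -> [n] and check Cauchy-Binet for them.
import numpy as np
from itertools import combinations

def L(f, m, n):
    """(L_f)[i, j] = 1 exactly when j = f(i), else 0."""
    M = np.zeros((m, n))
    for i in range(m):
        M[i, f[i]] = 1.0
    return M

def R(g, m, n):
    """(R_g)[j, k] = 1 exactly when j = g(k), else 0."""
    M = np.zeros((n, m))
    for k in range(m):
        M[g[k], k] = 1.0
    return M

m, n = 2, 3
f, g = (2, 0), (0, 2)          # injective maps with the same image {0, 2}
Lf, Rg = L(f, m, n), R(g, m, n)

lhs = np.linalg.det(Lf @ Rg)
rhs = sum(np.linalg.det(Lf[:, list(S)]) * np.linalg.det(Rg[list(S), :])
          for S in combinations(range(n), m))
print(round(lhs), round(rhs))  # -1 -1: only S = {0, 2} contributes
</syntaxhighlight>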
 
For step 2, if ''f'' fails to be injective then ''L''<sub>''f''</sub> and ''L''<sub>''f''</sub>''R''<sub>''g''</sub> both have two identical rows, and if ''g'' fails to be injective then ''R''<sub>''g''</sub> and ''L''<sub>''f''</sub>''R''<sub>''g''</sub> both have two identical columns; in either case both sides of the identity are zero. Supposing now that both ''f'' and ''g'' are injective maps [''m'']&nbsp;→&nbsp;[''n''], the factor <math>\det((L_f)_{[m],S})</math> on the right is zero unless ''S''&nbsp;=&nbsp;''f''([''m'']), while the factor <math>\det((R_g)_{S,[m]})</math> is zero unless ''S''&nbsp;=&nbsp;''g''([''m'']). So
if the images of ''f'' and ''g'' are different, the right hand side has only null terms, and the left hand side is zero as well since ''L''<sub>''f''</sub>''R''<sub>''g''</sub> has a null row (for ''i'' with <math>f(i)\notin g([m])</math>). In the remaining case where the images of ''f'' and ''g'' are the same, say ''f''([''m''])&nbsp;=&nbsp;''S''&nbsp;=&nbsp;''g''([''m'']), we need to prove that
 
:<math>\det(L_fR_g)=\det((L_f)_{[m],S})\det((R_g)_{S,[m]}).\,</math>
 
Let ''h'' be the unique increasing bijection [''m'']&nbsp;→&nbsp;''S'', and ''π'',''σ'' the permutations of [''m''] such that <math>f=h\circ\pi^{-1}</math> and <math>g=h\circ\sigma</math>; then <math>(L_f)_{[m],S}</math> is the [[permutation matrix]] for ''π'', <math>(R_g)_{S,[m]}</math> is the permutation matrix for ''σ'', and ''L''<sub>''f''</sub>''R''<sub>''g''</sub> is the permutation matrix for <math>\pi\circ\sigma</math>, and since the determinant of a permutation matrix equals the [[signature (permutation)|signature]] of the permutation, the identity follows from the fact that signatures are multiplicative.
 
Using multi-linearity with respect to both the rows of ''A'' and the columns of ''B'' in the proof is not necessary; one could use just one of them, say the former, and use that a matrix product ''L''<sub>''f''</sub>''B'' either consists of a permutation of the rows of ''B''<sub>''f''([''m'']),[''m'']</sub> (if ''f'' is injective), or has at least two equal rows.
 
== Relation to the generalized Kronecker delta ==
As we have seen, the Cauchy–Binet formula is equivalent to the following:
:<math>
    \det(L_fR_g)=\sum_{S\in\tbinom{[n]}m} \det((L_f)_{[m],S})\det((R_g)_{S,[m]}),
</math>
where
:<math>   
L_f=\bigl((\delta_{f(i),j})_{i\in[m],j\in[n]}\bigr) \quad\text{and} \quad R_g=\bigl((\delta_{j,g(k)})_{j\in[n],k\in[m]}\bigr).  
</math>
 
In terms of the [[Kronecker delta#Generalizations of the Kronecker delta|generalized Kronecker delta]], one can derive a formula equivalent to the Cauchy–Binet formula:
:<math>
\delta^{f(1) \dots f(m)}_{g(1) \dots g(m)} = \sum_{k:[m]\to[n] \atop k(1)<\dots<k(m)}
\delta^{f(1) \dots f(m)}_{k(1) \dots k(m)}
\delta^{k(1) \dots k(m)}_{g(1) \dots g(m)}.
</math>
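
To see the correspondence concretely, the sketch below (illustrative, assuming Python with NumPy, 0-based indices, and the determinant definition of the generalized Kronecker delta; the helper <code>gen_delta</code> is ad hoc) checks this identity for one choice of ''f'' and ''g'':

<syntaxhighlight lang="python">
# Check the generalized Kronecker delta identity for one pair of index maps,
# computing each delta as det of the matrix of ordinary Kronecker deltas.
import numpy as np
from itertools import combinations

def gen_delta(upper, lower):
    """Generalized Kronecker delta, as det((delta_{upper[i], lower[j]})_{i,j})."""
    M = np.array([[1.0 if u == l else 0.0 for l in lower] for u in upper])
    return round(np.linalg.det(M))

m, n = 2, 4
f, g = (3, 1), (1, 3)
lhs = gen_delta(f, g)
rhs = sum(gen_delta(f, k) * gen_delta(k, g)
          for k in combinations(range(n), m))   # k(1) < ... < k(m)
print(lhs, rhs)   # -1 -1 for this choice of f and g
</syntaxhighlight>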
 
== Geometric interpretations ==
 
If ''A'' is a real ''m''&times;''n'' matrix, then det(''A''&nbsp;''A''<sup>T</sup>) is equal to the square of the ''m''-dimensional volume of the [[parallelotope]] spanned in '''R'''<sup>''n''</sup> by the ''m'' rows of ''A''. The Cauchy–Binet formula states that this is equal to the sum of the squares of the volumes that arise if the parallelotope is orthogonally projected onto the ''m''-dimensional coordinate planes (of which there are <math>\tbinom nm</math>).
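
A small numerical illustration (a sketch assuming Python with NumPy, not part of the original text) for ''m''&nbsp;=&nbsp;2, ''n''&nbsp;=&nbsp;3: det(''A''&nbsp;''A''<sup>T</sup>) equals the sum of the squares of the three 2×2 minors, i.e. of the areas of the projections onto the coordinate planes.

<syntaxhighlight lang="python">
# Squared area of the parallelogram spanned by the rows of A versus the sum of
# the squared areas of its projections onto the three coordinate planes.
import numpy as np
from itertools import combinations

A = np.array([[1., 2., 0.],
              [0., 1., 3.]])
m, n = A.shape

gram = np.linalg.det(A @ A.T)                      # det(A A^T)
proj = sum(np.linalg.det(A[:, list(S)]) ** 2
           for S in combinations(range(n), m))     # sum of squared 2x2 minors
print(round(gram, 6), round(proj, 6))              # 46.0 46.0
</syntaxhighlight>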
 
In the case ''m''&nbsp;=&nbsp;1 the parallelotope is reduced to a single vector and its volume is its length. The above statement then says that the square of the length of a vector is the sum of the squares of its coordinates; this is indeed the case by [[Euclidean distance|the definition]] of that length, which is based on the [[Pythagorean theorem]].
 
== Generalization ==
 
The Cauchy–Binet formula can be extended in a straightforward way to a general formula for the [[minor (linear algebra)|minors]] of the product of two matrices. That formula is given in the article on [[minor (linear algebra)|minors]].
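
As a sketch of that more general statement (illustrative, assuming Python with NumPy; index sets are 0-based): a minor of ''AB'' with row set ''I'' and column set ''J'' expands as a sum over column subsets ''S'' of ''A'' of products of the corresponding minors of ''A'' and ''B''.

<syntaxhighlight lang="python">
# A minor of AB (row set I, column set J) as a sum over column subsets S of A.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
A = rng.integers(-2, 3, size=(3, 5)).astype(float)
B = rng.integers(-2, 3, size=(5, 3)).astype(float)

I, J = [0, 2], [1, 2]                       # a 2x2 minor of the 3x3 product AB
k = len(I)

lhs = np.linalg.det((A @ B)[np.ix_(I, J)])
rhs = sum(np.linalg.det(A[np.ix_(I, list(S))]) * np.linalg.det(B[np.ix_(list(S), J)])
          for S in combinations(range(A.shape[1]), k))
print(np.isclose(lhs, rhs))                 # True
</syntaxhighlight>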
 
==References==
* Joel G. Broida & S. Gill Williamson (1989) ''A Comprehensive Introduction to Linear Algebra'', §4.6 Cauchy–Binet theorem, pp.&nbsp;208–214, [[Addison-Wesley]] ISBN 0-201-50065-5.
* Jin Ho Kwak & Sungpyo Hong (2004) ''Linear Algebra'', 2nd edition, Example 2.15 Binet–Cauchy formula, pp.&nbsp;66–67, [[Birkhäuser]] ISBN 0-8176-4294-3.
* [[Igor Shafarevich|I. R. Shafarevich]] & A. O. Remizov (2012) ''Linear Algebra and Geometry'', §2.9 (p.&nbsp;68) & §10.5 (p.&nbsp;377),  [[Springer Science+Business Media|Springer]] ISBN 978-3-642-30993-9.
 
==External links==
* Aaron Lauve (2004) [http://www.lacim.uqam.ca/~lauve/courses/su2005-550/BS3.pdf A short combinatoric proof of Cauchy–Binet formula] from [[Université du Québec à Montréal]].
 
{{DEFAULTSORT:Cauchy-Binet formula}}
[[Category:Determinants]]
