Strassen algorithm

{{distinguish2|the [[Schönhage–Strassen algorithm]] for multiplication of large integers}}


In the [[mathematics|mathematical]] discipline of [[linear algebra]], '''the Strassen algorithm''', named after [[Volker Strassen]], is an [[algorithm]] used for [[matrix multiplication]]. It is faster than the standard matrix multiplication algorithm and is useful in practice for large matrices, but would be slower than [[Coppersmith–Winograd algorithm|the fastest known algorithm]] for extremely large matrices.


== History ==
[[Volker Strassen]] published the Strassen algorithm in 1969. Although his algorithm is only slightly faster than the standard algorithm for matrix multiplication, he was the first to point out that the standard approach is not optimal. His paper started the search for even faster algorithms such as the more complex [[Coppersmith–Winograd algorithm]] published in 1987.
 
== Algorithm ==
[[Image:Strassen algorithm.svg|thumb|800px|right|The left column represents 2&times;2 [[matrix multiplication]]; naïve matrix multiplication requires one multiplication for each "1" in it. Each of the other columns represents one of the 7 multiplications in the algorithm, and the sum of those columns gives the full matrix multiplication on the left.]]
 
Let ''A'', ''B'' be two [[square matrix|square matrices]] over a [[ring (mathematics)|ring]] ''R''. We want to calculate the matrix product ''C'' as
 
:<math>\mathbf{C} = \mathbf{A} \mathbf{B} \qquad \mathbf{A},\mathbf{B},\mathbf{C} \in R^{2^n \times 2^n}</math>
 
If the matrices ''A'', ''B'' are not of type 2<sup>''n''</sup>&nbsp;&times;&nbsp;2<sup>''n''</sup>, we fill the missing rows and columns with zeros.
 
We partition ''A'', ''B'' and ''C'' into equally sized [[block matrix|block matrices]]
:<math>
\mathbf{A} =
\begin{bmatrix}
\mathbf{A}_{1,1} & \mathbf{A}_{1,2} \\
\mathbf{A}_{2,1} & \mathbf{A}_{2,2}
\end{bmatrix}
\mbox { , }
\mathbf{B} =
\begin{bmatrix}
\mathbf{B}_{1,1} & \mathbf{B}_{1,2} \\
\mathbf{B}_{2,1} & \mathbf{B}_{2,2}
\end{bmatrix}
\mbox { , }
\mathbf{C} =
\begin{bmatrix}
\mathbf{C}_{1,1} & \mathbf{C}_{1,2} \\
\mathbf{C}_{2,1} & \mathbf{C}_{2,2}
\end{bmatrix}
</math>
 
with
 
:<math>\mathbf{A}_{i,j}, \mathbf{B}_{i,j}, \mathbf{C}_{i,j} \in R^{2^{n-1} \times 2^{n-1}}</math>
 
then
 
:<math>\mathbf{C}_{1,1} = \mathbf{A}_{1,1} \mathbf{B}_{1,1} + \mathbf{A}_{1,2} \mathbf{B}_{2,1} </math>
:<math>\mathbf{C}_{1,2} = \mathbf{A}_{1,1} \mathbf{B}_{1,2} + \mathbf{A}_{1,2} \mathbf{B}_{2,2} </math>
:<math>\mathbf{C}_{2,1} = \mathbf{A}_{2,1} \mathbf{B}_{1,1} + \mathbf{A}_{2,2} \mathbf{B}_{2,1} </math>
:<math>\mathbf{C}_{2,2} = \mathbf{A}_{2,1} \mathbf{B}_{1,2} + \mathbf{A}_{2,2} \mathbf{B}_{2,2} </math>
 
With this construction we have not reduced the number of multiplications: we still need 8 multiplications to calculate the ''C''<sub>''i'',''j''</sub> matrices, the same number needed by standard matrix multiplication.
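In code, these four formulas are just the ordinary product evaluated block-wise. A minimal sketch in Python with NumPy (the function name <code>block_multiply</code> is illustrative) makes the eight block multiplications explicit:

<syntaxhighlight lang="python">
import numpy as np

def block_multiply(A, B):
    """2x2 block product of two equally sized square matrices:
    eight block multiplications, as in the four formulas above."""
    k = A.shape[0] // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    return np.block([
        [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
        [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
    ])
</syntaxhighlight>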
 
Now comes the important part. We define new matrices
 
:<math>\mathbf{M}_{1} := (\mathbf{A}_{1,1} + \mathbf{A}_{2,2}) (\mathbf{B}_{1,1} + \mathbf{B}_{2,2})</math>
:<math>\mathbf{M}_{2} := (\mathbf{A}_{2,1} + \mathbf{A}_{2,2}) \mathbf{B}_{1,1}</math>
:<math>\mathbf{M}_{3} := \mathbf{A}_{1,1} (\mathbf{B}_{1,2} - \mathbf{B}_{2,2})</math>
:<math>\mathbf{M}_{4} := \mathbf{A}_{2,2} (\mathbf{B}_{2,1} - \mathbf{B}_{1,1})</math>
:<math>\mathbf{M}_{5} := (\mathbf{A}_{1,1} + \mathbf{A}_{1,2}) \mathbf{B}_{2,2}</math>
:<math>\mathbf{M}_{6} := (\mathbf{A}_{2,1} - \mathbf{A}_{1,1}) (\mathbf{B}_{1,1} + \mathbf{B}_{1,2})</math>
:<math>\mathbf{M}_{7} := (\mathbf{A}_{1,2} - \mathbf{A}_{2,2}) (\mathbf{B}_{2,1} + \mathbf{B}_{2,2})</math>
 
using only 7 multiplications (one for each ''M''<sub>''k''</sub>) instead of 8. We may now express the ''C''<sub>''i'',''j''</sub> in terms of the ''M''<sub>''k''</sub>:
 
:<math>\mathbf{C}_{1,1} = \mathbf{M}_{1} + \mathbf{M}_{4} - \mathbf{M}_{5} + \mathbf{M}_{7}</math>
:<math>\mathbf{C}_{1,2} = \mathbf{M}_{3} + \mathbf{M}_{5}</math>
:<math>\mathbf{C}_{2,1} = \mathbf{M}_{2} + \mathbf{M}_{4}</math>
:<math>\mathbf{C}_{2,2} = \mathbf{M}_{1} - \mathbf{M}_{2} + \mathbf{M}_{3} + \mathbf{M}_{6}</math>
 
We iterate this division process ''n'' times (recursively) until the [[submatrices]] degenerate into numbers (elements of the ring ''R''). The resulting product will be padded with zeroes just like ''A'' and ''B'', and should be stripped of the corresponding rows and columns.
 
Practical implementations of Strassen's algorithm switch to standard methods of matrix multiplication for small enough submatrices, for which those algorithms are more efficient. The particular crossover point for which Strassen's algorithm is more efficient depends on the specific implementation and hardware. Earlier authors had estimated that Strassen's algorithm is faster for matrices with widths from 32 to 128 for optimized implementations.<ref>{{Citation | last1=Skiena | first1=Steven S. | title=The Algorithm Design Manual | publisher=[[Springer-Verlag]] | location=Berlin, New York | isbn=978-0-387-94860-7 | year=1998 | chapter=§8.2.3 Matrix multiplication}}.</ref> However, it has been observed that this crossover point has been increasing in recent years, and a 2010 study found that even a single step of Strassen's algorithm is often not beneficial on current architectures, compared to a highly optimized traditional multiplication, until matrix sizes exceed 1000 or more, and even for matrix sizes of several thousand the benefit is typically marginal at best (around 10% or less).<ref>P. D'Alberto and A. Nicolau, "[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.84.1894 Using recursion to boost ATLAS's performance]," ''Lecture Notes in Computer Science'', vol. 4759, pp. 142-151 (2010).</ref>
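The recursion described above translates directly into code. The following is a minimal Python sketch using NumPy (the function name <code>strassen</code> and the crossover value of 64 are illustrative choices, not part of the algorithm itself); it assumes the inputs have already been padded to a square power-of-two size as described above:

<syntaxhighlight lang="python">
import numpy as np

def strassen(A, B, threshold=64):
    """Multiply two 2^n x 2^n matrices by Strassen's recursion.

    Below the (illustrative) crossover ``threshold`` the standard
    algorithm is used instead, as practical implementations do.
    """
    n = A.shape[0]
    if n <= threshold:
        return A @ B  # standard multiplication on small blocks

    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]

    # The seven products M1, ..., M7 defined in the text
    M1 = strassen(A11 + A22, B11 + B22, threshold)
    M2 = strassen(A21 + A22, B11, threshold)
    M3 = strassen(A11, B12 - B22, threshold)
    M4 = strassen(A22, B21 - B11, threshold)
    M5 = strassen(A11 + A12, B22, threshold)
    M6 = strassen(A21 - A11, B11 + B12, threshold)
    M7 = strassen(A12 - A22, B21 + B22, threshold)

    # Reassemble C from the M_k exactly as in the four formulas above
    return np.block([
        [M1 + M4 - M5 + M7, M3 + M5],
        [M2 + M4,           M1 - M2 + M3 + M6],
    ])

# Quick check against the standard product
A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)
</syntaxhighlight>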
 
== Asymptotic complexity ==
 
The standard matrix multiplication takes approximately 2''N''<sup>3</sup> (where
''N''&nbsp;=&nbsp;2<sup>''n''</sup>) arithmetic operations (additions and multiplications); the asymptotic complexity is O(''N''<sup>3</sup>).
 
The number of additions and multiplications required in the Strassen algorithm can be calculated as follows: let ''f''(''n'') be the number of operations for a 2<sup>''n''</sup>&nbsp;&times;&nbsp;2<sup>''n''</sup> matrix. Then by recursive application of the Strassen algorithm, we see that ''f''(''n'')&nbsp;=&nbsp;7''f''(''n''&minus;1)&nbsp;+&nbsp;''l''&middot;4<sup>''n''</sup>, for some constant ''l'' that depends on the number of additions performed at each application of the algorithm. Hence ''f''(''n'')&nbsp;=&nbsp;(7&nbsp;+&nbsp;o(1))<sup>''n''</sup>, i.e., the asymptotic complexity for multiplying matrices of size ''N''&nbsp;=&nbsp;2<sup>''n''</sup> using the Strassen algorithm is
 
:<math>O([7+o(1)]^n) = O(N^{\log_{2}7+o(1)}) \approx O(N^{2.8074})</math>.
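Equivalently, in terms of the matrix size ''N'', each step replaces one multiplication of two ''N''&nbsp;&times;&nbsp;''N'' matrices by seven multiplications of ''N''/2&nbsp;&times;&nbsp;''N''/2 matrices together with O(''N''<sup>2</sup>) work for the block additions, giving the recurrence

:<math>T(N) = 7\,T(N/2) + O(N^2),</math>

whose solution by the [[Master theorem (analysis of algorithms)|master theorem]] is <math>T(N) = O(N^{\log_2 7})</math>: since <math>\log_2 7 \approx 2.8074 > 2</math>, the seven recursive multiplications dominate the cost of the block additions.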
 
The reduction in the number of arithmetic operations however comes at the price of a somewhat reduced [[numerical stability]],<ref>{{cite journal|last=Miller|first=Webb|title=Computational complexity and numerical stability|journal=SIAM J. Comput.|year=1975|pages=97–107}}</ref> and the algorithm also requires significantly more memory compared to the naïve algorithm. Both initial matrices must have their dimensions expanded to the next power of 2, which results in storing up to four times as many elements, and the seven auxiliary matrices each contain a quarter of the elements in the expanded ones.
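The padding overhead can be illustrated with a small calculation (the dimension 520 and the helper name below are arbitrary examples, not taken from the references):

<syntaxhighlight lang="python">
import math

def next_power_of_two(n):
    """Smallest power of 2 greater than or equal to n."""
    return 1 << math.ceil(math.log2(n))

n = 520                      # an arbitrary example dimension
N = next_power_of_two(n)     # 1024
print(N, (N * N) / (n * n))  # 1024, ~3.88: close to the worst-case factor of 4
</syntaxhighlight>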
 
=== Rank or bilinear complexity ===
The bilinear complexity or '''rank''' of a [[bilinear map]] is an important concept in the asymptotic complexity of matrix multiplication.  The rank of a bilinear map <math>\phi:\mathbf A \times \mathbf B \rightarrow \mathbf C</math> over a field '''F''' is defined as (somewhat of an [[abuse of notation]])
:<math>R(\phi/\mathbf F) = \min \left\{r\left|\exists f_i\in \mathbf A^*,g_i\in\mathbf B^*,w_i\in\mathbf C , \forall \mathbf a\in\mathbf A, \mathbf b\in\mathbf B, \phi(\mathbf a,\mathbf b) = \sum_{i=1}^r f_i(\mathbf a)g_i(\mathbf b)w_i \right.\right\}</math>
In other words, the rank of a bilinear map is the length of its shortest bilinear computation.<ref>Burgisser, Clausen, and Shokrollahi.  ''Algebraic Complexity Theory.'' Springer-Verlag 1997.</ref> The existence of Strassen's algorithm shows that the rank of 2&times;2 matrix multiplication is no more than seven.  To see this, let us express this algorithm (alongside the standard algorithm) as such a bilinear computation. In the case of matrices, the [[dual space]]s '''A'''* and '''B'''* consist of maps into the field '''F''' induced by a scalar '''[[Dot_product|double-dot product]]''', (i.e. in this case the sum of all the entries of a [[Hadamard product (matrices)|Hadamard product]].)
{| class = "wikitable"
| || colspan="3" | Standard algorithm || ||colspan="3" | Strassen algorithm
|-
|| ''i'' || ''f<sub>i</sub>''('''a''') || ''g<sub>i</sub>''('''b''') || ''w<sub>i</sub>'' || || ''f<sub>i</sub>''('''a''') || ''g<sub>i</sub>''('''b''') || ''w<sub>i</sub>''
|-
|| 1
||<math>\begin{bmatrix}1&0\\0&0\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}1&0\\0&0\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}1&0\\0&0\end{bmatrix}</math>
||
||<math>\begin{bmatrix}1&0\\0&1\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}1&0\\0&1\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}1&0\\0&1\end{bmatrix}</math>
|-
|| 2
||<math>\begin{bmatrix}0&1\\0&0\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}0&0\\1&0\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}1&0\\0&0\end{bmatrix}</math>
||
||<math>\begin{bmatrix}0&0\\1&1\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}1&0\\0&0\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}0&0\\1&-1\end{bmatrix}</math>
|-
|| 3
||<math>\begin{bmatrix}1&0\\0&0\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}0&1\\0&0\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}0&1\\0&0\end{bmatrix}</math>
||
||<math>\begin{bmatrix}1&0\\0&0\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}0&1\\0&-1\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}0&1\\0&1\end{bmatrix}</math>
|-
|| 4
||<math>\begin{bmatrix}0&1\\0&0\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}0&0\\0&1\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}0&1\\0&0\end{bmatrix}</math>
||
||<math>\begin{bmatrix}0&0\\0&1\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}-1&0\\1&0\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}1&0\\1&0\end{bmatrix}</math>
|-
|| 5
||<math>\begin{bmatrix}0&0\\1&0\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}1&0\\0&0\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}0&0\\1&0\end{bmatrix}</math>
||
||<math>\begin{bmatrix}1&1\\0&0\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}0&0\\0&1\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}-1&1\\0&0\end{bmatrix}</math>
|-
|| 6
||<math>\begin{bmatrix}0&0\\0&1\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}0&0\\1&0\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}0&0\\1&0\end{bmatrix}</math>
||
||<math>\begin{bmatrix}-1&0\\1&0\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}1&1\\0&0\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}0&0\\0&1\end{bmatrix}</math>
|-
|| 7
||<math>\begin{bmatrix}0&0\\1&0\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}0&1\\0&0\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}0&0\\0&1\end{bmatrix}</math>
||
||<math>\begin{bmatrix}0&1\\0&-1\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}0&0\\1&1\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}1&0\\0&0\end{bmatrix}</math>
|-
|| 8
||<math>\begin{bmatrix}0&0\\0&1\end{bmatrix}:\mathbf a</math>
||<math>\begin{bmatrix}0&0\\0&1\end{bmatrix}:\mathbf b</math>
||<math>\begin{bmatrix}0&0\\0&1\end{bmatrix}</math>
||
|colspan="3"|
|-
||
|colspan="3"|<math>\mathbf a\mathbf b = \sum_{i=1}^8 f_i(\mathbf a)g_i(\mathbf b)w_i</math>
||
|colspan="3"|<math>\mathbf a\mathbf b = \sum_{i=1}^7 f_i(\mathbf a)g_i(\mathbf b)w_i</math>
|}
It can be shown that the total number of elementary multiplications ''L'' required for matrix multiplication is tightly asymptotically bound to the rank ''R'', i.e. <math>L = \Theta(R)</math>, or more specifically, since the constants are known, <math>\frac 1 2 R\le L\le R.</math>  One useful property of the rank is that it is submultiplicative for [[tensor product]]s, and this enables one to show that 2<sup>''n''</sup>&times;2<sup>''n''</sup>&times;2<sup>''n''</sup> matrix multiplication can be accomplished with no more than 7<sup>''n''</sup> elementary multiplications for any ''n''.  (This ''n''-fold tensor product of the 2&times;2&times;2 matrix multiplication map with itself&mdash;an ''n''th tensor power&mdash;is realized by the recursive step in the algorithm shown.)
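The triples in the Strassen column of the table can also be checked numerically. The following short Python sketch (NumPy assumed; the helper name <code>ddot</code> is introduced here for the double-dot product) verifies that the seven terms reproduce the full 2&times;2 product:

<syntaxhighlight lang="python">
import numpy as np

# The triples (f_i, g_i, w_i) of the Strassen column in the table above
F = [np.array(m) for m in ([[1, 0], [0, 1]], [[0, 0], [1, 1]], [[1, 0], [0, 0]],
                           [[0, 0], [0, 1]], [[1, 1], [0, 0]], [[-1, 0], [1, 0]],
                           [[0, 1], [0, -1]])]
G = [np.array(m) for m in ([[1, 0], [0, 1]], [[1, 0], [0, 0]], [[0, 1], [0, -1]],
                           [[-1, 0], [1, 0]], [[0, 0], [0, 1]], [[1, 1], [0, 0]],
                           [[0, 0], [1, 1]])]
W = [np.array(m) for m in ([[1, 0], [0, 1]], [[0, 0], [1, -1]], [[0, 1], [0, 1]],
                           [[1, 0], [1, 0]], [[-1, 1], [0, 0]], [[0, 0], [0, 1]],
                           [[1, 0], [0, 0]])]

def ddot(X, Y):
    """Scalar double-dot product: the sum of the entries of the Hadamard product."""
    return np.sum(X * Y)

a = np.random.rand(2, 2)
b = np.random.rand(2, 2)
c = sum(ddot(F[i], a) * ddot(G[i], b) * W[i] for i in range(7))
assert np.allclose(c, a @ b)  # the 7-term bilinear computation equals the product ab
</syntaxhighlight>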
 
== See also ==
* [[Computational complexity of mathematical operations]]
* [[Gauss–Jordan elimination]]
* [[Coppersmith–Winograd algorithm]]
* [[Z-order (curve)|Z-order matrix representation]]
* [[Karatsuba algorithm]], for multiplying ''n''-digit integers in <math>O(n^{\log_2 3})</math> instead of in <math>O(n^2)</math> time
* [[Multiplication_algorithm#Gauss.27s_complex_multiplication_algorithm|Gauss's complex multiplication algorithm]] multiplies two complex numbers using 3 real multiplications instead of 4
 
== References ==
<references/>
* Strassen, Volker, ''Gaussian Elimination is not Optimal'', Numer. Math. 13, pp. 354&ndash;356, 1969
* [[Thomas H. Cormen]], [[Charles E. Leiserson]], [[Ronald L. Rivest]], and [[Clifford Stein]]. ''[[Introduction to Algorithms]]'', Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 28: Section 28.2: Strassen's algorithm for matrix multiplication, pp. 735&ndash;741.
 
==External links==
*{{MathWorld|urlname=StrassenFormulas|title=Strassen's Formulas}} (also includes formulas for fast [[matrix inversion]])
*Tyler J. Earnest, ''[http://www.mc2.umbc.edu/docs/earnest.pdf Strassen's Algorithm on the Cell Broadband Engine]''
*''[http://gitorious.org/intelws2010/matrix-multiplication/blobs/master/src/matmul.c Simple Strassen's algorithm implementation in C]'' (for educational purposes)
*''[http://www.cs.huji.ac.il/~omrif01/Strassen Simple Strassen's algorithm implementation in Java]''
 
{{Numerical linear algebra}}
[[Category:Numerical linear algebra]]
