In [[arithmetic]] and [[computer programming]], the '''extended Euclidean algorithm''' is an extension of the [[Euclidean algorithm]] that computes, besides the [[greatest common divisor]] of integers ''a'' and ''b'', the coefficients of [[Bézout's identity]], that is, integers ''x'' and ''y'' such that

: <math>ax + by = \gcd(a, b).</math>

It also allows one to compute, with almost no extra cost, the quotients of ''a'' and ''b'' by their greatest common divisor.

'''[[Polynomial greatest common divisor#Bézout's identity and extended GCD algorithm|Extended Euclidean algorithm]]''' also refers to a very similar algorithm for computing the [[polynomial greatest common divisor]] and the coefficients of Bézout's identity of two [[univariate polynomial]]s.

The extended Euclidean algorithm is particularly useful when ''a'' and ''b'' are [[coprime]], since ''x'' is then the [[modular multiplicative inverse]] of ''a'' [[modular arithmetic|modulo]] ''b'', and ''y'' is the modular multiplicative inverse of ''b'' modulo ''a''. Similarly, the polynomial extended Euclidean algorithm allows one to compute [[multiplicative inverse]]s in [[algebraic field extension]]s and, in particular, in [[finite field]]s of non-prime order. It follows that both extended Euclidean algorithms are widely used in [[cryptography]]. In particular, the computation of the modular multiplicative inverse is an essential step in the [[RSA (algorithm)|RSA]] public-key encryption method.

== Description of the algorithm ==

The standard Euclidean algorithm proceeds by a succession of [[Euclidean division]]s whose quotients are not used; only the ''remainders'' are kept. For the extended algorithm, the successive quotients are used. More precisely, the standard Euclidean algorithm with ''a'' and ''b'' as input consists of computing a sequence <math>q_1,\ldots, q_k</math> of quotients and a sequence <math>r_0,\ldots, r_{k+1}</math> of remainders such that

:<math>
\begin{array}{l}
r_0=a\\
r_1=b\\
\ldots\\
r_{i+1}=r_{i-1}-q_i r_i \quad \text {and} \quad 0\le r_{i+1} < |r_i|\\
\ldots
\end{array}
</math>

It is the main property of [[Euclidean division]] that the inequalities on the right uniquely define <math>r_{i+1}</math> from <math>r_{i-1}</math> and <math>r_{i}.</math>

The computation stops when one reaches a remainder <math>r_{k+1}</math> which is zero; the greatest common divisor is then the last nonzero remainder <math>r_{k}.</math>

The extended Euclidean algorithm proceeds similarly, but adds two other sequences, defined by

:<math>
\begin{array}{l}
s_0=1\qquad s_1=0\\
t_0=0\qquad t_1=1\\
\ldots\\
s_{i+1}=s_{i-1}-q_i s_i\\
t_{i+1}=t_{i-1}-q_i t_i\\
\ldots
\end{array}
</math>

The computation also stops when <math>r_{k+1}=0</math> and gives
*<math>r_{k}</math> is the greatest common divisor of the input <math>a=r_{0}</math> and <math>b=r_{1}.</math>
* The Bézout coefficients are <math>s_{k}</math> and <math>t_{k},</math> that is, <math>\gcd(a,b)=r_{k}=as_k+bt_k.</math>
* The quotients of ''a'' and ''b'' by their greatest common divisor are given by <math>s_{k+1}=\pm\frac{b}{\gcd(a,b)}</math> and <math>t_{k+1}=\pm\frac{a}{\gcd(a,b)}.</math>

Moreover, if ''a'' and ''b'' are both positive, we have
:<math>|s_k|<\frac{b}{\gcd(a,b)}\quad \text{and} \quad |t_k|<\frac{a}{\gcd(a,b)}.</math>
This means that the pair of Bézout coefficients provided by the extended Euclidean algorithm is one of the two minimal pairs of Bézout coefficients.

=== Example ===

The following table shows how the extended Euclidean algorithm proceeds with input {{math|{{green|240}}}} and {{math|{{green|46}}}}. The greatest common divisor is the last nonzero entry, {{math|{{red|2}}}}, in the column "remainder". The computation stops at row 6, because the remainder in it is {{math|{{red|0}}}}. The Bézout coefficients appear in the last two entries of the second-to-last row. In fact, it is easy to verify that {{math|1={{magenta|-9}} × {{green|240}} + {{magenta|47}} × {{green|46}} = {{red|2}}}}. Finally, the last two entries {{math|{{cyan|23}}}} and {{math|{{cyan|-120}}}} of the last row are, up to sign, the quotients of the inputs {{math|{{green|46}}}} and {{math|{{green|240}}}} by the greatest common divisor {{math|{{red|2}}}}.

{| class="wikitable" style="text-align:center; font-weight:bold;"
! index ''i''!! {{blue|quotient ''q''<sub>''i''-1</sub> }}!! {{olive|Remainder ''r''<sub>''i''</sub>}}!! {{brown|''s''<sub>''i''</sub> }}!! ''t''<sub>''i''</sub>
|-
| 0 || ||{{green|240}}||{{brown|1}} || 0
|-
| 1 || ||{{green|46}} || {{brown|0}} || 1
|-
| 2 ||{{green|240}} ÷ {{green|46}} = {{blue|5}}
||{{green|240}} − {{blue|5}} × {{green|46}} = {{olive|10}}
||{{brown|1}} − {{blue|5}} × {{brown|0}} = {{brown|1}}
|| 0 - {{blue|5}} × 1 = -5
|-
| 3 ||{{green|46}} ÷ {{olive|10}} = {{blue|4}}
||{{green|46}} − {{blue|4}} × {{olive|10}} = {{olive|6}}
||{{brown|0}} − {{blue|4}} × {{brown|1}} = {{brown|-4}}
|| 1 - {{blue|4}} × -5 = 21
|-
| 4 ||{{olive|10}} ÷ {{olive|6}} = {{blue|1}}
||{{olive|10}} − {{blue|1}} × {{olive|6}} = {{olive|4}}
||{{brown|1}} − {{blue|1}} × {{brown|-4}} = {{brown|5}}
|| -5 - {{blue|1}} × 21 = -26
|-
| 5 ||{{olive|6}} ÷ {{olive|4}} = {{blue|1}}
||{{olive|6}} − {{blue|1}} × {{olive|4}} = {{red|2}}
||{{brown|-4}} − {{blue|1}} × {{brown|5}} = {{magenta|-9}}
|| 21 - {{blue|1}} × -26 = {{magenta|47}}
|-
| 6 ||{{olive|4}} ÷ {{olive|2}} = {{blue|2}}
||{{olive|4}} − {{blue|2}} × {{olive|2}} = {{red|0}}
||{{brown|5}} − {{blue|2}} × {{brown|-9}} = {{cyan|23}}
|| -26 - {{blue|2}} × 47 = {{cyan|-120}}
|}
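
The rows of this table can be reproduced programmatically. The following Python sketch builds the rows (''q''<sub>''i''</sub>, ''r''<sub>''i''</sub>, ''s''<sub>''i''</sub>, ''t''<sub>''i''</sub>) exactly as defined above; the function name <code>extended_gcd_table</code> is chosen here only for illustration.

 def extended_gcd_table(a, b):
     """Return the rows (q_i, r_i, s_i, t_i) of the extended Euclidean algorithm."""
     rows = [(None, a, 1, 0), (None, b, 0, 1)]   # rows 0 and 1 have no quotient
     while rows[-1][1] != 0:                     # stop when the last remainder is 0
         _, r_prev, s_prev, t_prev = rows[-2]
         _, r, s, t = rows[-1]
         q = r_prev // r
         rows.append((q, r_prev - q * r, s_prev - q * s, t_prev - q * t))
     return rows
 
 for i, (q, r, s, t) in enumerate(extended_gcd_table(240, 46)):
     print(i, q, r, s, t)    # reproduces the rows of the table above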

=== Proof ===

As <math> 0\le r_{i+1}<|r_i|, </math> the sequence of the <math> r_i </math> is a decreasing sequence of nonnegative integers (from ''i'' = 2 on). Thus it must stop with some <math> r_{k+1}=0. </math> This proves that the algorithm stops eventually.

As <math> r_{i+1}= r_{i-1} - r_i q_i,</math> the pairs <math> (r_{i-1}, r_i)</math> and <math> (r_{i}, r_{i+1})</math> have the same common divisors, and thus the same greatest common divisor. This shows that the greatest common divisor of the input <math> a=r_0, b=r_1 </math> is the same as that of <math> r_k, r_{k+1}=0. </math> This proves that <math> r_k </math> is the greatest common divisor of ''a'' and ''b''. (Until this point, the proof is the same as that of the classical Euclidean algorithm.)

As <math> a=r_0</math> and <math> b=r_1,</math> we have <math>as_i+bt_i=r_i</math> for ''i'' = 0 and 1. As the <math> r_i, </math> <math> s_i </math> and <math> t_i </math> satisfy the same linear [[recurrence relation]], an induction on ''i'' shows that the relation <math>as_i+bt_i=r_i</math> holds for every ''i'', and in particular <math>as_k+bt_k=r_k, </math> showing that <math> (s_k, t_k) </math> are Bézout coefficients.

Let us consider the matrix
:<math>A_i=\begin{pmatrix}
s_{i-1} & s_i\\
t_{i-1} & t_i
\end{pmatrix}.
</math>
The recurrence relation may be rewritten in matrix form as
:<math>A_{i+1} = A_i
\begin{pmatrix}
0 & 1\\
1 & -q_i
\end{pmatrix}.
</math>
The matrix <math> A_1</math> is the identity matrix and its determinant is one. The determinant of the rightmost matrix in the preceding formula is −1. It follows that the determinant of <math> A_i</math> is <math>(-1)^{i-1}.</math> In particular, for ''i'' = ''k'' + 1, we have <math> s_k t_{k+1}-t_k s_{k+1} = (-1)^k. </math> Viewing this as a Bézout's identity, this shows that <math> s_{k+1}</math> and <math> t_{k+1}</math> are [[coprime]]. The relation <math>as_{k+1}+bt_{k+1}=0</math> that has been proved above and [[Euclid's lemma]] show that <math> s_{k+1}</math> divides ''b'' and <math> t_{k+1}</math> divides ''a''. As they are coprime, they are, up to their sign, the quotients of ''b'' and ''a'' by their greatest common divisor.
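
These relations can be checked numerically on the example above (''a'' = 240, ''b'' = 46, so ''k'' = 5). The following lines are only a sanity check using the values taken from the table:

 # Check of the proof's claims on the example a = 240, b = 46 (here k = 5).
 a, b = 240, 46
 s_k, t_k, s_k1, t_k1 = -9, 47, 23, -120             # s_k, t_k, s_{k+1}, t_{k+1} from the table
 assert a * s_k + b * t_k == 2                       # Bézout's identity: a*s_k + b*t_k = gcd
 assert a * s_k1 + b * t_k1 == 0                     # a*s_{k+1} + b*t_{k+1} = r_{k+1} = 0
 assert s_k * t_k1 - t_k * s_k1 == (-1) ** 5         # determinant of A_{k+1} is (-1)^k
 assert abs(s_k1) == b // 2 and abs(t_k1) == a // 2  # quotients by the gcd, up to sign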

== Polynomial extended Euclidean algorithm ==
{{see also|Polynomial greatest common divisor#Bézout's identity and extended GCD algorithm}}

For [[univariate polynomial]]s with coefficients in a [[field (mathematics)|field]], everything works similarly: Euclidean division, Bézout's identity and the extended Euclidean algorithm. The first difference is that, in the Euclidean division and the algorithm, the inequality <math>0\le r_{i+1}<|r_i|</math> has to be replaced by an inequality on the degrees, <math>\deg r_{i+1}<\deg r_i.</math> Otherwise, everything which precedes in this article remains the same, simply by replacing integers by polynomials.

A second difference lies in the bound on the size of the Bézout coefficients provided by the extended Euclidean algorithm, which is more accurate in the polynomial case, leading to the following theorem.

''If a and b are two nonzero polynomials, then the extended Euclidean algorithm produces the unique pair of polynomials'' (''s'', ''t'') ''such that''
:<math>as+bt=\gcd(a,b)</math>
''and''
:<math>\deg s < \deg b - \deg (\gcd(a,b)), \quad \deg t < \deg a - \deg (\gcd(a,b)).</math>

A third difference is that, in the polynomial case, the greatest common divisor is defined only up to multiplication by a nonzero constant. There are several ways to define the greatest common divisor unambiguously.

In mathematics, it is common to require that the greatest common divisor be a [[monic polynomial]]. To get this, it suffices to divide every element of the output by the [[leading coefficient]] of <math>r_{k}.</math> This ensures that, if ''a'' and ''b'' are coprime, one gets 1 on the right-hand side of Bézout's identity; otherwise, one may get any nonzero constant. In [[computer algebra]], the polynomials commonly have integer coefficients, and this way of normalizing the greatest common divisor introduces too many fractions to be convenient.

The second way to normalize the greatest common divisor in the case of polynomials with integer coefficients is to divide every output by the [[content (algebra)|content]] of <math>r_{k},</math> to get a [[primitive polynomial (ring theory)|primitive]] greatest common divisor. If the input polynomials are coprime, this normalization also provides a greatest common divisor equal to 1. The drawback of this approach is that many fractions have to be computed and simplified during the computation.

A third approach consists in extending the algorithm of [[Polynomial greatest common divisor#Subresultant pseudo-remainder sequence|subresultant pseudo-remainder sequence]]s in a way that is similar to the extension of the Euclidean algorithm to the extended Euclidean algorithm. This ensures that, when starting with polynomials with integer coefficients, all polynomials that are computed have integer coefficients. Moreover, every computed remainder <math>r_i</math> is a [[subresultant|subresultant polynomial]]. In particular, if the input polynomials are coprime, Bézout's identity becomes
:<math>as+bt=\operatorname{Res}(a,b),</math>
where <math>\operatorname{Res}(a,b)</math> denotes the [[resultant]] of ''a'' and ''b''. In this form of Bézout's identity, there is no denominator in the formula. If one divides everything by the resultant, one gets the classical Bézout's identity, with an explicit common denominator for the rational numbers that appear in it.
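
As an illustration of the first normalization (a monic greatest common divisor), here is a minimal Python sketch of the polynomial extended Euclidean algorithm over the rational numbers. Polynomials are assumed to be given as lists of coefficients, highest degree first; the helper names (<code>poly_divmod</code>, <code>poly_ext_gcd</code>, and so on) are chosen here only for illustration.

 from fractions import Fraction
 
 def trim(p):
     """Drop leading zero coefficients; the zero polynomial is [Fraction(0)]."""
     p = list(p)
     while len(p) > 1 and p[0] == 0:
         p.pop(0)
     return p
 
 def poly_sub(a, b):
     """Difference of two polynomials (coefficients highest degree first)."""
     n = max(len(a), len(b))
     a = [Fraction(0)] * (n - len(a)) + list(a)
     b = [Fraction(0)] * (n - len(b)) + list(b)
     return trim([x - y for x, y in zip(a, b)])
 
 def poly_mul(a, b):
     """Product of two polynomials."""
     out = [Fraction(0)] * (len(a) + len(b) - 1)
     for i, x in enumerate(a):
         for j, y in enumerate(b):
             out[i + j] += x * y
     return trim(out)
 
 def poly_divmod(a, b):
     """Euclidean division a = q*b + r with deg r < deg b."""
     q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
     r = trim(a)
     while len(r) >= len(b) and r != [Fraction(0)]:
         d = len(r) - len(b)          # degree of the current quotient term
         c = r[0] / b[0]              # its coefficient
         q[len(q) - 1 - d] = c
         r = poly_sub(r, poly_mul([c] + [Fraction(0)] * d, b))
     return trim(q), r
 
 def poly_ext_gcd(a, b):
     """Return (g, s, t) with a*s + b*t = g, where g is the monic gcd of a and b."""
     zero, one = [Fraction(0)], [Fraction(1)]
     old_r, r = trim([Fraction(c) for c in a]), trim([Fraction(c) for c in b])
     old_s, s = one, zero
     old_t, t = zero, one
     while r != zero:
         quotient, remainder = poly_divmod(old_r, r)
         old_r, r = r, remainder
         old_s, s = s, poly_sub(old_s, poly_mul(quotient, s))
         old_t, t = t, poly_sub(old_t, poly_mul(quotient, t))
     lc = old_r[0]                    # divide every output by the leading coefficient
     if lc != 0:
         old_r, old_s, old_t = ([c / lc for c in p] for p in (old_r, old_s, old_t))
     return old_r, old_s, old_t
 
 # gcd(x^2 - 1, x^2 - 3x + 2) = x - 1
 g, s, t = poly_ext_gcd([1, 0, -1], [1, -3, 2])
 print(g, s, t)

In this example the Bézout coefficients ''s'' and ''t'' are the constants 1/3 and −1/3, in agreement with the degree bounds of the theorem above.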

== Pseudocode ==

To implement the algorithm that is described above, one should first remark that only the last two values of the indexed variables are needed at each step. Thus, to save memory, each indexed variable may be replaced by just two variables.

For simplicity, the following algorithm (and the other algorithms in this article) uses [[parallel assignment]]s. In a programming language which does not have this feature, the parallel assignments need to be simulated with an auxiliary variable. For example, the first one,
 (old_r, r) := (r, old_r - quotient * r)
is equivalent to
 prov := r;
 r := old_r - quotient * prov;
 old_r := prov;
and similarly for the other parallel assignments.
This leads to the following code:

 '''function''' extended_gcd(a, b)
     s := 0;    old_s := 1
     t := 1;    old_t := 0
     r := b;    old_r := a
     '''while''' r ≠ 0
         quotient := old_r '''div''' r
         (old_r, r) := (r, old_r - quotient * r)
         (old_s, s) := (s, old_s - quotient * s)
         (old_t, t) := (t, old_t - quotient * t)
     '''output''' "Bézout coefficients:", (old_s, old_t)
     '''output''' "greatest common divisor:", old_r
     '''output''' "quotients by the gcd:", (t, s)

The quotients of ''a'' and ''b'' by their greatest common divisor, which are output, may have an incorrect sign. This is easy to correct at the end of the computation, but has not been done here, to keep the code simple. Similarly, if either ''a'' or ''b'' is zero and the other is negative, the greatest common divisor that is output is negative, and all the signs of the output must be changed.
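
For concreteness, here is a direct Python transcription of the pseudocode above, given as a sketch for nonnegative integers; the same sign caveats apply to the quotients that are returned.

 def extended_gcd(a, b):
     """Extended Euclidean algorithm, following the pseudocode above."""
     old_r, r = a, b
     old_s, s = 1, 0
     old_t, t = 0, 1
     while r != 0:
         quotient = old_r // r
         old_r, r = r, old_r - quotient * r
         old_s, s = s, old_s - quotient * s
         old_t, t = t, old_t - quotient * t
     # gcd, Bézout coefficients, quotients of a and b by the gcd (up to sign)
     return old_r, (old_s, old_t), (t, s)
 
 g, (x, y), (qa, qb) = extended_gcd(240, 46)
 print(g, x, y, qa, qb)    # 2 -9 47 -120 23, as in the example above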

== Simplification of fractions ==

A fraction <math>\frac{a}{b}</math> is in canonical simplified form if {{math|''a''}} and {{math|''b''}} are [[coprime]] and {{math|''b''}} is positive. This canonical simplified form can be obtained by replacing the three '''output''' lines of the preceding pseudocode by
 '''if''' s = 0 '''then output''' "Division by zero"
 '''else if''' s = 1 '''then output''' <math>-t</math>        (''optional line, to avoid output such as'' <math>\frac{-t}{1}</math>)
 '''else if''' s > 0 '''then output''' <math>\frac{-t}{s}</math>
 '''else output''' <math>\frac{t}{-s}</math>

The proof of this algorithm relies on the fact that ''s'' and ''t'' are two coprime integers such that ''as'' + ''bt'' = 0, and thus <math>\frac{a}{b} = -\frac{t}{s}</math>. To get the canonical simplified form, it suffices to move the minus sign so as to have a positive denominator.

If ''b'' divides ''a'' evenly, the algorithm executes only one iteration, and we have ''s'' = 1 at the end of the algorithm. It is the only case where the output is an integer.
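
As a sketch of how these output lines may be used in practice, the following Python function returns the canonical numerator and denominator instead of printing them; the name <code>simplify_fraction</code> is chosen here only for illustration.

 def simplify_fraction(a, b):
     """Return (numerator, denominator) of a/b in canonical simplified form."""
     if b == 0:
         raise ZeroDivisionError("division by zero")
     old_r, r = a, b
     old_s, s = 1, 0
     old_t, t = 0, 1
     while r != 0:
         quotient = old_r // r
         old_r, r = r, old_r - quotient * r
         old_s, s = s, old_s - quotient * s
         old_t, t = t, old_t - quotient * t
     # here a*s + b*t = 0 with s and t coprime, so a/b = -t/s
     if s > 0:
         return -t, s
     return t, -s
 
 print(simplify_fraction(240, 46))    # (120, 23)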

== Computing multiplicative inverses in modular structures ==

The extended Euclidean algorithm is the basic tool for computing [[multiplicative inverse]]s in modular structures, typically the [[modular arithmetic|modular integers]] and the [[algebraic field extension]]s. An important instance of the latter case is that of the finite fields of non-prime order.

=== Modular integers ===
{{main|Modular arithmetic}}
If {{math|''n''}} is a positive integer, the ring [[Z/nZ|{{math|'''Z'''/''n'''''Z'''}}]] may be identified with the set {{math|{0, 1, ..., ''n'' − 1}{{void}}}} of the remainders of [[Euclidean division]] by {{math|''n''}}, the addition and the multiplication consisting in taking the remainder by {{math|''n''}} of the result of the addition and the multiplication of integers. An element {{math|''a''}} of {{math|'''Z'''/''n'''''Z'''}} has a multiplicative inverse (that is, it is a [[unit (ring theory)|unit]]) if and only if it is [[coprime]] to {{math|''n''}}. In particular, if {{math|''n''}} is [[prime number|prime]], {{math|''a''}} has a multiplicative inverse if it is not zero (modulo {{math|''n''}}). Thus {{math|'''Z'''/''n'''''Z'''}} is a field if and only if {{math|''n''}} is prime.

Bézout's identity asserts that {{math|''a''}} and {{math|''n''}} are coprime if and only if there exist integers {{math|''s''}} and {{math|''t''}} such that
:<math>ns+at=1.</math>
Reducing this identity modulo {{math|''n''}} gives
:<math>at \equiv 1 \pmod n.</math>
Thus {{math|''t''}}, or, more exactly, the remainder of the division of {{math|''t''}} by {{math|''n''}}, is the multiplicative inverse of {{math|''a''}} modulo {{math|''n''}}.

To adapt the extended Euclidean algorithm to this problem, one should remark that the Bézout coefficient of {{math|''n''}} is not needed, and thus does not need to be computed. Also, to get a result that is positive and less than ''n'', one may use the fact that the integer {{math|''t''}} provided by the algorithm satisfies {{math|{{!}}''t''{{!}} < ''n''}}. That is, if {{math|''t'' < 0}}, one must add {{math|''n''}} to it at the end. This results in the following pseudocode, in which the input ''n'' is an integer larger than 1.

 '''function''' inverse(a, n)
     t := 0;     newt := 1
     r := n;     newr := a
     '''while''' newr ≠ 0
         quotient := r '''div''' newr
         (t, newt) := (newt, t - quotient * newt)
         (r, newr) := (newr, r - quotient * newr)
     '''if''' r > 1 '''then return''' "a is not invertible"
     '''if''' t < 0 '''then''' t := t + n
     '''return''' t
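
A direct Python transcription of this pseudocode is given below as a sketch; the only liberties taken are reducing ''a'' modulo ''n'' at the start, and using <code>t % n</code>, which is equivalent to adding ''n'' when ''t'' is negative.

 def inverse(a, n):
     """Multiplicative inverse of a modulo n (n > 1), or an error if it does not exist."""
     t, newt = 0, 1
     r, newr = n, a % n
     while newr != 0:
         quotient = r // newr
         t, newt = newt, t - quotient * newt
         r, newr = newr, r - quotient * newr
     if r > 1:
         raise ValueError("a is not invertible modulo n")
     return t % n
 
 print(inverse(3, 10))    # 7, since 3 × 7 = 21 ≡ 1 (mod 10)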

=== Simple algebraic field extensions ===

The extended Euclidean algorithm is also the main tool for computing [[multiplicative inverse]]s in [[simple extension|simple algebraic field extensions]]. An important case, widely used in [[cryptography]] and [[coding theory]], is that of [[finite field]]s of non-prime order. In fact, if {{math|''p''}} is a prime number, and {{math|1=''q'' = ''p''<sup>''d''</sup>}}, the field of order {{math|''q''}} is a simple algebraic extension of the [[prime field]] of {{math|''p''}} elements, generated by a root of an [[irreducible polynomial]] of degree {{math|''d''}}.

A simple algebraic extension {{math|''L''}} of a field {{math|''K''}}, generated by a root of an irreducible polynomial {{math|''p''}} of degree {{math|''d''}}, may be identified with the [[quotient ring]] <math>K[X]/\langle p\rangle</math>, and its elements are in [[bijective|bijective correspondence]] with the polynomials of degree less than {{math|''d''}}. The addition in {{math|''L''}} is the addition of polynomials. The multiplication in {{math|''L''}} is the remainder of the [[Euclidean division of polynomials|Euclidean division]] by {{math|''p''}} of the product of polynomials. Thus, to complete the arithmetic in {{math|''L''}}, it remains only to define how to compute multiplicative inverses. This is done by the extended Euclidean algorithm.

The algorithm is very similar to the one provided above for computing the modular multiplicative inverse. There are two main differences: firstly, the second-to-last line is not needed, because the Bézout coefficient that is provided always has degree less than {{math|''d''}}. Secondly, the greatest common divisor which is provided, when the input polynomials are coprime, may be any nonzero element of {{math|''K''}}; this Bézout coefficient (a polynomial generally of positive degree) has thus to be multiplied by the inverse of this element of {{math|''K''}}. In the pseudocode which follows, {{math|''p''}} is a polynomial of degree greater than one, and {{math|''a''}} is a polynomial. Moreover, '''div''' is an auxiliary function that computes the quotient of the Euclidean division.

 '''function''' inverse(a, p)
     t := 0;     newt := 1
     r := p;     newr := a
     '''while''' newr ≠ 0
         quotient := r '''div''' newr
         (r, newr) := (newr, r - quotient * newr)
         (t, newt) := (newt, t - quotient * newt)
     '''if''' degree(r) > 0 '''then'''
         '''return''' "Either p is not irreducible or a is a multiple of p"
     '''return''' (1/r) * t

==== Example ====

For example, if the polynomial used to define the finite field GF(2<sup>8</sup>) is ''p'' = ''x''<sup>8</sup> + ''x''<sup>4</sup> + ''x''<sup>3</sup> + ''x'' + 1, and ''a'' = ''x''<sup>6</sup> + ''x''<sup>4</sup> + ''x'' + 1 is the element whose inverse is desired, then performing the algorithm results in the computation described in the following table. Let us recall that in fields of order 2<sup>''n''</sup>, one has -''z'' = ''z'' and ''z'' + ''z'' = 0 for every element ''z'' in the field. Note also that, 1 being the only nonzero element of GF(2), the adjustment in the last line of the pseudocode is not needed.
{| class="wikitable"
|-
! step
! quotient
! r, newr
! t, newt
|-
|
|
| ''p'' = ''x''<sup>8</sup> + ''x''<sup>4</sup> + ''x''<sup>3</sup> + ''x'' + 1
| 0
|-
|
|
| ''a'' = ''x''<sup>6</sup> + ''x''<sup>4</sup> + ''x'' + 1
| 1
|-
| align="center" | 1
| ''x''<sup>2</sup> + 1
| ''x''<sup>2</sup> = ''p'' - ''a'' (''x''<sup>2</sup> + 1)
| ''x''<sup>2</sup> + 1 = 0 - 1 × (''x''<sup>2</sup> + 1)
|-
| align="center" | 2
| ''x''<sup>4</sup> + ''x''<sup>2</sup>
| ''x'' + 1 = ''a'' - ''x''<sup>2</sup> (''x''<sup>4</sup> + ''x''<sup>2</sup>)
| ''x''<sup>6</sup> + ''x''<sup>2</sup> + 1 = 1 - (''x''<sup>4</sup> + ''x''<sup>2</sup>) (''x''<sup>2</sup> + 1)
|-
| align="center" | 3
| ''x'' + 1
| 1 = ''x''<sup>2</sup> - (''x'' + 1) (''x'' + 1)
| ''x''<sup>7</sup> + ''x''<sup>6</sup> + ''x''<sup>3</sup> + ''x'' = (''x''<sup>2</sup> + 1) - (''x'' + 1) (''x''<sup>6</sup> + ''x''<sup>2</sup> + 1)
|-
| align="center" | 4
| ''x'' + 1
| 0 = (''x'' + 1) - 1 × (''x'' + 1)
|
|}

Thus, the inverse is ''x''<sup>7</sup> + ''x''<sup>6</sup> + ''x''<sup>3</sup> + ''x'', as can be confirmed by [[finite field arithmetic|multiplying the two elements together]] and taking the remainder by {{math|''p''}} of the result.
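
The computation in this table can be reproduced with the following Python sketch, in which an element of GF(2<sup>8</sup>) is represented as an integer whose bits are the coefficients of the corresponding polynomial over GF(2); this representation, and the function names, are assumptions of the sketch rather than part of the algorithm itself. Addition and subtraction both become bitwise XOR.

 def degree(p):
     """Degree of a polynomial over GF(2) stored as a bit mask (-1 for the zero polynomial)."""
     return p.bit_length() - 1
 
 def gf2_divmod(a, b):
     """Euclidean division of binary polynomials: return (quotient, remainder)."""
     q = 0
     while degree(a) >= degree(b):
         shift = degree(a) - degree(b)
         q ^= 1 << shift
         a ^= b << shift
     return q, a
 
 def gf2_mul(a, b):
     """Carry-less product of two binary polynomials."""
     result = 0
     while b:
         if b & 1:
             result ^= a
         a <<= 1
         b >>= 1
     return result
 
 def gf2_inverse(a, p):
     """Inverse of a modulo the irreducible polynomial p (extended Euclidean algorithm)."""
     t, newt = 0, 1
     r, newr = p, a
     while newr != 0:
         quotient, remainder = gf2_divmod(r, newr)
         r, newr = newr, remainder
         t, newt = newt, t ^ gf2_mul(quotient, newt)    # subtraction is XOR over GF(2)
     if degree(r) > 0:
         raise ValueError("either p is not irreducible or a is a multiple of p")
     return t    # here r = 1, so no final scaling is needed
 
 p = 0b100011011    # x^8 + x^4 + x^3 + x + 1
 a = 0b001010011    # x^6 + x^4 + x + 1
 inv = gf2_inverse(a, p)
 print(bin(inv))                              # 0b11001010, i.e. x^7 + x^6 + x^3 + x
 print(gf2_divmod(gf2_mul(a, inv), p)[1])     # 1, confirming a × inv ≡ 1 (mod p)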

== The case of more than two numbers ==
One can handle the case of more than two numbers iteratively. First we show that <math>\gcd(a,b,c) = \gcd(\gcd(a,b),c)</math>. To prove this, let <math>d=\gcd(a,b,c)</math>. By definition of the gcd, <math>d</math> is a divisor of <math>a</math> and <math>b</math>. Thus <math>\gcd(a,b)=k d</math> for some <math>k</math>. Similarly, <math>d</math> is a divisor of <math>c</math>, so <math>c=jd</math> for some <math>j</math>. Let <math>u=\gcd(k,j)</math>. By construction of <math>u</math>, the product <math>ud</math> divides <math>a</math>, <math>b</math> and <math>c</math>; but since <math>d</math> is the greatest common divisor of these three numbers, <math>u</math> must be a [[Unit (ring theory)|unit]]. And since <math>ud=\gcd(\gcd(a,b),c)</math>, the result is proven.

So, if <math>na + mb = \gcd(a,b)</math>, then there are <math>x</math> and <math>y</math> such that <math>x\gcd(a,b) + yc = \gcd(a,b,c)</math>, so the final equation is

: <math>x(na + mb) + yc = (xn)a + (xm)b + yc = \gcd(a,b,c).</math>

To apply this to ''n'' numbers, one proceeds by induction, using

:<math>\gcd(a_1,a_2,\dots,a_n) =\gcd(a_1,\, \gcd(a_2,\, \gcd(a_3,\dots, \gcd(a_{n-1}\,,a_n)))\dots),</math>

with the equations following directly.
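
This induction translates directly into a loop. The following Python sketch (the name <code>extended_gcd_list</code> is chosen here for illustration) returns the gcd of a list of integers together with coefficients expressing it as an integer combination of them:

 def extended_gcd(a, b):
     """Return (g, x, y) with a*x + b*y = g = gcd(a, b), as in the pseudocode above."""
     old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
     while r != 0:
         q = old_r // r
         old_r, r = r, old_r - q * r
         old_s, s = s, old_s - q * s
         old_t, t = t, old_t - q * t
     return old_r, old_s, old_t
 
 def extended_gcd_list(numbers):
     """Return gcd(a_1, ..., a_n) and coefficients c_i with sum(c_i * a_i) = gcd."""
     g, coeffs = numbers[0], [1]
     for a in numbers[1:]:
         g, x, y = extended_gcd(g, a)
         coeffs = [x * c for c in coeffs] + [y]   # x*(previous combination) + y*a
     return g, coeffs
 
 g, coeffs = extended_gcd_list([12, 30, 20])
 print(g, coeffs)    # 2 [6, -3, 1], and indeed 6*12 - 3*30 + 1*20 = 2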

== See also ==
*[[Euclidean domain]]
*[[Linear congruence theorem]]

== References ==
* {{Cite book |title=[[The Art of Computer Programming]] |authorlink=Donald Knuth |first=Donald |last=Knuth |publisher=Addison-Wesley}} Volume 2, Chapter 4.
* [[Thomas H. Cormen]], [[Charles E. Leiserson]], [[Ronald L. Rivest]], and [[Clifford Stein]]. ''[[Introduction to Algorithms]]'', Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Pages 859–861 of section 31.2: Greatest common divisor.

== External links ==
{{wikibooks|Algorithm Implementation|Mathematics/Extended Euclidean algorithm|Extended Euclidean algorithm}}
* [http://mathforum.org/library/drmath/view/51675.html Source for the form of the algorithm used to determine the multiplicative inverse in GF(2^8)]

{{number theoretic algorithms}}

[[Category:Number theoretic algorithms]]
[[Category:Articles with example pseudocode]]
[[Category:Euclid]]