In [[mathematics]], '''divided differences''' is a [[recursion|recursive]] [[division (mathematics)|division]] process.

The method can be used to calculate the coefficients in the [[polynomial interpolation|interpolation polynomial]] in the [[Newton form]].

==Definition==

Given ''k+1'' data points
 
:<math>(x_0, y_0),\ldots,(x_{k}, y_{k})</math>
 
The '''forward divided differences''' are defined as:
 
:<math>[y_\nu] := y_\nu, \qquad \nu \in \{ 0,\ldots,k\}</math>
:<math>[y_\nu,\ldots,y_{\nu+j}] := \frac{[y_{\nu+1},\ldots , y_{\nu+j}] - [y_{\nu},\ldots , y_{\nu+j-1}]}{x_{\nu+j}-x_\nu}, \qquad \nu\in\{0,\ldots,k-j\},\ j\in\{1,\ldots,k\}.</math>
 
The '''backward divided differences''' are defined as:
 
:<math>[y_\nu] := y_{\nu},\qquad \nu \in \{ 0,\ldots,k\}</math>
:<math>[y_\nu,\ldots,y_{\nu-j}] := \frac{[y_\nu,\ldots , y_{\nu-j+1}] - [y_{\nu-1},\ldots , y_{\nu-j}]}{x_\nu - x_{\nu-j}}, \qquad \nu\in\{j,\ldots,k\},\ j\in\{1,\ldots,k\}.</math>
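
The recursion translates directly into code. The following is a minimal sketch in [[Haskell (programming language)|Haskell]] (the function names are illustrative, not from any library); it builds the successive rows of the divided difference table and reads off the coefficients of the [[Newton form]]:

<syntaxhighlight lang="haskell">
-- A minimal sketch of the forward recursion; names are illustrative.
-- scheme xs ys yields the rows of the divided difference table:
-- row j holds the entries [y_v, ..., y_{v+j}] for v = 0, ..., k-j.
scheme :: [Double] -> [Double] -> [[Double]]
scheme xs ys = takeWhile (not . null) (go 1 ys)
  where
    go j row = row : go (j + 1)
        (zipWith3 (\xl xr d -> d / (xr - xl))
                  xs
                  (drop j xs)                    -- denominators x_{v+j} - x_v
                  (zipWith (-) (tail row) row))  -- numerators from row j-1

-- The coefficients [y0], [y0,y1], ..., [y0,...,yk] of the Newton form
-- are the first entries of the successive rows.
newtonCoefficients :: [Double] -> [Double] -> [Double]
newtonCoefficients xs ys = map head (scheme xs ys)
</syntaxhighlight>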
 
==Notation==
 
If the data points are given as a function ''&fnof;'',
 
:<math>(x_0, f(x_0)),\ldots,(x_{k}, f(x_{k}))</math>
 
one sometimes writes
 
:<math>f[x_\nu] := f(x_{\nu}), \qquad \nu \in \{ 0,\ldots,k \}</math>
:<math>f[x_\nu,\ldots,x_{\nu+j}] := \frac{f[x_{\nu+1},\ldots , x_{\nu+j}] - f[x_\nu,\ldots , x_{\nu+j-1}]}{x_{\nu+j}-x_\nu}, \qquad \nu\in\{0,\ldots,k-j\},\ j\in\{1,\ldots,k\}.</math>
 
Several notations for the divided difference of the function ''&fnof;'' on the nodes ''x''<sub>0</sub>,&nbsp;...,&nbsp;''x''<sub>''n''</sub> are used:
 
: <math>[x_0,\ldots,x_n]f,</math>
: <math>[x_0,\ldots,x_n;f],</math>
: <math>D[x_0,\ldots,x_n]f</math>
 
etc.
 
==Example==
 
For the first few values of <math>\nu</math>, this yields
 
:<math>
\begin{align}
  \mathopen[y_0] &= y_0 \\
  \mathopen[y_0,y_1] &= \frac{y_1-y_0}{x_1-x_0} \\
  \mathopen[y_0,y_1,y_2]
&= \frac{\mathopen[y_1,y_2]-\mathopen[y_0,y_1]}{x_2-x_0}
=  \frac{\frac{y_2-y_1}{x_2-x_1}-\frac{y_1-y_0}{x_1-x_0}}{x_2-x_0}
= \frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)}-\frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)}
\\
  \mathopen[y_0,y_1,y_2,y_3] &= \frac{\mathopen[y_1,y_2,y_3]-\mathopen[y_0,y_1,y_2]}{x_3-x_0}
\end{align}
</math> <!-- the \mathopen command is there because latex otherwise thinks that [...] denotes an optional argument -->
 
To make the recursive process clearer, the divided differences can be arranged in tabular form
 
:<math>
\begin{matrix}
x_0 & y_0 = [y_0] &          &              & \\
        &      & [y_0,y_1] &              & \\
x_1 & y_1 = [y_1] &          & [y_0,y_1,y_2] & \\
        &      & [y_1,y_2] &              & [y_0,y_1,y_2,y_3]\\
x_2 & y_2 = [y_2] &          & [y_1,y_2,y_3] & \\
        &      & [y_2,y_3] &              & \\
x_3 & y_3 = [y_3] &          &              & \\
\end{matrix}
</math>
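
For instance, evaluating the <code>scheme</code> sketch from the Definition section on the nodes <math>x = (0, 1, 3)</math> with values <math>y = (1, 3, 2)</math> reproduces the columns of this table (a hypothetical GHCi session):

<syntaxhighlight lang="haskell">
-- ghci> scheme [0, 1, 3] [1, 3, 2]
-- [[1.0,3.0,2.0],[2.0,-0.5],[-0.8333333333333334]]
-- columns: the values y, then [y0,y1] = 2 and [y1,y2] = -0.5,
-- then [y0,y1,y2] = (-0.5 - 2) / (3 - 0).
</syntaxhighlight>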
 
== Properties ==
 
*[[Linear functional|Linearity]]
:: <math>(f+g)[x_0,\dots,x_n] = f[x_0,\dots,x_n] + g[x_0,\dots,x_n]</math>
:: <math>(\lambda\cdot f)[x_0,\dots,x_n] = \lambda\cdot f[x_0,\dots,x_n]</math>
 
*[[Leibniz_rule_(generalized_product_rule)|Leibniz rule]]
:: <math>(f\cdot g)[x_0,\dots,x_n] = f[x_0]\cdot g[x_0,\dots,x_n] + f[x_0,x_1]\cdot g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]\cdot g[x_n]</math>
 
*From the [[mean value theorem for divided differences]] it follows that
::<math>\lim_{(x_0,\dots,x_n)\to(\xi,\dots,\xi)} f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!}</math>
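
These rules can be checked numerically. Below is a sketch (reusing the <code>scheme</code> function from the Definition section; names are illustrative) that compares both sides of the Leibniz rule for the top-order divided difference; up to rounding, the two results agree:

<syntaxhighlight lang="haskell">
-- Compare (f.g)[x0,...,xn] with the Leibniz sum; reuses `scheme` above.
leibnizCheck :: (Double -> Double) -> (Double -> Double)
             -> [Double] -> (Double, Double)
leibnizCheck f g xs = (lhs, rhs)
  where
    dd h = scheme xs (map h xs)   -- divided difference table of h
    n    = length xs - 1
    lhs  = head (last (dd (\x -> f x * g x)))   -- (f.g)[x0,...,xn]
    rhs  = sum [ head (dd f !! r)               -- f[x0,...,xr]
                 * (dd g !! (n - r)) !! r       -- g[xr,...,xn]
               | r <- [0 .. n] ]

-- e.g. leibnizCheck sin cos [0, 0.5, 2] returns two equal numbers.
</syntaxhighlight>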
 
=== Matrix form ===
 
The divided difference scheme can be put into an upper [[triangular matrix]].
Let <math>T_f(x_0,\dots,x_n)=
\begin{pmatrix}
f[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \ldots & f[x_0,\dots,x_n] \\
0 & f[x_1] & f[x_1,x_2] & \ldots & f[x_1,\dots,x_n] \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & 0 & f[x_n]
\end{pmatrix}</math>.
 
Then the following identities hold:
* <math>T_{f+g} x = T_f x + T_g x</math>
* <math>T_{f\cdot g} x = T_f x \cdot T_g x</math>
:: This follows from the Leibniz rule. It means that multiplication of such matrices is [[commutativity|commutative]]. In summary, the matrices of divided difference schemes with respect to the same set of nodes form a [[commutative ring]].
* Since <math> T_f x </math> is a triangular matrix, its [[eigenvalue]]s are its diagonal entries <math> f(x_0), \dots, f(x_n) </math>.
* Let <math>\delta_\xi</math> be a [[Kronecker delta]]-like function, that is
:: <math>\delta_\xi(t) = \begin{cases}1 &: t=\xi , \\0 &: \mbox{else}.\end{cases}</math>
: Clearly <math>f\cdot \delta_\xi = f(\xi)\cdot \delta_\xi</math>, so <math>\delta_\xi</math> is an [[eigenfunction]] of pointwise function multiplication. In this sense <math>T_{\delta_{x_i}} x</math> is an "[[eigenmatrix]]" of <math>T_f x</math>: <math> T_f x \cdot T_{\delta_{x_i}} x = f(x_i) \cdot T_{\delta_{x_i}} x </math>. However, all columns of <math>T_{\delta_{x_i}} x</math> are multiples of each other, so the [[matrix rank]] of <math>T_{\delta_{x_i}} x</math> is 1. The matrix of all eigenvectors can therefore be composed from the <math>i</math>-th column of each <math>T_{\delta_{x_i}} x</math>. Denote this matrix of eigenvectors by <math>U x</math>. For example:
:: <math> U(x_0,x_1,x_2,x_3) = \begin{pmatrix}
1 & \frac{1}{(x_1-x_0)} & \frac{1}{(x_2-x_0)\cdot(x_2-x_1)} & \frac{1}{(x_3-x_0)\cdot(x_3-x_1)\cdot(x_3-x_2)} \\
0 & 1 & \frac{1}{(x_2-x_1)} & \frac{1}{(x_3-x_1)\cdot(x_3-x_2)} \\
0 & 0 & 1 & \frac{1}{(x_3-x_2)} \\
0 & 0 & 0 & 1
\end{pmatrix} </math>
:The [[diagonalizable matrix|diagonalization]] of <math> T_f x </math> can be written as
:: <math> U x \cdot \operatorname{diag}(f(x_0),\dots,f(x_n)) = T_f x \cdot U x </math>.
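
In code, the matrix <math>T_f x</math> can be assembled directly from the rows of the divided difference table (a sketch reusing the <code>scheme</code> function from the Definition section; the name is illustrative):

<syntaxhighlight lang="haskell">
-- Entry (r, c) of T_f with c >= r is f[x_r, ..., x_c], which sits in
-- row (c - r) at position r of the divided difference table.
tMatrix :: [Double] -> (Double -> Double) -> [[Double]]
tMatrix xs f =
  [ [ if c >= r then (rows !! (c - r)) !! r else 0
    | c <- [0 .. n] ]
  | r <- [0 .. n] ]
  where
    n    = length xs - 1
    rows = scheme xs (map f xs)
</syntaxhighlight>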
 
== Alternative definitions ==
 
=== Expanded form ===
 
<math>
\begin{align}
f[x_0] &= f(x_0) \\
f[x_0,x_1] &= \frac{f(x_0)}{(x_0-x_1)} + \frac{f(x_1)}{(x_1-x_0)} \\
f[x_0,x_1,x_2] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)} + \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)} \\
f[x_0,x_1,x_2,x_3] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)\cdot(x_0-x_3)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)\cdot(x_1-x_3)} + \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)\cdot(x_2-x_3)} +\\&\quad\quad \frac{f(x_3)}{(x_3-x_0)\cdot(x_3-x_1)\cdot(x_3-x_2)} \\
f[x_0,\dots,x_n] &=
\sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k\in\{0,\dots,n\}\setminus\{j\}} (x_j-x_k)}
\end{align}
</math>
 
With the help of the [[polynomial function]] <math>q</math> defined by <math>q(\xi) = (\xi-x_0) \cdots (\xi-x_n)</math>,
this can be written as
 
:<math>
f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{q'(x_j)}.
</math>
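
In code, the expanded form is a short self-contained sum, though it suffers from the same numerical instability for clustered nodes as the recursive form (a sketch; the name is illustrative):

<syntaxhighlight lang="haskell">
-- f[x0,...,xn] = sum_j f(x_j) / prod_{k /= j} (x_j - x_k)
divDiffExpanded :: (Double -> Double) -> [Double] -> Double
divDiffExpanded f xs =
  sum [ f xj / product [ xj - xk | (k, xk) <- zip ks xs, k /= j ]
      | (j, xj) <- zip ks xs ]
  where ks = [0 :: Int ..]    -- indices paired with the nodes
</syntaxhighlight>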
 
==== Partial fractions ====<!-- This section is linked from [[Partial fraction]] -->
 
Partial fractions can be represented using the expanded form of divided differences.
(This does not simplify computation, but is interesting in itself.)
If <math>p</math> and <math>q</math> are [[polynomial function]]s,
where <math>\mathrm{deg}\ p < \mathrm{deg}\ q</math>
and <math>q</math> is given in terms of [[linear factor]]s by <math>q(\xi) = (\xi-x_1)\cdot \dots \cdot(\xi-x_n)</math>,
then it follows from partial fraction decomposition that
:<math>\frac{p(\xi)}{q(\xi)} = \left(t\to\frac{p(t)}{\xi-t}\right)[x_1,\dots,x_n].</math>
If [[limit of a function|limits]] of the divided differences are admitted,
this connection also holds when some of the <math>x_j</math> coincide.
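
Concretely, the right-hand side can be evaluated with the <code>divDiffExpanded</code> sketch from above (names are illustrative):

<syntaxhighlight lang="haskell">
-- Evaluate p(xi)/q(xi) as the divided difference of t -> p(t)/(xi - t)
-- over the roots xs of q; p is given as an evaluation function.
partialFractionEval :: (Double -> Double) -> [Double] -> Double -> Double
partialFractionEval p xs xi = divDiffExpanded (\t -> p t / (xi - t)) xs
</syntaxhighlight>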
 
If <math>f</math> is a polynomial function of arbitrary degree
that is decomposed as <math>f(x) = p(x) + q(x)\cdot d(x)</math> by [[polynomial division]] of <math>f</math> by <math>q</math>,
then
 
: <math>\frac{p(\xi)}{q(\xi)} = \left(t\to\frac{f(t)}{\xi-t}\right)[x_1,\dots,x_n].</math>
 
=== Peano form ===
 
The divided differences can be expressed as
 
:<math>f[x_0,\ldots,x_n] = \frac{1}{n!} \int_{x_0}^{x_n} f^{(n)}(t)B_{n-1}(t) \, dt</math>
 
where <math>B_{n-1}</math> is a [[B-spline]] of degree <math>n-1</math> for the data points <math>x_0,\dots,x_n</math> and <math>f^{(n)}</math> is the <math>n</math>-th [[derivative]] of the function <math>f</math>.
 
This is called the '''Peano form''' of the divided differences and <math>B_{n-1}</math> is called the [[Peano kernel]] for the divided differences, both named after [[Giuseppe Peano]].
 
=== Taylor form ===
 
==== First order ====
 
If the nodes are clustered, the numerical computation of the divided differences is inaccurate, because two nearly vanishing quantities are divided, each of which carries a high [[relative error]] due to [[Loss_of_significance|subtraction of similar values]].
However, [[difference quotient]]s approximate the [[derivative]], and vice versa:
:<math>\frac{f(y)-f(x)}{y-x} \approx f'(x)</math> for <math>x \approx y</math>
 
This approximation can be turned into an identity whenever [[Taylor's theorem]] applies.
:<math>f(y) = f(x) + f'(x)\cdot(y-x) + f''(x)\cdot\frac{(y-x)^2}{2!} + f'''(x)\cdot\frac{(y-x)^3}{3!} + \dots </math>
:<math>\Rightarrow \frac{f(y) - f(x)}{y-x} = f'(x) + f''(x)\cdot\frac{y-x}{2!} + f'''(x)\cdot\frac{(y-x)^2}{3!} + \dots </math>
 
The odd powers of <math>y-x</math> can be eliminated by expanding the [[Taylor series]] at the midpoint between <math>x</math> and <math>y</math>:
:<math>x = m-h, y=m+h</math>, that is <math>m = \frac{x+y}{2}, h=\frac{y-x}{2}</math>
:<math>f(m+h) = f(m) + f'(m)\cdot h + f''(m)\cdot\frac{h^2}{2!} + f'''(m)\cdot\frac{h^3}{3!} + \dots </math>
:<math>f(m-h) = f(m) - f'(m)\cdot h + f''(m)\cdot\frac{h^2}{2!} - f'''(m)\cdot\frac{h^3}{3!} + \dots </math>
:<math>\frac{f(y) - f(x)}{y-x} = \frac{f(m+h) - f(m-h)}{2\cdot h} =
f'(m) + f'''(m)\cdot\frac{h^2}{3!} + \dots </math>
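
A small numerical illustration of this cancellation, as a self-contained sketch with the assumed test function <math>f = \exp</math> (so that <math>f' = f</math>): the same quotient is an <math>O(h)</math> estimate of <math>f'(x)</math> but an <math>O(h^2)</math> estimate of <math>f'(m)</math>.

<syntaxhighlight lang="haskell">
-- Read the difference quotient of exp once as an estimate of f'(x)
-- and once as an estimate of f'(m), m = (x + y)/2, and compare errors.
quotientErrors :: Double -> (Double, Double)
quotientErrors h = (oneSidedErr, centeredErr)
  where
    m = 1.0                          -- expansion point (arbitrary)
    (x, y) = (m - h, m + h)
    q = (exp y - exp x) / (y - x)    -- the difference quotient
    oneSidedErr = abs (q - exp x)    -- error as estimate of f'(x): O(h)
    centeredErr = abs (q - exp m)    -- error as estimate of f'(m): O(h^2)

-- e.g. quotientErrors 1e-2 gives roughly (2.7e-2, 4.5e-5): halving h
-- halves the first error but quarters the second.
</syntaxhighlight>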
 
==== Higher order ====
The Taylor series or any other representation with [[function series]]
can in principle be used to approximate divided differences.
Taylor series are infinite sums of [[monomial|power function]]s.
The mapping from a function <math>f</math> to a divided difference <math>f[x_0,\dots,x_n]</math> is a [[linear functional]].
We can therefore apply this functional to the individual summands of the series.
 
Denote the power functions by <math>p_n(x) = x^n.</math>
 
The ordinary Taylor series is a weighted sum of power functions: <math>f = f(0)\cdot p_0 + f'(0)\cdot p_1 + \frac{f''(0)}{2!}\cdot p_2 + \frac{f'''(0)}{3!}\cdot p_3 + \dots </math>
 
Applying the divided difference to each summand yields the Taylor series for divided differences: <math>f[x_0,\dots,x_n] = f(0)\cdot p_0[x_0,\dots,x_n] + f'(0)\cdot p_1[x_0,\dots,x_n] + \frac{f''(0)}{2!}\cdot p_2[x_0,\dots,x_n] + \frac{f'''(0)}{3!}\cdot p_3[x_0,\dots,x_n] + \dots </math>
 
The first <math>n</math> terms vanish, because the difference order exceeds the polynomial degree,
and in the following term the divided difference is one:
:<math>
\begin{array}{llcl}
\forall j<n & p_j[x_0,\dots,x_n] &=& 0 \\
& p_n[x_0,\dots,x_n] &=& 1 \\
& p_{n+1}[x_0,\dots,x_n] &=& x_0 + \dots + x_n \\
& p_{n+m}[x_0,\dots,x_n] &=& \sum_{a\in\{0,\dots,n\}^m \text{ with } a_1 \le a_2 \le \dots \le a_m} \; \prod_{\ell=1}^{m} x_{a_\ell}. \\
\end{array}
</math>
It follows that the Taylor series for the divided difference essentially starts with
<math>\frac{f^{(n)}(0)}{n!}</math>
which is also a simple approximation of the divided difference,
according to the [[mean value theorem for divided differences]].
 
Computing the divided differences of the power functions in the usual way
would run into the same numerical problems
encountered when computing the divided difference of <math>f</math>.
Fortunately, there is a simpler way: it holds that
:<math>
t^n = (1 - x_0\cdot t) \cdots (1 - x_n\cdot t) \cdot
(p_0[x_0,\dots,x_n] + p_1[x_0,\dots,x_n]\cdot t + p_2[x_0,\dots,x_n]\cdot t^2 + \dots) .
</math>
Consequently we can compute the divided differences of <math>p_n</math>
by a [[power series|division]] of [[formal power series]].
Note how this reduces to the successive computation of powers
when <math>p_n[h]</math> is computed for several <math>n</math>.
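
As a sketch (the name is illustrative), this series division amounts to one running recurrence per node: multiplying a truncated series by <math>\tfrac{1}{1 - x\cdot t}</math> is the update <math>c'_j = c_j + x\cdot c'_{j-1}</math>.

<syntaxhighlight lang="haskell">
-- Coefficients of 1 / ((1 - x0 t) ... (1 - xn t)) up to order m,
-- i.e. the list [p_n[x0..xn], p_{n+1}[x0..xn], ..., p_{n+m}[x0..xn]].
powerDivDiffs :: Int -> [Double] -> [Double]
powerDivDiffs m xs = foldl step (1 : replicate m 0) xs
  where
    -- multiply the truncated series cs by the geometric series 1/(1 - x t)
    step cs x = scanl1 (\acc c -> c + x * acc) cs

-- With a single node the list reduces to successive powers:
-- powerDivDiffs 3 [h] yields [1, h, h^2, h^3], matching p_n[h] = h^n.
</syntaxhighlight>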
 
Cf. an [http://darcs.haskell.org/htam/src/Numerics/Interpolation/DividedDifference.hs implementation] in [[Haskell (programming language)|Haskell]].
 
To compute a whole divided difference scheme with respect to a Taylor series,
see the section about divided differences of [[#Polynomials and power series|power series]].
 
==Polynomials and power series==
 
Divided differences of polynomials are particularly interesting, because they can benefit from the Leibniz rule.
The matrix <math>J</math> with
:<math>
J=
\begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 &        & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 &          & x_n
\end{pmatrix}
</math>
 
contains the divided difference scheme for the [[identity function]] with respect to the nodes <math>x_0,\dots,x_n</math>,
thus <math>J^n</math> contains the divided differences for the [[monomial|power function]] with [[exponent]] <math>n</math>.
Consequently, the divided differences for a [[polynomial function]] <math>\varphi(p)</math>
with respect to the [[polynomial]] <math>p</math>
can be obtained by applying <math>p</math> (more precisely: its corresponding matrix polynomial function <math>\varphi_{\mathrm{M}}(p)</math>) to the matrix <math>J</math>.
:<math>\varphi(p)(\xi) = a_0 + a_1\cdot \xi + \dots + a_n\cdot \xi^n</math>
:<math>\varphi_{\mathrm{M}}(p)(J) = a_0 + a_1\cdot J + \dots + a_n\cdot J^n</math>
::<math>= \begin{pmatrix}
\varphi(p)[x_0] & \varphi(p)[x_0,x_1] & \varphi(p)[x_0,x_1,x_2] & \ldots & \varphi(p)[x_0,\dots,x_n] \\
0 & \varphi(p)[x_1] & \varphi(p)[x_1,x_2] & \ldots & \varphi(p)[x_1,\dots,x_n] \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & 0 & \varphi(p)[x_n]
\end{pmatrix}
</math>
This is known as ''Opitz' formula''.<ref>[[Carl de Boor|de Boor, Carl]], ''Divided Differences'', Surv. Approx. Theory 1 (2005), 46–69</ref><ref>Opitz, G., ''Steigungsmatrizen'', Z. Angew. Math. Mech. 44 (1964), T52–T54</ref>
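
A direct transcription of Opitz' formula as a sketch (all names are illustrative): build <math>J</math> and evaluate the polynomial at it by Horner's rule.

<syntaxhighlight lang="haskell">
import Data.List (transpose)

matMul :: [[Double]] -> [[Double]] -> [[Double]]
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

-- J has the nodes on the diagonal and ones on the superdiagonal.
jMatrix :: [Double] -> [[Double]]
jMatrix xs = [ [ entry r c | c <- [0 .. n] ] | r <- [0 .. n] ]
  where
    n = length xs - 1
    entry r c | c == r     = xs !! r
              | c == r + 1 = 1
              | otherwise  = 0

-- opitz [a0, ..., am] xs = a0*I + a1*J + ... + am*J^m, evaluated by
-- Horner's rule; the result carries the divided difference scheme of p.
opitz :: [Double] -> [Double] -> [[Double]]
opitz as xs = foldr horner zero as
  where
    n            = length xs - 1
    j            = jMatrix xs
    zero         = replicate (n + 1) (replicate (n + 1) 0)
    diag a       = [ [ if r == c then a else 0 | c <- [0 .. n] ]
                   | r <- [0 .. n] ]
    horner a acc = zipWith (zipWith (+)) (diag a) (matMul j acc)
</syntaxhighlight>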
 
Now consider increasing the degree of <math>p</math> to infinity,
i.e. turn the Taylor polynomial to a [[Taylor series]].
Let <math>f</math> be a function which corresponds to a [[power series]].
You can compute a divided difference scheme by computing the according matrix series applied to <math>J</math>.
If the nodes <math>x_0,\dots,x_n</math> are all equal,
then <math>J</math> is a [[Jordan block]] and
computation boils down to generalizing a scalar function to a [[matrix function]] using [[Jordan normal form|Jordan decomposition]].
 
==Forward differences==
{{see details|Finite difference}}
 
When the data points are equidistantly spaced, we get the special case called '''forward differences'''. They are easier to calculate than the more general divided differences.
 
===Definition===
 
Given ''n'' data points
:<math>(x_0, y_0),\ldots,(x_{n-1}, y_{n-1})</math>
 
with
 
:<math>x_\nu = x_0 + \nu h, \qquad h > 0, \qquad \nu = 0,\ldots,n-1</math>
 
the divided differences can be calculated via '''forward differences''' defined as
 
:<math>\triangle^{(0)}y_i := y_i</math>
:<math>\triangle^{(k)}y_i := \triangle^{(k-1)}y_{i+1} - \triangle^{(k-1)}y_i, \qquad k \ge 1.</math>
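
On such an equidistant grid the divided differences and the forward differences are related by

:<math>[y_0,\ldots,y_k] = \frac{1}{k!\,h^k}\,\triangle^{(k)}y_0 ,</math>

so the division process reduces to subtractions followed by a single scaling.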
 
===Example===
 
:<math>
\begin{matrix}
y_0 &              &                  &                  \\
    & \triangle y_0 &                  &                  \\
y_1 &              & \triangle^{2} y_0 &                  \\
    & \triangle y_1 &                  & \triangle^{3} y_0\\
y_2 &              & \triangle^{2} y_1 &                  \\
    & \triangle y_2 &                  &                  \\
y_3 &              &                  &                  \\
\end{matrix}
</math>
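
In the style of the earlier sketches (the name is illustrative), each row of this scheme is obtained from the previous one by subtracting neighbouring entries; combined with the relation above, this yields the divided differences with a single final scaling.

<syntaxhighlight lang="haskell">
-- Rows of the forward difference scheme: ys, then the successive
-- differences of neighbouring entries.
forwardDiffs :: [Double] -> [[Double]]
forwardDiffs ys =
  takeWhile (not . null)
            (iterate (\row -> zipWith (-) (tail row) row) ys)

-- head (forwardDiffs ys !! k) is the k-th forward difference of y0;
-- dividing it by (k! * h^k) recovers the divided difference [y0,...,yk].
</syntaxhighlight>
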
== Computer program ==
* [https://s3.amazonaws.com/torkian/torkian/Site/Research/Entries/2008/3/13_Divided_differences.html Java code for Divided differences with GUI by Behzad Torkian]
 
==Notes==
{{reflist|1}}
 
== See also ==
* [[Neville's algorithm]]
* [[Polynomial interpolation]]
* [[Mean value theorem for divided differences]]
* [[Nörlund–Rice integral]]
 
[[Category:Finite differences]]
 
[[de:Polynominterpolation#Bestimmung der Koeffizienten: Schema der dividierten Differenzen]]
