In [[mathematics]], the '''Euler–Maclaurin formula''' provides a powerful connection between [[integral]]s (see [[calculus]]) and sums. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and [[series (mathematics)|infinite series]] using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and [[Faulhaber's formula]] for the sum of powers is an immediate consequence.

The formula was discovered independently by [[Leonhard Euler]] and [[Colin Maclaurin]] around 1735 (and later generalized as [[Darboux's formula]]). Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals.
==The formula==

If ''m'' and ''n'' are [[natural number]]s and ''f''(''x'') is a smooth (meaning: sufficiently often [[derivative|differentiable]]) [[function (mathematics)|function]] defined for all [[real number]]s ''x'' in the interval <math>[m,n]</math>, then the integral

:<math>I = \int_m^n f(x)\,dx</math>

can be approximated by the sum (or vice versa)

:<math>S = \frac{1}{2}f(m) + f\left(m + 1\right) + \cdots + f\left(n - 1\right) + \frac{1}{2}f(n)</math>

(see [[trapezoidal rule]]). The Euler–Maclaurin formula expresses the difference between the sum and the integral in terms of the higher derivatives ''f''<sup>(''k'')</sup> evaluated at the endpoints of the interval, ''m'' and ''n''. Explicitly, for any natural number ''p'', we have

:<math> S - I = \sum_{k=2}^p \frac{B_k}{k!}\left(f^{(k - 1)}(n) - f^{(k - 1)}(m)\right) + R </math>

where ''B''<sub>1</sub> = −1/2, ''B''<sub>2</sub> = 1/6, ''B''<sub>3</sub> = 0, ''B''<sub>4</sub> = −1/30, ''B''<sub>5</sub> = 0, ''B''<sub>6</sub> = 1/42, ''B''<sub>7</sub> = 0, ''B''<sub>8</sub> = −1/30, … are the [[Bernoulli numbers]], and ''R'' is an [[error term]] which is normally small for suitable values of ''p'' and depends on ''n'', ''m'', ''p'' and ''f''. (The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for ''B''<sub>1</sub>.)
Note that

:<math> -B_1 \left(f(n) + f(m)\right) = \frac{1}{2}\left(f(n) + f(m)\right).</math>

Hence, we may also write the formula as follows:

:<math>\sum_{i=m}^n f(i) = \int^n_m f(x)\,dx - B_1 \left(f(n) + f(m)\right) + \sum_{k=1}^p\frac{B_{2k}}{(2k)!}\left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\right) + R</math>

or, more compactly,

:<math>\sum_{i=m}^n f(i) = \sum_{k=0}^{2p}\frac{1}{k!}\left(B^\ast_k f^{(k - 1)}(n) - B_k f^{(k - 1)}(m)\right) + R</math>
with the convention <math>f^{(-1)}(x) = \int f(x)\,dx</math>, i.e. the (−1)-th derivative of ''f'' is an antiderivative of ''f''.

This presentation uses the [[Bernoulli number|'''two kinds''' of Bernoulli numbers]], called the first and the second kind. Here the ''Bernoulli numbers of the second kind'' are denoted <math>B^\ast_k := (-1)^k B_k</math>; they differ from the numbers of the first kind only at the index 1.
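To make the statement concrete, here is a minimal Python sketch (the choices ''f''(''x'') = 1/''x''<sup>2</sup>, ''m'' = 10, ''n'' = 20, ''p'' = 3 and the hard-coded Bernoulli numbers are illustrative only): it compares ''S'' − ''I'' with the sum of correction terms, which agree up to the small remainder ''R''.

<syntaxhighlight lang="python">
import math

# Illustrative sketch only: f(x) = 1/x^2 on [m, n] with p = 3.
# The j-th derivative of x**-2 is (-1)^j (j+1)! x^(-(j+2)).
m, n = 10, 20
bernoulli = {2: 1/6, 4: -1/30, 6: 1/42}      # the nonzero B_k for 2 <= k <= 2p

def deriv(j, x):
    return (-1) ** j * math.factorial(j + 1) * x ** (-(j + 2))

S = 0.5 / m**2 + sum(1.0 / i**2 for i in range(m + 1, n)) + 0.5 / n**2
I = 1.0 / m - 1.0 / n                        # integral of x**-2 from m to n
correction = sum(b / math.factorial(k) * (deriv(k - 1, n) - deriv(k - 1, m))
                 for k, b in bernoulli.items())
print(S - I, correction)                     # differ only by the tiny remainder R
</syntaxhighlight>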
===The remainder term===

The remainder term ''R'' is most easily expressed using the [[periodic Bernoulli polynomial]]s ''P''<sub>''n''</sub>(''x''). The [[Bernoulli polynomial]]s ''B''<sub>''n''</sub>(''x''), ''n'' = 0, 1, 2, … are defined recursively as

:<math>\begin{align}
B_0(x) &= 1 \\
B_n'(x) &= nB_{n - 1}(x) \text{ and } \int_0^1 B_n(x)\,dx = 0\text{ for }n \ge 1
\end{align}</math>

Then the periodic Bernoulli functions ''P''<sub>''n''</sub> are defined as

:<math> P_n(x) = B_n \left(x - \lfloor x\rfloor\right)</math>

where <math>\scriptstyle \lfloor x\rfloor</math> denotes the largest integer that is not greater than ''x''. Then, in terms of ''P''<sub>''n''</sub>(''x''), the remainder term ''R'' can be written as
:<math> R = -\int_m^n f^{(2p)}(x) {P_{2p}(x) \over (2p)!}\,dx </math>

or equivalently, integrating by parts, assuming ''f''<sup>(2''p'')</sup> is differentiable again and recalling that the odd Bernoulli numbers are zero:

:<math> R = \int_m^n f^{(2p+1)}(x) {P_{2p + 1}(x) \over (2p + 1)!}\,dx </math>

When ''n'' > 0 and 0 ≤ ''x'' ≤ 1, it can be shown that

:<math>\left| B_n \left( x \right) \right| \le 2 \cdot \frac{n!}{\left( 2\pi \right)^n}\zeta \left( n \right)</math>

where ''ζ'' denotes the [[Riemann zeta function]] (see Lehmer; one approach to prove the inequality is to obtain the Fourier series for the polynomials ''B''<sub>''n''</sub>). The bound is achieved for even ''n'' when ''x'' is zero. Using this inequality, the size of the remainder term can be estimated using

:<math>\left|R\right|\leq\frac{2 \zeta (2p)}{(2\pi)^{2p}}\int_m^n\left|f^{(2p)}(x)\right|\,dx </math>
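A minimal sketch of this estimate, again for the arbitrary choice ''f''(''x'') = 1/''x''<sup>2</sup> on [1, 10] with ''p'' = 1: the remainder integral is evaluated numerically and compared with the bound above.

<syntaxhighlight lang="python">
import math

# Illustrative sketch: R = -int f''(x) P_2(x)/2! dx for f(x) = 1/x^2 on [1, 10], p = 1.
m, n, steps = 1, 10, 90000

def f2(x):               # second derivative of x**-2
    return 6.0 * x ** -4

def P2(x):               # periodic Bernoulli function P_2(x) = B_2(x - floor(x))
    t = x - math.floor(x)
    return t * t - t + 1.0 / 6

h = (n - m) / steps      # midpoint rule for the remainder integral
R = -0.5 * h * sum(f2(m + (i + 0.5) * h) * P2(m + (i + 0.5) * h) for i in range(steps))
bound = 2 * (math.pi ** 2 / 6) / (2 * math.pi) ** 2 * 2.0 * (1 - n ** -3)   # zeta(2) = pi^2/6
print(abs(R), bound)     # |R| is well below the bound
</syntaxhighlight>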
==Applications==

===The Basel problem===

The [[Basel problem]] asks for the value of the sum

:<math> 1 + \frac14 + \frac19 + \frac1{16} + \frac1{25} + \cdots = \sum_{n=1}^\infty \frac{1}{n^2} </math>

Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals π<sup>2</sup>/6, which he proved in the same year.<ref>David J. Pengelley, [http://www.math.nmsu.edu/~davidp/euler2k2.pdf "Dances between continuous and discrete: Euler's summation formula"], in: Robert Bradley and Ed Sandifer (Eds), ''Proceedings, Euler 2K+2 Conference (Rumford, Maine, 2002)'', [[Euler Society]], 2003.</ref> [[Parseval's identity]] for the [[Fourier series]] of ''f''(''x'') = ''x'' gives the same result.
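A small Python sketch of this approach (the cutoff ''N'' = 10 and the number of Bernoulli terms are arbitrary choices): the tail <math>\textstyle\sum_{k\ge N} 1/k^2</math> is replaced by its Euler–Maclaurin expansion, giving roughly twelve correct digits from only nine summed terms.

<syntaxhighlight lang="python">
import math

# Sum 1/k^2 for k < N directly, then add the Euler-Maclaurin expansion of the tail:
# sum_{k>=N} 1/k^2  ~  1/N + 1/(2N^2) + B_2/N^3 + B_4/N^5 + ...
N = 10
bernoulli = [1 / 6, -1 / 30, 1 / 42, -1 / 30]            # B_2, B_4, B_6, B_8

partial = sum(1.0 / k ** 2 for k in range(1, N))
tail = 1.0 / N + 1.0 / (2 * N ** 2)
tail += sum(b / N ** (2 * t + 1) for t, b in enumerate(bernoulli, start=1))

print(partial + tail)     # 1.6449340668482...
print(math.pi ** 2 / 6)   # agrees to about twelve digits
</syntaxhighlight>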
===Sums involving a polynomial===

If ''f'' is a [[polynomial]] and ''p'' is large enough, then the remainder term vanishes. For instance, if ''f''(''x'') = ''x''<sup>3</sup>, we can choose ''p'' = 2 to obtain, after simplification,

:<math>\sum_{i=0}^n i^3 = \left(\frac{n(n + 1)}{2}\right)^2</math>

(see [[Faulhaber's formula]]).
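To see why, one can write out the terms for ''f''(''x'') = ''x''<sup>3</sup> with ''m'' = 0: since ''f''<sup>(3)</sup> is constant and ''f''<sup>(4)</sup> vanishes, only the boundary term and the ''B''<sub>2</sub> term contribute, and

:<math>\sum_{i=0}^n i^3 = \int_0^n x^3\,dx + \frac{0 + n^3}{2} + \frac{B_2}{2!}\left(3n^2 - 0\right) = \frac{n^4}{4} + \frac{n^3}{2} + \frac{n^2}{4} = \left(\frac{n(n + 1)}{2}\right)^2</math>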
===Numerical integration===

The Euler–Maclaurin formula is also used for detailed [[error analysis]] in [[numerical quadrature]]. It explains the superior performance of the [[trapezoidal rule]] on smooth [[periodic function]]s and is used in certain [[Series acceleration|extrapolation methods]]. [[Clenshaw–Curtis quadrature]] is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a [[discrete cosine transform]]). This technique is known as a periodizing transformation.
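The effect is easy to observe numerically. In the sketch below (the integrands 1/(2 + cos ''x'') over a full period and ''e''<sup>''x''</sup> on [0, 1] are arbitrary test cases), the trapezoidal error for the periodic integrand collapses to machine precision almost immediately, while for the non-periodic one it decays only like ''h''<sup>2</sup>, as the ''B''<sub>2</sub> boundary term predicts.

<syntaxhighlight lang="python">
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

periodic_exact = 2 * math.pi / math.sqrt(3)   # integral of 1/(2 + cos x) over [0, 2*pi]
generic_exact = math.e - 1                    # integral of e^x over [0, 1]

for n in (4, 8, 16, 32):
    err_periodic = abs(trapezoid(lambda x: 1 / (2 + math.cos(x)), 0, 2 * math.pi, n) - periodic_exact)
    err_generic = abs(trapezoid(math.exp, 0, 1, n) - generic_exact)
    print(n, err_periodic, err_generic)       # the first error column vanishes much faster
</syntaxhighlight>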
===Asymptotic expansion of sums===

In the context of computing [[asymptotic expansion]]s of sums and [[Series (mathematics)|series]], usually the most useful form of the Euler–Maclaurin formula is

:<math>\sum_{n=a}^b f(n) \sim \int_a^b f(x)\,dx + \frac{f(b) + f(a)}{2} + \sum_{k=1}^\infty \,\frac{B_{2k}}{(2k)!}\left(f^{(2k - 1)}(b) - f^{(2k - 1)}(a)\right)</math>

where ''a'' and ''b'' are integers.<ref>{{harvtxt|Abramowitz|Stegun|1972}}, 23.1.30</ref> Often the expansion remains valid even after taking the limits <math>{\scriptstyle a\to -\infty}</math> or <math>{\scriptstyle b\to +\infty}</math>, or both. In many cases the integral on the right-hand side can be evaluated in [[Differential Galois theory|closed form]] in terms of [[elementary function]]s even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example,
:<math>\sum_{k=0}^\infty \frac{1}{(z + k)^2} \sim \underbrace{\int_0^\infty\frac{1}{(z + k)^2}\,dk}_{= \frac{1}{z}} + \frac{1}{2z^2} + \sum_{t = 1}^\infty \frac{B_{2t}}{z^{2t + 1}}</math>

Here the left-hand side is equal to <math>{\scriptstyle \psi^{(1)}(z)}</math>, namely the first-order [[Polygamma function#Series representation|polygamma function]] defined through <math>{\scriptstyle \psi^{(1)}(z)=\frac{d^2}{dz^2}\ln \Gamma(z)}</math>; the [[gamma function]] <math>{\scriptstyle \Gamma(z)}</math> is equal to <math>{\scriptstyle (z-1)!}</math> if <math>{\scriptstyle z}</math> is a [[positive integer]]. This results in an asymptotic expansion for <math>{\scriptstyle \psi^{(1)}(z)}</math>. That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for [[Stirling's approximation]] of the [[factorial]] function.
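For a quick numerical check of the truncated expansion (here with four Bernoulli terms and the arbitrary argument ''z'' = 12.5, and assuming SciPy is available for the reference value):

<syntaxhighlight lang="python">
from scipy.special import polygamma    # reference value; assumes SciPy is installed

def trigamma_asymptotic(z, bernoulli=(1 / 6, -1 / 30, 1 / 42, -1 / 30)):
    # truncated Euler-Maclaurin expansion of psi^(1)(z) = sum_{k>=0} 1/(z+k)^2
    s = 1.0 / z + 1.0 / (2 * z ** 2)
    s += sum(b / z ** (2 * t + 1) for t, b in enumerate(bernoulli, start=1))
    return s

z = 12.5
print(trigamma_asymptotic(z), float(polygamma(1, z)))   # agree to roughly 13 digits
</syntaxhighlight>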
===Examples===

* <math> \sum_{k=1}^n \frac{1}{k^s} = \frac{1}{n^{s-1}} + s \int_1^n \frac{\lfloor x\rfloor}{x^{s+1}}\,dx \qquad \text{with }\quad s \in \R \setminus \{1\} </math>
* <math> \sum_{k=1}^n \frac{1}{k} = \log n + 1 - \int_1^n \frac{x-\lfloor x\rfloor}{x^{2}}\,dx</math>
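Both identities are easy to check directly; for instance, a minimal numeric verification of the second one (evaluating the integral exactly on each interval [''k'', ''k'' + 1], with ''n'' = 1000 chosen arbitrarily):

<syntaxhighlight lang="python">
import math

# int_k^{k+1} (x - k)/x^2 dx = log((k+1)/k) - 1/(k+1), summed over k = 1, ..., n-1
n = 1000
harmonic = sum(1.0 / k for k in range(1, n + 1))
integral = sum(math.log((k + 1) / k) - 1.0 / (k + 1) for k in range(1, n))
print(harmonic, math.log(n) + 1 - integral)   # the two numbers coincide
</syntaxhighlight>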
== Proofs ==

=== Derivation by mathematical induction ===

We follow the argument given by Apostol.<ref>{{cite doi|10.2307/2589145}}</ref>
The [[Bernoulli polynomials]] ''B''<sub>''n''</sub>(''x''), ''n'' = 0, 1, 2, … may be defined recursively as follows:

:<math>\begin{align}
B_0(x) &= 1 \\
B_n'(x) &= nB_{n - 1}(x) \text{ and } \int_0^1 B_n(x)\,dx = 0\text{ for }n \ge 1
\end{align}</math>

The first several of these are

:<math>\begin{align}
B_1(x) &= x - \frac{1}{2} \\
B_2(x) &= x^2 - x + \frac{1}{6} \\
B_3(x) &= x^3 - \frac{3}{2}x^2 + \frac{1}{2}x \\
B_4(x) &= x^4 - 2x^3 + x^2 - \frac{1}{30} \\
& \vdots
\end{align}</math>

The values ''B''<sub>''n''</sub>(0) are the [[Bernoulli numbers]]. Notice that for ''n'' ≥ 2 we have

:<math>B_n(0) = B_n(1) = B_n\quad(n\text{th Bernoulli number})</math>

For ''n'' = 1,

:<math> B_1(0) = -B_1(1) = B_1</math>

We define the periodic Bernoulli functions ''P''<sub>''n''</sub> by

:<math> P_n(x) = B_n(x - \lfloor x\rfloor) </math>

where <math> \lfloor x\rfloor</math> denotes the largest integer that is not greater than ''x''. So the ''P''<sub>''n''</sub> agree with the Bernoulli polynomials on the interval (0, 1) and are [[periodic function|periodic]] with period 1. Thus,

:<math> P_n(0) = P_n(1) = B_n</math>
Let ''k'' be an integer, and consider the integral

:<math> \int_k^{k + 1} f(x)\,dx = \int u\,dv</math>

where

:<math>\begin{align}
u &= f(x) \\
du &= f'(x)\,dx \\
dv &= P_0(x)\,dx \quad (\text{since }P_0(x) = 1) \\
v &= P_1(x)
\end{align}</math>

[[integration by parts|Integrating by parts]], we get

:<math>\begin{align}
\int_k^{k + 1} f(x)\,dx &= uv - \int v\,du \\
&= \Big[f(x)P_1(x) \Big]_k^{k + 1} - \int_k^{k+1} f'(x)P_1(x)\,dx \\[8pt]
&= -B_1\left(f(k+1) + f(k)\right) - \int_k^{k+1} f'(x)P_1(x)\,dx \\[8pt]
&= \frac{f(k+1) + f(k)}{2} - \int_k^{k+1} f'(x)P_1(x)\,dx
\end{align}</math>
Summing the above from ''k'' = 0 to ''k'' = ''n'' − 1, we get

:<math>\begin{align}
& \int_0^1 f(x)\,dx + \dotsb + \int_{n-1}^n f(x)\,dx \\
&= \int_0^n f(x)\, dx \\
&= \frac{f(0)}{2}+ f(1) + \dotsb + f(n-1) + {f(n) \over 2} - \int_0^n f'(x) P_1(x)\,dx
\end{align}</math>

Adding (''f''(0) + ''f''(''n''))/2 to both sides and rearranging, we have

:<math> \sum_{k=0}^n f(k) = \int_0^n f(x)\,dx + {f(0) + f(n) \over 2} + \int_0^n f'(x) P_1(x)\,dx\qquad (1)</math>

The last two terms therefore give the error when the integral is taken to approximate the sum.
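Identity (1) is easy to test numerically; a small sketch (with the arbitrary choices ''f''(''x'') = ''e''<sup>−''x''</sup>, ''n'' = 5, and a midpoint rule for the correction integral whose subintervals are aligned with the integers where ''P''<sub>1</sub> jumps):

<syntaxhighlight lang="python">
import math

n, steps = 5, 20000                       # steps is a multiple of n, so no midpoint hits a jump of P_1
f = lambda x: math.exp(-x)
fprime = lambda x: -math.exp(-x)
P1 = lambda x: (x - math.floor(x)) - 0.5  # periodic Bernoulli function P_1

lhs = sum(f(k) for k in range(n + 1))
integral_f = 1.0 - math.exp(-n)           # exact integral of exp(-x) over [0, n]
h = n / steps                             # midpoint rule for the correction integral
correction = h * sum(fprime((i + 0.5) * h) * P1((i + 0.5) * h) for i in range(steps))
print(lhs, integral_f + (f(0) + f(n)) / 2 + correction)   # both sides of (1) agree to several digits
</syntaxhighlight>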
Next, consider

:<math> \int_k^{k+1} f'(x)P_1(x)\,dx = \int u\,dv </math>

where

:<math>\begin{align}
u &= f'(x) \\
du &= f''(x)\,dx \\
dv &= P_1(x)\,dx \\
v &= \frac{1}{2}P_2(x)
\end{align}</math>

Integrating by parts again, we get

:<math>\begin{align}
uv - \int v\,du &= \left[ {f'(x)P_2(x) \over 2} \right]_k^{k+1} - {1 \over 2}\int_k^{k+1} f''(x)P_2(x)\,dx \\
&= {B_2 \over 2}(f'(k + 1) - f'(k)) - {1 \over 2}\int_k^{k + 1} f''(x)P_2(x)\,dx
\end{align}</math>

Then summing from ''k'' = 0 to ''k'' = ''n'' − 1, and replacing the last integral in (1) with what we have thus shown to be equal to it, we have

:<math> \sum_{k=0}^n f(k) = \int_0^n f(x)\,dx + {f(0) + f(n) \over 2} + \frac{B_2}{2}(f'(n) - f'(0)) - {1 \over 2}\int_0^n f''(x)P_2(x)\,dx. </math>

By now the reader will have guessed that this process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula by [[mathematical induction]], in which the induction step relies on integration by parts and on the identities for periodic Bernoulli functions.
=== Derivation by functional analysis ===

The Euler–Maclaurin formula can be understood as a curious application of some ideas from [[Banach space]]s and [[functional analysis]].<ref name=gaspard>Pierre Gaspard, "r-adic one-dimensional maps and the Euler summation formula", ''Journal of Physics A'', '''25''' (letter) L483–L485 (1992). ''(Describes the eigenfunctions of the [[transfer operator]] for the [[Bernoulli map]])''</ref>

First we restrict the problem to the domain of the unit interval [0, 1]. Let <math>\scriptstyle B_n(x)</math> be the [[Bernoulli polynomial]]s. A set of functions [[dual space|dual]] to the Bernoulli polynomials is given by

:<math>\tilde{B}_n(x) = \frac{(-1)^{n + 1}}{n!} \left( \delta^{(n - 1)}(x - 1) - \delta^{(n - 1)}(x) \right)</math>

where δ is the [[Dirac delta function]]. The above is a formal notation for the idea of taking derivatives at a point; thus one has

:<math>\int_0^1 \tilde{B}_n(x) f(x)\, dx = \frac{1}{n!} \left( f^{(n - 1)}(1) - f^{(n-1)}(0) \right)</math>

for ''n'' > 0 and an arbitrary but sufficiently differentiable function ''f''(''x'') on the unit interval. For the case of ''n'' = 0, one defines <math>\scriptstyle \tilde{B}_0(x) \;=\; 1</math>. The Bernoulli polynomials, along with their duals, form a [[biorthogonal system|biorthogonal]] set of states on the unit interval: one has
:<math>\int_0^1 \tilde{B}_m(x) B_n(x)\, dx = \delta_{mn}</math>

and

:<math>\sum_{n=0}^\infty B_n(x) \tilde{B}_n(y) = \delta (x - y)</math>

The Euler–Maclaurin summation formula then follows as an integral over the latter. One has

:<math>\begin{align}
f(x) &= \int_0^1 \sum_{n=0}^\infty B_n(x) \tilde{B}_n(y) f(y)\, dy\\
&= \int_0^1 f(y)\,dy +
\sum_{n=1}^N B_n(x) \frac{1}{n!}
\left( f^{(n-1)}(1) - f^{(n - 1)}(0) \right) -
\frac{1}{(N + 1)!} \int_0^1 B_{N + 1}(x-y) f^{(N)}(y)\, dy
\end{align}</math>

Then setting ''x'' = 0 and rearranging terms, one obtains an expression for ''f''(0). Note that the Bernoulli numbers are defined as ''B''<sub>''n''</sub> = ''B''<sub>''n''</sub>(0), and that these vanish for odd ''n'' greater than 1.

Then, using the periodic Bernoulli function ''P''<sub>''n''</sub> defined above and repeating the argument on the interval [1, 2], one can obtain an expression for ''f''(1). This way one can obtain expressions for ''f''(''n''), ''n'' = 0, 1, 2, ..., ''N'', and adding them up gives the Euler–Maclaurin formula. Note that this derivation does assume that ''f''(''x'') is sufficiently differentiable and well-behaved; specifically, that ''f'' may be approximated by [[polynomial]]s; equivalently, that ''f'' is a real [[analytic function]] of [[exponential type]] less than <math>2\pi</math>. Written in explicit terms,
:<math>\begin{align}
\sum_{j=0}^{n-1} f(j) &=
\int_0^n f(x) \,dx + \sum_{i=1}^p{B_i \over i!} \left(f^{(i - 1)}(n) - f^{(i-1)}(0) \right) - (-1)^p \int_0^n {B_p(x - \lfloor x \rfloor) \over p!}f^{(p)}(x)\,dx \\
\sum_{j=1}^n f(j) &=
\int_0^n f(x) \,dx + \sum_{i=1}^p(-1)^i{B_i \over i!} \left(f^{(i - 1)}(n) - f^{(i - 1)}(0) \right) - (-1)^p \int_0^n {B_p(x - \lfloor x \rfloor) \over p!}f^{(p)}(x)\,dx\\
\sum_{j=0}^n f(j) &=
\int_0^n f(x) \,dx + \sum_{i=1}^p{1 \over i!} \left(B_i^\star f^{(i - 1)}(n) - B_i f^{(i - 1)}(0) \right) - (-1)^p \int_0^n {B_p(x - \lfloor x \rfloor) \over p!}f^{(p)}(x)\,dx
\end{align}</math>

where <math>B_i^\star = (-1)^i B_i</math> are the Bernoulli numbers of the second kind and <math>B_p(x - \lfloor x \rfloor)</math> is the periodic [[Bernoulli polynomial]] evaluated at the fractional part of ''x''. This general formula holds for even ''and odd'' ''p'' ≥ 1.
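As a sanity check of the last of these formulas, a short exact computation for ''f''(''x'') = ''x''<sup>2</sup> and ''p'' = 2 (for this ''f'' the integral of ''B''<sub>2</sub>(''x'' − ⌊''x''⌋)''f''′′(''x'')/2! over [0, ''n''] vanishes, since ''B''<sub>2</sub> has mean zero over each period; ''n'' = 12 is an arbitrary choice):

<syntaxhighlight lang="python">
from fractions import Fraction as F
from math import factorial

n = 12
B     = {1: F(-1, 2), 2: F(1, 6)}                   # Bernoulli numbers (first kind)
Bstar = {i: (-1) ** i * b for i, b in B.items()}    # second kind: B*_i = (-1)^i B_i
derivs = [lambda x: F(x) ** 2, lambda x: 2 * F(x)]  # f, f'

rhs = F(n) ** 3 / 3 + sum(
    (Bstar[i] * derivs[i - 1](n) - B[i] * derivs[i - 1](0)) / factorial(i)
    for i in (1, 2))
lhs = sum(F(j) ** 2 for j in range(0, n + 1))
print(lhs, rhs)     # both equal n(n+1)(2n+1)/6
</syntaxhighlight>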
The Euler–Maclaurin summation formula can thus be seen to be an outcome of the [[group representation|representation]] of functions on the unit interval by the direct product of the Bernoulli polynomials and their duals. Note, however, that the representation is not complete on the set of [[square-integrable]] functions. The expansion in terms of the Bernoulli polynomials has a non-trivial [[kernel of a function|kernel]]. In particular, sin(2π''nx'') lies in the kernel; its integral over the unit interval vanishes, as does the difference of its derivatives at the endpoints. This is essentially the reason for the restriction to [[exponential type]] less than 2π: the function sin(2π''nz'') grows like ''e''<sup>2π''n''|Im ''z''|</sup> along the imaginary axis, so it is not of exponential type less than 2π. Euler–Maclaurin summation can essentially be applied whenever [[Carlson's theorem]] holds; the formula itself can also be obtained from the study of [[finite difference]]s and [[Newton series]].
==See also==
*[[Cesàro summation]]
*[[Euler summation]]
*[[Gauss–Kronrod quadrature formula]]
*[[Darboux's formula]]

==Notes==
{{reflist}}
==References==
{{refbegin|2}}
* {{Cite book | editor1-last=Abramowitz | editor1-first=Milton | editor1-link=Milton Abramowitz | editor2-last=Stegun | editor2-first=Irene A. | editor2-link=Irene Stegun | title=[[Abramowitz and Stegun|Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables]] | publisher=[[Dover Publications]] | location=New York | isbn=978-0-486-61272-0 | year=1972 | id=tenth printing | ref=harv }}, pp. 16, 806, 886
* {{MathWorld|title=Euler–Maclaurin Integration Formulas|urlname=Euler-MaclaurinIntegrationFormulas}}
* Xavier Gourdon and Pascal Sebah, ''[http://numbers.computation.free.fr/Constants/Miscellaneous/bernoulli.html Introduction on Bernoulli's numbers]'', (2002)
* [[D.H. Lehmer]], "On the Maxima and Minima of Bernoulli Polynomials", ''American Mathematical Monthly'', volume 47, pages 533–538 (1940)
* {{cite book | author=Hugh L. Montgomery | authorlink=Hugh Montgomery (mathematician) | coauthors=[[Robert Charles Vaughan (mathematician)|Robert C. Vaughan]] | title=Multiplicative number theory I. Classical theory | series=Cambridge tracts in advanced mathematics | volume=97 | year=2007 | isbn=0-521-84903-9 | pages=495–519}}
{{refend}}
{{DEFAULTSORT:Euler-Maclaurin Formula}}
[[Category:Approximation theory]]
[[Category:Asymptotic analysis]]
[[Category:Hilbert space]]
[[Category:Numerical integration (quadrature)]]
[[Category:Articles containing proofs]]
[[Category:Theorems in analysis]]
[[Category:Summability methods]]