{{technical|date=October 2012}}
'''Verlet integration''' ({{IPA-fr|vɛʁˈlɛ}}) is a numerical method used to [[Time integration method|integrate]] [[Isaac Newton|Newton's]] [[equations of motion]].<ref name="Verlet1967" /> It is frequently used to calculate [[Trajectory|trajectories]] of particles in [[molecular dynamics]] simulations and [[computer graphics]]. The algorithm was first used in 1791 by [[Jean Baptiste Joseph Delambre|Delambre]], and has been rediscovered many times since then, most recently by [[Loup Verlet]] in the 1960s for molecular dynamics. It was also used by [[Philip Herbert Cowell|Cowell]] and [[Andrew Claude de la Cherois Crommelin|Crommelin]] in 1909 to compute the orbit of [[Halley's Comet]], and by [[Carl Størmer]] in 1907 to study the motion of electrically charged particles in a [[magnetic field]].
The Verlet integrator offers greater [[Numerical stability|stability]], as well as other properties that are important in [[physical system]]s such as [[time-reversibility]] and [[Symplectic integrator|preservation of the symplectic form on phase space]], at no significant additional cost over the simple [[Euler method]]. Verlet integration was used by [[Carl Størmer]] to compute the trajectories of particles moving in a magnetic field (hence it is also called '''Störmer's method''')<ref>{{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press |  publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 17.4. Second-Order Conservative Equations | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=928}}
</ref> and was popularized in molecular dynamics by French physicist [[Loup Verlet]] in 1967.
 
==Basic Störmer–Verlet==
 
For a [[differential equation]] of second order of the type <math>\ddot{\vec x}(t)=A(\vec x(t))</math> with initial conditions <math>\vec x(t_0)=\vec x_0</math> and <math>\dot{\vec x}(t_0)=\vec v_0</math>, an approximate numerical solution <math>\vec x_n\approx \vec x(t_n)</math> at the times <math>t_n=t_0+n\,\Delta t</math> with step size <math>\Delta t>0</math> can be obtained by the following method:
 
* set <math>\vec x_1=\vec x_0+\vec v_0\,\Delta t+\frac12 A(\vec x_0)\,\Delta t^2</math>
* for ''n=1,2,...'' iterate
:::<math>
\vec x_{n+1}=2 \vec x_n- \vec x_{n-1}+ A(\vec x_n)\,\Delta t^2.
</math>
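
For illustration, a minimal sketch of this method in [[Python (programming language)|Python]] (the function name and interface are illustrative, not part of any standard library) could read as follows; it works for scalar positions as well as [[NumPy]] arrays:

<syntaxhighlight lang="python">
def stoermer_verlet(A, x0, v0, dt, n_steps):
    """Basic Stoermer-Verlet for the second-order ODE x'' = A(x).

    A       : callable returning the acceleration for a given position
    x0, v0  : initial position and velocity
    dt      : time step
    n_steps : number of steps to take
    Returns the list of positions [x_0, x_1, ..., x_{n_steps}].
    """
    positions = [x0]
    # First step from the degree-two Taylor polynomial of the exact solution.
    x_prev, x_curr = x0, x0 + v0 * dt + 0.5 * A(x0) * dt**2
    positions.append(x_curr)
    for _ in range(n_steps - 1):
        # Two-step recurrence: x_{n+1} = 2 x_n - x_{n-1} + A(x_n) dt^2.
        x_next = 2 * x_curr - x_prev + A(x_curr) * dt**2
        x_prev, x_curr = x_curr, x_next
        positions.append(x_curr)
    return positions
</syntaxhighlight>

Note that only one evaluation of the acceleration ''A'' is needed per step.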
 
===Equations of motion===
Newton's equation of motion for conservative physical systems is
:<math>M\ddot {\vec x}(t)=F(\vec x(t))=-\nabla V(\vec x(t))</math>
or individually
:<math>m_k\ddot {\vec x}_k(t)=F_k(\vec x(t))=-\nabla_{\vec x_k} V(\vec x(t))</math>
where
* ''t'' is the time,
* <math>\vec x(t)=(\vec x_1(t),\ldots,\vec x_N(t))</math> is the ensemble of the position vectors of the ''N'' objects,
* ''V'' is the scalar potential function,
* ''F'' is the negative [[Potential gradient|gradient of the potential]] giving the ensemble of forces on the particles,
* ''M'' is the [[mass matrix]], typically diagonal, with blocks of mass <math>m_k</math> for every particle.
 
This equation, for various choices of the potential function ''V'', can be used to describe the evolution of diverse physical systems, from the motion of [[molecular dynamics|interacting molecules]] to the [[N-body problem|orbit of the planets]].
 
After a transformation to bring the mass to the right-hand side, and disregarding the structure of multiple particles, the equation may be simplified to
:<math>\ddot {\vec x}(t)=A(\vec x(t))</math>
with some suitable vector-valued function ''A'' representing the position-dependent acceleration. Typically, an initial position <math>\vec x(0)=\vec x_0</math> and an initial velocity <math>\vec v(0)=\dot {\vec x}(0)=\vec v_0</math> are also given.
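
For example, for a single particle of mass ''m'' in a harmonic potential <math>V(\vec x)=\tfrac12 k\,\|\vec x\|^2</math>, the force is <math>F(\vec x)=-\nabla V(\vec x)=-k\,\vec x</math>, and the reduced equation becomes
:<math>\ddot {\vec x}(t)=A(\vec x(t))=-\frac{k}{m}\,\vec x(t).</math>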
 
===Verlet integration (without velocities)===
To discretize and numerically solve this [[initial value problem]], a time step <math>\Delta t>0</math> is chosen and the sampling point sequence <math>t_n=n\Delta t</math> is considered. The task is to construct a sequence of points <math>\vec x_n</math> that closely follow the points <math>\vec x(t_n)</math> on the trajectory of the exact solution.
 
Where [[Euler's method]] uses the [[forward difference]] approximation to the first derivative in differential equations of order one, Verlet integration can be seen as using the [[central difference]] approximation to the second derivative:
:<math>
\frac{\Delta^2\vec x_n}{\Delta t^2}
=\frac{\frac{\vec x_{n+1}-\vec x_n}{\Delta t}-\frac{\vec x_n-\vec x_{n-1}}{\Delta t}}{\Delta t}
=\frac{\vec x_{n+1}-2\vec x_n+\vec x_{n-1}}{\Delta t^2}=\vec a_n=A(\vec x_n)
</math>
 
''Verlet integration'' in the form used as the ''Störmer method''<ref>[http://www.fisica.uniud.it/~ercolessi/md/md/node21.html webpage] with a description of the Störmer method</ref> uses this equation to obtain the next position vector from the previous two, without using the velocity, as
:<math>
\vec x_{n+1}=2\vec x_n-\vec x_{n-1}+\vec a_n\,\Delta t^2,
\qquad \vec a_n=A(\vec x_n).
</math>
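
Since the recurrence refers only to the two most recent positions, an implementation needs to store just those two vectors and no velocities. An illustrative Python sketch for the one-dimensional harmonic oscillator <math>A(x)=-\omega^2 x</math> (variable names are ad hoc):

<syntaxhighlight lang="python">
import math

omega, dt, n_steps = 1.0, 0.01, 1000
x0, v0 = 1.0, 0.0                                      # initial position and velocity

x_prev = x0
x_curr = x0 + v0 * dt - 0.5 * omega**2 * x0 * dt**2    # first step from the Taylor polynomial

for n in range(1, n_steps):
    # Stoermer form: only the two most recent positions are kept, no velocities.
    x_prev, x_curr = x_curr, 2.0 * x_curr - x_prev - omega**2 * x_curr * dt**2

print(x_curr, math.cos(omega * n_steps * dt))          # numerical vs. exact solution cos(w t)
</syntaxhighlight>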
 
===Discretization error===
The time symmetry inherent in the method reduces the level of local error introduced into the integration by the discretization, since it removes all odd-degree terms, here the terms of degree three in <math>\Delta t</math>. The local error is quantified by inserting the exact values <math>\vec x(t_{n-1}),\vec x(t_n),\vec x(t_{n+1})</math> into the iteration and computing the [[Taylor expansion]]s at time <math>t=t_n</math> of the position vector <math>\vec{x}(t\pm\Delta t)</math> in different time directions.
 
:<math>\begin{align}
\vec{x}(t + \Delta t)
&= \vec{x}(t) + \vec{v}(t)\Delta t + \frac{\vec{a}(t) \Delta t^2}{2}
+ \frac{\vec{b}(t) \Delta t^3}{6} + \mathcal{O}(\Delta t^4)\\[0.7em]
\vec{x}(t - \Delta t)
&= \vec{x}(t) - \vec{v}(t)\Delta t + \frac{\vec{a}(t) \Delta t^2}{2}
- \frac{\vec{b}(t) \Delta t^3}{6} + \mathcal{O}(\Delta t^4),\,
\end{align}</math>
where <math>\vec{x}</math> is the position, <math>\vec{v}=\dot{\vec x}</math> the velocity, <math>\vec{a}=\ddot{\vec x}</math> the acceleration, and <math>\vec{b}</math> the [[Jerk (physics)|jerk]] (third derivative of the position with respect to time <math>t</math>).
 
Adding these two expansions gives
:<math>\vec{x}(t + \Delta t) = 2\vec{x}(t) - \vec{x}(t - \Delta t) + \vec{a}(t) \Delta t^2 + \mathcal{O}(\Delta t^4).\,</math>
We can see that the first- and third-order terms from the Taylor expansion cancel out, thus making the Verlet integrator an order more accurate than integration by simple Taylor expansion alone.
 
Note that the acceleration here is computed from the exact solution, <math>\vec a(t)=A(\vec x(t))</math>, whereas in the iteration it is computed at the central iteration point, <math>\vec a_n=A(\vec x_n)</math>. In computing the global error, that is, the distance between the exact solution and the approximation sequence, those two terms do not cancel exactly, which influences the order of the global error.
 
===A simple example===
 
To gain insight into the relation of local and global errors, it is helpful to examine simple examples where both the exact and the approximate solution can be expressed in explicit formulas. The standard example for this task is the [[exponential function]].
 
Consider the linear differential equation <math>\ddot x(t)=w^2x(t)</math> with a constant ''w''. Its exact basis solutions are <math>e^{wt}</math> and <math>e^{-wt}</math>.
 
The Störmer method applied to this differential equation leads to a linear [[recurrence relation]]
:<math>\begin{align}
x_{n+1}-2x_n+x_{n-1}&=h^2w^2x_n\\
\iff \quad
x_{n+1}-2(1+\tfrac12(wh)^2)x_n+x_{n-1}&=0.
\end{align}</math>
It can be solved by finding the roots of its characteristic polynomial
<math>q^2-2(1+\tfrac12(wh)^2)q+1=0</math>. These are
:<math>q_\pm=1+\tfrac12(wh)^2\pm wh\sqrt{1+\tfrac14(wh)^2}</math>.
The basis solutions of the linear recurrence are <math>x_n=q_+^{\;n}</math> and <math>x_n=q_-^{\;n}</math>. To compare them with the exact solutions, Taylor expansions are computed.
:<math>\begin{align}
q_+
&=1+\tfrac12(wh)^2+wh(1+\tfrac18(wh)^2-\tfrac{1}{128}(wh)^4+\mathcal O(h^6))\\[.3em]
&=1+(wh)+\tfrac12(wh)^2+\tfrac18(wh)^3-\tfrac{1}{128}(wh)^5+ \mathcal O(h^7).
\end{align}</math>
Dividing this series by the series of the exponential <math>e^{wh}</math> gives a quotient that starts with <math>1-\tfrac1{24}(wh)^3+\mathcal O(h^5)</math>, so
:<math>\begin{align}
q_+&=(1-\tfrac1{24}(wh)^3+\mathcal O(h^5))e^{wh}\\[.3em]
&=e^{-\frac{1}{24}(wh)^3+\mathcal O(h^5)}\,e^{wh}.
\end{align}</math>
From there it follows that for the first basis solution the error can be computed as
:<math>\begin{align}
x_n=q_+^{\;n}&=e^{-\frac{1}{24}(wh)^2\,wt_n+\mathcal O(h^4)}\,e^{wt_n}\\[.3em]
&=e^{wt_n}\left(1-\tfrac{1}{24}(wh)^2\,wt_n+\mathcal O(h^4)\right)\\[.3em]
&=e^{wt_n}+\mathcal O(h^2t_ne^{wt_n}).
\end{align}</math>
That is, although the local discretization error is of order 4, due to the second order of the differential equation the global error is of order 2, with a constant that grows exponentially in time.
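
This error estimate can be checked numerically; the following illustrative Python snippet (variable names are ad hoc) iterates the recurrence for <math>w=1</math>, <math>h=0.01</math> and compares the result at <math>t_n=10</math> with the exact solution and with the predicted relative error <math>-\tfrac1{24}(wh)^2\,wt_n</math>:

<syntaxhighlight lang="python">
import math

w, h = 1.0, 0.01
q_plus = 1 + 0.5 * (w * h)**2 + w * h * math.sqrt(1 + 0.25 * (w * h)**2)

x_prev, x_curr = 1.0, q_plus          # start exactly on the growing basis solution q_+^n
for n in range(1, 1000):
    x_prev, x_curr = x_curr, 2 * (1 + 0.5 * (w * h)**2) * x_curr - x_prev

t_n = 1000 * h
rel_err = x_curr / math.exp(w * t_n) - 1.0
print(rel_err, -(w * h)**2 * w * t_n / 24)    # measured vs. predicted -(wh)^2 w t_n / 24
</syntaxhighlight>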
 
===Starting the iteration===
Note that at the start of the Verlet iteration, at step <math>n=1</math> (time <math>t = t_1=\Delta t</math>) when computing <math>\vec x_2</math>, one already needs the position vector <math>\vec x_1</math> at time <math>t=t_1</math>. At first sight this could give problems, because the initial conditions are known only at the initial time <math>t_0=0</math>. However, from these the acceleration <math>\vec a_0=A(\vec x_0)</math> is known, and a suitable approximation for the first time step position can be obtained using the [[Taylor polynomial]] of degree two:
:<math>
\vec x_1
=
\vec{x}_0 + \vec{v}_0\,\Delta t + \tfrac12 \vec a_0\,\Delta t^2
=
\vec{x}(\Delta t) + \mathcal{O}(\Delta t^3).
</math>
 
The error on the first time step then is of order <math>\mathcal O(\Delta t^3)</math>. This is not considered a problem because, in a simulation over a large number of time steps, the error on the first time step is only a negligibly small fraction of the total error, which at time <math>t_n</math> is of the order <math>\mathcal O(e^{Lt_n}\Delta t^2)</math>, both for the distance of the position vectors <math>\vec x_n</math> to <math>\vec x(t_n)</math> and for the distance of the divided differences <math>\tfrac{\vec x_{n+1}-\vec x_n}{\Delta t}</math> to <math>\tfrac{\vec x(t_{n+1})-\vec x(t_n)}{\Delta t}</math>. Moreover, to obtain this second-order global error, the initial error needs to be of at least third order.
 
===Computing velocities – Störmer–Verlet method===
The velocities are not explicitly given in the basic Störmer equation, but they are often needed for the calculation of certain physical quantities such as the kinetic energy. This can create technical challenges in [[molecular dynamics]] simulations, because the kinetic energy and instantaneous temperature at time <math>t</math> cannot be calculated for a system until the positions are known at time <math>t + \Delta t</math>. This deficiency can be dealt with either by using the Velocity Verlet algorithm or by estimating the velocity from the position terms using the [[mean value theorem]]:
:<math>
\vec{v}(t)
=
\frac{\vec{x}(t + \Delta t) - \vec{x}(t - \Delta t)}{2\Delta t}
+ \mathcal{O}(\Delta t^2).
</math>
 
Note that this velocity term is a step behind the position term, since this is for the velocity at time <math>t</math>, not <math>t + \Delta t</math>, meaning that <math>\vec v_n=\tfrac{\vec x_{n+1}-\vec x_{n-1}}{2\Delta t}</math> is an order two approximation to <math>\vec{v}(t_n)</math>. With the same argument, but halving the time step, <math>\vec v_{n+1/2}=\tfrac{\vec x_{n+1}-\vec x_n}{\Delta t}</math> is an order two approximation to <math>\vec{v}(t_{n+1/2})</math>, with <math>t_{n+1/2}=t_n+\tfrac12\Delta t</math>.
 
One can shorten the interval to approximate the velocity at time <math>t + \Delta t</math> at the cost of accuracy:
 
:<math>\vec{v}(t + \Delta t) = \frac{\vec{x}(t + \Delta t) - \vec{x}(t)}{\Delta t} + \mathcal{O}(\Delta t).</math>
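
Given a stored trajectory of positions, these velocity estimates are easy to compute after the fact; a sketch using [[NumPy]] (the function names are illustrative) is:

<syntaxhighlight lang="python">
import numpy as np

def velocities_from_positions(x, dt):
    """Central-difference velocities from a trajectory x of shape (n_steps + 1, dim)."""
    v = np.empty_like(x)
    v[1:-1] = (x[2:] - x[:-2]) / (2 * dt)   # O(dt^2) estimate of v(t_n), available one step late
    v[0]    = (x[1] - x[0]) / dt            # one-sided O(dt) estimates at the two ends
    v[-1]   = (x[-1] - x[-2]) / dt
    return v

def half_step_velocities(x, dt):
    """O(dt^2) estimates of v(t_{n+1/2}) at the midpoints of the time grid."""
    return (x[1:] - x[:-1]) / dt
</syntaxhighlight>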
 
== Velocity Verlet ==
A related, and more commonly used, algorithm is the '''Velocity Verlet''' algorithm,<ref>{{cite journal|last=Swope|first=William C.|coauthors=H.C. Andersen, P.H. Berens, K.R. Wilson|title=A computer simulation method for the calculation of equilibrium constants for the formation of physical clusters of molecules: Application to small water clusters|journal=The Journal of Chemical Physics|date=1 January 1982|volume=76|issue=1|pages=648(Appendix)|doi=10.1063/1.442716}}</ref> similar to the [[leapfrog method]], except that the velocity and position are calculated at the same value of the time variable (leapfrog, as its name suggests, does not). This uses a similar approach, but explicitly incorporates the velocity, solving the first-time-step problem of the basic Verlet algorithm:
 
:<math>\vec{x}(t + \Delta t) = \vec{x}(t) + \vec{v}(t)\, \Delta t + \frac{1}{2} \,\vec{a}(t) \Delta t^2  \,</math>
:<math>\vec{v}(t + \Delta t) = \vec{v}(t) + \frac{\vec{a}(t) + \vec{a}(t + \Delta t)}{2} \Delta t  \,</math>
 
It can be shown that the error in the Velocity Verlet is of the same order as in the basic Verlet. Note that the velocity algorithm is not necessarily more memory-consuming, because it is not necessary to keep track of the velocity at every time step during the simulation; only the current velocity is needed. The standard implementation scheme of this algorithm is:
# Calculate: <math>\vec{v}\left(t + \tfrac12\,\Delta t\right) = \vec{v}(t) + \tfrac12\,\vec{a}(t)\,\Delta t\,</math>
# Calculate: <math>\vec{x}(t + \Delta t) = \vec{x}(t) + \vec{v}\left(t + \tfrac12\,\Delta t\right)\, \Delta t\,</math>
# Derive <math>\vec{a}(t + \Delta t)</math> from the interaction potential using <math>\vec{x}(t + \Delta t)</math>
# Calculate: <math>\vec{v}(t + \Delta t) = \vec{v}\left(t + \tfrac12\,\Delta t\right) + \tfrac12\,\vec{a}(t + \Delta t)\Delta t\,</math>.
 
Eliminating the half-step velocity, this algorithm may be shortened to
# Calculate: <math>\vec{x}(t + \Delta t) = \vec{x}(t) + \vec{v}(t)\, \Delta t+\tfrac12 \,\vec{a}(t)\,\Delta t^2</math>
# Derive <math>\vec{a}(t + \Delta t)</math> from the interaction potential using <math>\vec{x}(t + \Delta t)</math>
# Calculate: <math>\vec{v}(t + \Delta t) = \vec{v}(t) + \tfrac12\,\left(\vec{a}(t)+\vec{a}(t + \Delta t)\right)\Delta t\,</math>.
 
Note, however, that this algorithm assumes that acceleration <math>\vec{a}(t + \Delta t)</math> only depends on position <math>\vec{x}(t + \Delta t)</math>, and does not depend on velocity <math>\vec{v}(t + \Delta t)</math>.
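
An illustrative Python sketch of this scheme (the interface is hypothetical, not taken from any particular library) is given below; as noted above, the acceleration function must depend on the position only:

<syntaxhighlight lang="python">
def velocity_verlet(A, x0, v0, dt, n_steps):
    """Velocity Verlet for x'' = A(x); A must not depend on the velocity."""
    x, v = x0, v0
    a = A(x)
    trajectory = [(x, v)]
    for _ in range(n_steps):
        v_half = v + 0.5 * a * dt          # half-step "kick"
        x = x + v_half * dt                # full-step "drift"
        a = A(x)                           # one force evaluation per step
        v = v_half + 0.5 * a * dt          # second half-step "kick"
        trajectory.append((x, v))
    return trajectory
</syntaxhighlight>

The half-step velocity is only a temporary variable, so no extra per-step storage is required.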
 
One might note that the long-term results of '''Velocity Verlet''', and similarly of '''Leapfrog''', are one order better than those of the [[semi-implicit Euler method]]. The algorithms are almost identical up to a shift by half a time step in the velocity. This is easily proven by rotating the above loop to start at step 3 and then noticing that the acceleration term in step 1 could be eliminated by combining steps 2 and 4. The only difference is that the midpoint velocity in '''Velocity Verlet''' is considered the final velocity in the semi-implicit Euler method.
 
The global error of all Euler methods is of order one, whereas the global error of this method is, similar to the [[midpoint method]], of order two. Additionally, if the acceleration indeed results from the forces in a conservative mechanical or [[Hamiltonian system]], the energy of the approximation essentially oscillates around the constant energy of the exactly solved system, with a global error bound again of order one for the semi-implicit Euler method and order two for Verlet-leapfrog. The same goes for all other conserved quantities of the system, such as linear and angular momentum, which are always preserved or nearly preserved in a [[symplectic integrator]].<ref name="Hairer2003" />
 
The Velocity Verlet method is a special case of the [[Newmark-beta method]] with <math>\beta=0</math> and <math>\gamma = 1/2</math>.
 
==Error terms==
 
The local error in position of the Verlet integrator is <math>O(\Delta t^4)</math> as described above, and the local error in velocity is <math>O(\Delta t^2)</math>.
 
The global error in position, in contrast, is <math>O(\Delta t^2)</math> and the global error in velocity is <math>O(\Delta t^2)</math>. These can be derived by noting the following:
 
:<math>\mathrm{error}\bigl(x(t_0 + \Delta t)\bigr) = O(\Delta t^4)</math>
 
and
 
:<math>x(t_0 + 2\Delta t) = 2x(t_0 + \Delta t) - x(t_0) + \Delta t^2 x''(t_0 + \Delta t) + O(\Delta t^4) \, </math>
 
Therefore:
 
:<math>\mathrm{error}\bigl(x(t_0 + 2\Delta t)\bigr) = 2\mathrm{error}\bigl(x(t_0 + \Delta t)\bigr) + O(\Delta t^4) = 3\,O(\Delta t^4)</math>
 
Similarly:
 
:<math>\mathrm{error}\bigl(x(t_0 + 3\Delta t)\bigr) = 6\,O(\Delta t^4)</math>
:<math>\mathrm{error}\bigl(x(t_0 + 4\Delta t)\bigr) = 10\,O(\Delta t^4)</math>
:<math>\mathrm{error}\bigl(x(t_0 + 5\Delta t)\bigr) = 15\,O(\Delta t^4)</math>
 
This can be generalized (it can be shown by induction, but is given here without proof) to:
 
:<math>\mathrm{error}\bigl(x(t_0 + n\Delta t)\bigr) = \frac{n(n+1)}{2}\,O(\Delta t^4)</math>
 
If we consider the global error in position between <math>x(t_0)</math> and <math>x(t_0+T)</math>, where <math>T = n\Delta t</math>, it is clear that:
 
:<math>\mathrm{error}\bigl(x(t_0 + T)\bigr) = \left(\frac{T^2}{2\Delta t^2} + \frac{T}{2\Delta t}\right) O(\Delta t^4)</math>
 
Therefore, the global (cumulative) error over a constant interval of time is given by:

:<math>\mathrm{error}\bigl(x(t_0 + T)\bigr) = O(\Delta t^2)</math>
 
Because the velocity is determined in a non-cumulative way from the positions in the Verlet integrator, the global error in velocity is also <math>O(\Delta t^2)</math>.
 
In molecular dynamics simulations, the global error is typically far more important than the local error, and the Verlet integrator is therefore known as a second-order integrator.
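
The quadratic decay of the global error can also be observed empirically. The following illustrative Python check integrates the harmonic oscillator <math>\ddot x=-x</math>, <math>x(0)=1</math>, <math>\dot x(0)=0</math> over a fixed interval with successively halved time steps; the error against the exact solution <math>\cos t</math> should shrink by roughly a factor of four per halving:

<syntaxhighlight lang="python">
import math

def global_error(dt, T=10.0):
    """Global position error of basic Verlet for x'' = -x, x(0)=1, v(0)=0, at time T."""
    n = int(round(T / dt))
    x_prev, x_curr = 1.0, 1.0 - 0.5 * dt**2        # first step from the Taylor polynomial
    for _ in range(n - 1):
        x_prev, x_curr = x_curr, 2 * x_curr - x_prev - x_curr * dt**2
    return abs(x_curr - math.cos(T))               # exact solution is cos(t)

for dt in (0.1, 0.05, 0.025, 0.0125):
    print(dt, global_error(dt))                    # errors scale roughly like dt^2
</syntaxhighlight>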
 
==Constraints==
{{Main|Constraint algorithm}}
The most notable advantage of Verlet integration over Euler-based integration is the ease with which constraints between particles can be enforced. A constraint is a connection between multiple points that limits them in some way, for example fixing them at a specific distance, keeping them apart, or making sure they remain closer than a specific distance. Physics systems often use springs between the points in order to keep them in the locations they are supposed to be. However, using springs of infinite stiffness between two points usually gives the best results when coupled with the Verlet algorithm. Here is how:
 
:<math>d_1=x_2^{(t)}-x_1^{(t)}\,</math>
 
:<math>d_2=\|d_1\|\,</math>
 
:<math>d_3=\frac{d_2-r}{d_2}\,</math>
 
:<math>x_1^{(t+\Delta t)}=\tilde{x}_1^{(t+\Delta t)}+\frac{1}{2}d_1d_3\,</math>
 
:<math>x_2^{(t+\Delta t)}=\tilde{x}_2^{(t+\Delta t)}-\frac{1}{2}d_1d_3\,</math>
 
The <math>x_i^{(t)}</math> variables are the positions of the points ''i'' at time ''t'', the <math>\tilde{x}_i^{(t)}</math> are the ''unconstrained'' positions (''i.e.'' the point positions before applying the constraints) of the points ''i'' at time ''t'', the ''d'' variables are temporary (they are added for [[wiktionary:optimize|optimization]], as the results of their expressions are needed multiple times), and ''r'' is the distance that is supposed to separate the two points. As written, this is one-dimensional; however, it is easily expanded to two or three dimensions: simply compute the difference (first equation) in each dimension, add the squared differences inside the square root of the second equation ([[Pythagorean theorem]]), and duplicate the last two equations for each dimension. This is where Verlet integration makes constraints simple: instead of, say, applying a velocity to the points that would eventually satisfy the constraint, one can simply position the point where it should be and the Verlet integrator takes care of the rest.
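
A sketch of this projection for two points in two or three dimensions, using NumPy (the function name is illustrative), is:

<syntaxhighlight lang="python">
import numpy as np

def satisfy_distance_constraint(x1, x2, r):
    """Move both endpoints symmetrically so that the distance |x2 - x1| becomes r.

    x1, x2 : unconstrained positions (NumPy arrays of length 2 or 3)
    r      : rest length of the rigid link
    Assumes the two points do not coincide.
    """
    d1 = x2 - x1                 # difference vector
    d2 = np.linalg.norm(d1)      # current distance (Pythagorean theorem in 2D/3D)
    d3 = (d2 - r) / d2           # relative correction factor
    return x1 + 0.5 * d1 * d3, x2 - 0.5 * d1 * d3
</syntaxhighlight>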
 
Problems arise, however, when multiple constraints position the same vertex. One way to solve this is to loop through all the vertices in the simulation in a criss-cross manner, so that at every vertex the constraint relaxation of the previously handled vertex is already incorporated, which speeds up the spread of the information. Either use fine time steps for the simulation, use a fixed number of constraint-solving steps per time step, or solve the constraints until they are met to within a specific tolerance.
 
When approximating the constraints locally to first order, this is the same as the [[Gauss–Seidel method]]. For small [[Matrix (mathematics)|matrices]] it is known that [[LU decomposition]] is faster. Large systems can be divided into clusters (for example, each [[ragdoll physics|ragdoll]]&nbsp;=&nbsp;cluster). Inside clusters the LU method is used, between clusters the [[Gauss–Seidel method]] is used. The matrix code can be reused: the dependency of the forces on the positions can be approximated locally to first order, and the Verlet integration can be made more implicit.
 
For big matrices, sophisticated solvers for sparse matrices exist (see, in particular, the remark "The sizes of these small dense matrices can be tuned to match the sweet spot" in the SuperLU user guide [http://crd.lbl.gov/~xiaoye/SuperLU/superlu_ug.pdf]); any self-made Verlet integration has to compete with these. The usage of (clusters of) matrices is not generally more precise or more stable, but it addresses the specific problem that a force on one vertex of a sheet of cloth should reach every other vertex within a small number of time steps even if a fine grid is used for the cloth [http://www.cs.cmu.edu/~baraff/papers/index.html], rather than propagating as a [[sound wave]].
 
Another way to solve [[holonomic constraints]] is to use [[constraint algorithm]]s.
 
==Collision reactions==
One way of reacting to collisions is to use a penalty-based system, which applies a set force to a point upon contact. The problem with this is that it is very difficult to choose the force to impart: use too strong a force and objects become unstable, too weak and the objects penetrate each other. Another way is to use projection collision reactions, which take the offending point and attempt to move it the shortest distance possible to get it out of the other object.
 
In the latter case, Verlet integration automatically handles the velocity imparted by the collision; however, note that this is not guaranteed to happen in a way that is consistent with [[collision|collision physics]] (that is, changes in momentum are not guaranteed to be realistic). Instead of implicitly changing the velocity term, one would need to explicitly control the final velocities of the colliding objects (by changing the recorded position from the previous time step).
 
The two simplest methods for deciding on a new velocity are perfectly [[elastic collision]]s and [[inelastic collision]]s.  A slightly more complicated strategy that offers more control would involve using the [[coefficient of restitution]].
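
As an illustration of the projection approach combined with position-encoded velocities, the following one-dimensional Python sketch (names and parameters are ad hoc) bounces a point off a floor with a chosen coefficient of restitution; the new velocity is imposed by rewriting the stored previous position:

<syntaxhighlight lang="python">
def bounce_off_floor(x_curr, x_prev, dt, floor=0.0, restitution=0.8):
    """Project a penetrating point back above the floor and reflect its implicit velocity."""
    if x_curr < floor:
        v = (x_curr - x_prev) / dt        # velocity implied by the last two positions
        x_curr = floor                    # projection: shortest move out of the obstacle
        v_new = -restitution * v          # reflected, partially inelastic velocity
        x_prev = x_curr - v_new * dt      # encode the new velocity in the previous position
    return x_curr, x_prev
</syntaxhighlight>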
 
==Applications==
The Verlet equations can also be modified to create a very simple damping effect (for instance, to emulate air friction in computer games):
 
:<math>x(t + \Delta t) = (2-f) x(t) -(1-f) x(t - \Delta t) + a(t)\Delta t^2,\,</math>
 
where ''f'' is a number representing the fraction of the velocity per update that is lost to friction (0–1).
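
A short illustrative Python helper for a single damped update (with ''f'' as above):

<syntaxhighlight lang="python">
def damped_verlet_step(x_curr, x_prev, a, dt, f=0.01):
    # x_{n+1} = (2 - f) x_n - (1 - f) x_{n-1} + a(t_n) dt^2
    return (2 - f) * x_curr - (1 - f) * x_prev + a * dt**2
</syntaxhighlight>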
 
== See also ==
*[[Courant–Friedrichs–Lewy condition]]
*[[Energy drift]]
*[[Symplectic integrator]]
 
== Literature ==
<references>
<ref name="Hairer2003">{{cite journal
| first=Ernst | last=Hairer
| first2=Christian | last2=Lubich
| first3=Gerhard | last3=Wanner
| title=Geometric numerical integration illustrated by the Störmer/Verlet method
| journal = Acta Numerica
| year = 2003
| volume = 12
| pages = 399–450
| url=http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.7.7106
| doi=10.1017/S0962492902000144
}}</ref>
<ref name="Verlet1967">{{cite journal
| first=Loup | last=Verlet
| title=Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard−Jones Molecules
| journal = Physical Review
| year = 1967
| volume = 159
| pages = 98–103
| url=http://link.aps.org/doi/10.1103/PhysRev.159.98
| doi=10.1103/PhysRev.159.98
}}</ref>
</references>
 
==External links==
*[http://verlet.googlecode.com/ Verlet Integration Demo and Code as a Java Applet]
*[http://www.gamasutra.com/resource_guide/20030121/jacobson_pfv.htm Advanced Character Physics by Thomas Jakobsen]
*[http://www.fisica.uniud.it/~ercolessi/md/md/node21.html The Verlet algorithm]
*[http://www.ch.embnet.org/MD_tutorial/pages/MD.Part1.html Theory of Molecular Dynamics Simulations] – bottom of page
 
{{Numerical integrators}}
 
{{DEFAULTSORT:Verlet Integration}}
[[Category:Numerical differential equations]]
