In mathematics, the '''logarithmic norm''' is a real-valued [[Functional (mathematics)|functional]] on [[Operator (mathematics)|operators]], and is derived from either an [[Inner product spaces|inner product]], a vector norm, or its induced [[operator norm]]. The logarithmic norm was independently introduced by [[Germund Dahlquist]]<ref>Germund Dahlquist, "Stability and error bounds in the numerical integration of ordinary differential equations", Almqvist & Wiksell, Uppsala 1958</ref> and Sergei Lozinskiĭ in 1958, for square [[Matrix (mathematics)|matrices]]. It has since been extended to nonlinear operators and [[unbounded operator]]s as well.<ref>Gustaf Söderlind, "The logarithmic norm. History and modern theory", ''BIT Numerical Mathematics'', '''46(3)''':631-652, 2006</ref> The logarithmic norm has a wide range of applications, in particular in matrix theory, [[differential equation]]s and [[numerical analysis]].
 
==Original definition==
 
Let <math>A</math> be a square matrix and <math>\| \cdot \|</math> an induced matrix norm. The associated logarithmic norm <math>\mu</math> of <math>A</math> is defined by
:<math>\mu(A) = \lim \limits_{h \rightarrow 0^+} \frac{\| I + hA \| - 1}{h}</math>
 
Here <math>I</math> is the [[identity matrix]] of the same dimension as <math>A</math>, and <math>h</math> is a real, positive number. The limit as <math>h\rightarrow 0^-</math> equals <math>-\mu(-A)</math>, and is in general different from the logarithmic norm <math>\mu(A)</math>, as <math>-\mu(-A) \leq \mu(A)</math> for all matrices.
 
The matrix norm <math>\|A\|</math> is always positive if <math>A\neq 0</math>, but the logarithmic norm <math>\mu(A)</math> may also take negative values, e.g. when <math>A</math> is [[Positive-definite matrix|negative definite]]. Therefore, the logarithmic norm does not satisfy the axioms of a norm. The name ''logarithmic norm,'' which does not appear in the original reference, seems to originate from estimating the logarithm of the norm of solutions to the differential equation
:<math>\dot x = Ax.</math>
The maximal growth rate of <math>\log \|x\|</math> is <math>\mu(A)</math>. This is expressed by the differential inequality
:<math>\frac{\mathrm d}{\mathrm d t^+} \log \|x\| \leq \mu(A),</math>
where <math>\mathrm d/\mathrm dt^+</math> is the [[Dini derivative|upper right Dini derivative]]. Using [[logarithmic differentiation]] the differential inequality can also be written
:<math>\frac{\mathrm d\|x\|}{\mathrm d t^+} \leq \mu(A)\cdot \|x\|,</math>
showing its direct relation to [[Gronwall's inequality|Grönwall's lemma]].
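This growth bound can be checked numerically. The sketch below (an illustration, not part of the standard references; the matrix and tolerances are arbitrary choices) uses the Euclidean norm, for which <math>\mu_2(A)</math> is the largest eigenvalue of the symmetric part of <math>A</math>, as given in the example norms below:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

def mu2(A):
    """Logarithmic norm induced by the Euclidean norm:
    largest eigenvalue of the symmetric part of A."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

A = np.array([[-2.0, 1.0],
              [ 0.0, -1.0]])
x0 = np.array([1.0, 1.0])

# The solution of x' = Ax is x(t) = expm(t*A) x0; by the differential
# inequality its norm stays below exp(t*mu2(A)) * ||x0|| for all t >= 0.
for t in np.linspace(0.0, 2.0, 21):
    xt = expm(t * A) @ x0
    assert np.linalg.norm(xt) <= np.exp(t * mu2(A)) * np.linalg.norm(x0) + 1e-12
```

Here <math>\mu_2(A)<0</math>, so the bound is in fact decaying.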
 
==Alternative definitions==
 
If the vector norm is an inner product norm, as in a [[Hilbert space]], then the logarithmic norm is the smallest number <math>\mu(A)</math> such that for all <math>x</math>
:<math> \real\langle x, Ax\rangle \leq \mu(A)\cdot \|x\|^2</math>
 
Unlike the original definition, the latter expression also allows <math>A</math> to be unbounded. Thus [[differential operator]]s too can have logarithmic norms, allowing the use of the logarithmic norm both in algebra and in analysis. The modern, extended theory therefore prefers a definition based on inner products or [[Duality (mathematics)|duality]]. Both the operator norm and the logarithmic norm are then associated with extremal values of [[quadratic form]]s as follows:
:<math> \|A\|^2 = \sup_{x\neq 0}{\frac { \langle Ax, Ax\rangle }{ \langle x,x\rangle }}\,; \qquad  \mu(A) = \sup_{x\neq 0} {\frac {\real\langle x, Ax\rangle }{ \langle x,x \rangle }} </math>
 
==Properties==
 
Basic properties of the logarithmic norm of a matrix include:
# <math> \mu(zI) = \real\,(z) </math>
# <math> \mu(A) \leq \|A\| </math>
# <math> \mu(\gamma A) = \gamma \mu(A)\,</math> for scalar <math>\gamma > 0 </math>
# <math> \mu(A+zI) = \mu(A) + \real\,(z)</math>
# <math> \mu(A + B) \leq \mu(A) + \mu(B) </math>
# <math> \alpha(A) \leq \mu(A)\,</math> where <math>\alpha(A) </math> is the [[Extreme value|maximal]] real part of the [[Eigenvalue, eigenvector and eigenspace|eigenvalues]] of <math>A</math>
# <math> \|\mathrm e^{tA}\| \leq \mathrm e^{t\mu(A)}\, </math> for <math>t \geq 0</math>
# <math> \mu(A) < 0 \, \Rightarrow \, \|A^{-1}\| \leq -1/\mu(A) </math>
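Several of these properties are easy to verify numerically. The following sketch (illustrative only; random matrices and tolerances are our own choices) checks properties 2, 5, 6 and 7 in the Euclidean norm:

```python
import numpy as np
from scipy.linalg import expm

def mu2(A):
    """Logarithmic norm in the Euclidean norm."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

spectral_abscissa = np.max(np.linalg.eigvals(A).real)  # alpha(A)

assert mu2(A) <= np.linalg.norm(A, 2) + 1e-12           # property 2
assert mu2(A + B) <= mu2(A) + mu2(B) + 1e-12            # property 5 (subadditivity)
assert spectral_abscissa <= mu2(A) + 1e-12              # property 6
t = 0.7
assert np.linalg.norm(expm(t * A), 2) <= np.exp(t * mu2(A)) + 1e-9  # property 7
```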
 
==Example logarithmic norms==
 
The logarithmic norm of a matrix can be calculated as follows for the three most common norms. In these formulas, <math>a_{ij}</math> denotes the entry in the <math>i</math>th row and <math>j</math>th column of the matrix <math>A</math>.
 
* <math> \mu_1(A) = \sup \limits_j (\real (a_{jj}) + \sum \limits_{i, i \neq j} |a_{ij}|) </math>
 
* <math> \displaystyle \mu_{2}(A) = \lambda_{\max}\left(\frac{A+A^{\mathrm T}}{2}\right) </math>
 
* <math> \mu_{\infty}(A) = \sup \limits_i (\real (a_{ii}) + \sum \limits_{j, j \neq i} |a_{ij}|) </math>
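These three formulas translate directly into NumPy (an illustrative sketch; the sample matrix is chosen so the column- and row-based values can be read off by hand):

```python
import numpy as np

def mu1(A):
    """Logarithmic norm induced by the 1-norm (column sums)."""
    A = np.asarray(A)
    off = np.abs(A).sum(axis=0) - np.abs(np.diag(A))   # off-diagonal column sums
    return np.max(np.diag(A).real + off)

def mu2(A):
    """Logarithmic norm induced by the Euclidean norm."""
    A = np.asarray(A)
    return np.linalg.eigvalsh((A + A.conj().T) / 2).max()

def muinf(A):
    """Logarithmic norm induced by the infinity-norm (row sums)."""
    A = np.asarray(A)
    off = np.abs(A).sum(axis=1) - np.abs(np.diag(A))   # off-diagonal row sums
    return np.max(np.diag(A).real + off)

A = np.array([[-3.0, 1.0],
              [ 2.0, -4.0]])
# mu1: columns give -3 + 2 = -1 and -4 + 1 = -3, so mu1(A) = -1
# muinf: rows give -3 + 1 = -2 and -4 + 2 = -2, so muinf(A) = -2
```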
 
==Applications in matrix theory and spectral theory==
 
The logarithmic norm is related to the extreme values of the Rayleigh quotient. It holds that
:<math>-\mu(-A) \leq {\frac {x^{\mathrm T}Ax}{x^{\mathrm T}x}} \leq \mu(A),</math>
and both extreme values are taken for some vectors <math>x\neq 0</math>. This also means that every eigenvalue <math>\lambda_k</math> of <math>A</math> satisfies
:<math>-\mu(-A) \leq \real\, \lambda_k \leq \mu(A)</math>.
More generally, the logarithmic norm is related to the [[numerical range]] of a matrix.
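Both enclosures are easy to test by sampling. The sketch below (illustrative; the random matrix and sample counts are arbitrary) checks that Rayleigh quotients and eigenvalue real parts stay inside <math>[-\mu(-A), \mu(A)]</math> in the Euclidean norm:

```python
import numpy as np

def mu2(A):
    return np.linalg.eigvalsh((A + A.T) / 2).max()

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
lo, hi = -mu2(-A), mu2(A)   # lo is the smallest eigenvalue of the symmetric part

# Rayleigh quotients x^T A x / x^T x stay inside [-mu(-A), mu(A)] ...
for _ in range(100):
    x = rng.standard_normal(5)
    q = (x @ A @ x) / (x @ x)
    assert lo - 1e-12 <= q <= hi + 1e-12

# ... and so do the real parts of the eigenvalues.
re_parts = np.linalg.eigvals(A).real
assert np.all(lo - 1e-9 <= re_parts) and np.all(re_parts <= hi + 1e-9)
```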
 
A matrix with <math>-\mu(-A)>0</math> is positive definite, and one with <math>\mu(A)<0</math> is negative definite. Such matrices have [[Invertible matrix|inverses]]. The inverse of a negative definite matrix is bounded by
:<math>\|A^{-1}\|\leq - {\frac {1}{\mu(A)}}.</math>
 
Both the bounds on the inverse and on the eigenvalues hold irrespective of the choice of vector (matrix) norm. Some results only hold for inner product norms, however. For example, if <math>R</math> is a rational function with the property
:<math>\real \, (z)\leq 0 \, \Rightarrow \, |R(z)|\leq 1</math>
then, for inner product norms,
:<math>\mu(A)\leq 0 \, \Rightarrow \, \|R(A)\|\leq 1.</math>
Thus the matrix norm and logarithmic norms may be viewed as generalizing the modulus and real part, respectively, from complex numbers to matrices.
 
==Applications in stability theory and numerical analysis==
 
The logarithmic norm plays an important role in the stability analysis of a continuous dynamical system <math>\dot x = Ax</math>. Its role is analogous to that of the matrix norm for a discrete dynamical system <math>x_{n+1} = Ax_n</math>.
 
In the simplest case, when <math>A</math> is a scalar complex constant <math>\lambda</math>, the discrete dynamical system has stable solutions when <math>|\lambda|\leq 1</math>, while the differential equation has stable solutions when <math>\real\,\lambda\leq 0</math>. When <math>A</math> is a matrix, the discrete system has stable solutions if <math>\|A\|\leq 1</math>. In the continuous system, the solutions are of the form <math>\mathrm e^{tA}x(0)</math>. They are stable if <math>\|\mathrm e^{tA}\|\leq 1</math> for all <math>t\geq 0</math>, which follows from property 7 above, if <math>\mu(A)\leq 0</math>. In the latter case, <math>\|x\|</math> is a [[Lyapunov function]] for the system.
 
[[Runge-Kutta methods]] for the numerical solution of <math>\dot x = Ax</math> replace the differential equation by a discrete equation <math>x_{n+1} = R(hA)\cdot x_n</math>, where the rational function <math>R</math> is characteristic of the method, and <math>h</math> is the time step size. If <math>|R(z)|\leq 1</math> whenever <math>\real\,(z)\leq 0</math>, then a stable differential equation, having <math>\mu(A)\leq 0</math>, will always result in a stable (contractive) numerical method, as <math>\|R(hA)\|\leq 1</math>. Runge-Kutta methods having this property are called A-stable.
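As a concrete illustration (a sketch with our own test matrix, not from the references): the backward Euler method has <math>R(z) = 1/(1-z)</math>, which satisfies <math>|R(z)|\leq 1</math> for <math>\real\,(z)\leq 0</math> and is therefore A-stable, while the forward Euler method has <math>R(z)=1+z</math> and is not. For a stable system with <math>\mu_2(A)=0</math> the difference shows up directly in the norm of the step map:

```python
import numpy as np

def mu2(A):
    return np.linalg.eigvalsh((A + A.T) / 2).max()

# A damped oscillator whose symmetric part has largest eigenvalue 0,
# i.e. mu2(A) = 0: stable, but with no strict contraction to spare.
A = np.array([[ 0.0, 1.0],
              [-1.0, -0.1]])
assert mu2(A) <= 1e-12

h = 0.5
I = np.eye(2)

# Backward Euler: R(hA) = (I - hA)^{-1} is contractive, as A-stability promises.
R_be = np.linalg.inv(I - h * A)
assert np.linalg.norm(R_be, 2) <= 1.0 + 1e-12

# Forward Euler: R(hA) = I + hA amplifies here, despite mu2(A) <= 0.
R_fe = I + h * A
assert np.linalg.norm(R_fe, 2) > 1.0
```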
 
Retaining the same form, the results can, under additional assumptions, be extended to nonlinear systems as well as to [[semigroup]] theory, where the crucial advantage of the logarithmic norm is that it discriminates between forward and reverse time evolution and can establish whether the problem is [[Well-posed problem|well posed]]. Similar results also apply in the stability analysis in [[control theory]], where there is a need to discriminate between positive and negative feedback.
 
==Applications to elliptic differential operators==
 
In connection with differential operators it is common to use inner products and [[integration by parts]]. In the simplest case we consider functions satisfying <math>u(0)=u(1)=0</math> with inner product
:<math>\langle u,v\rangle = \int_0^1 uv\, \mathrm dx.</math>
Then it holds that
:<math>\langle u,u''\rangle = -\langle u',u'\rangle \leq -\pi^2\|u\|^2,</math>
where the equality on the left represents integration by parts, and the inequality to the right is a Sobolev inequality. In the latter, equality is attained for the function <math>\sin\, \pi x</math>, implying that the constant <math>-\pi^2</math> is the best possible. Thus
:<math>\langle u, Au\rangle \leq -\pi^2 \|u\|^2</math>
for the differential operator <math>A=\mathrm d^2/\mathrm dx^2</math>, which implies that
:<math>\mu({\frac {\mathrm d^2}{\mathrm dx^2}}) = -\pi^2.</math>
As an operator satisfying <math>\langle u,Au \rangle > 0</math> is called [[Elliptic operator|elliptic]], the logarithmic norm quantifies the (strong) ellipticity of <math>-\mathrm d^2/\mathrm dx^2</math>. Thus, if <math>A</math> is strongly elliptic, then <math>\mu(-A)<0</math>, and <math>A</math> is invertible given proper data.
 
If a finite difference method is used to solve <math>-u''=f</math>, the problem is replaced by an algebraic equation <math>Tu=f</math>. The matrix <math>T</math> will typically inherit the ellipticity, i.e., <math>-\mu(-T)>0</math>, showing that <math>T</math> is positive definite and therefore invertible.
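This inheritance can be seen with the standard second-difference matrix (an illustrative sketch; the grid size is arbitrary). For <math>T</math> symmetric, <math>-\mu_2(-T)</math> is its smallest eigenvalue, which here is positive and approaches <math>\pi^2</math> as the grid is refined:

```python
import numpy as np

n = 100                        # number of interior grid points
h = 1.0 / (n + 1)
# Standard second-difference approximation of -d^2/dx^2 with u(0)=u(1)=0.
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

lam_min = np.linalg.eigvalsh(T).min()   # equals -mu2(-T), since T is symmetric
assert lam_min > 0                      # T is positive definite, hence invertible
assert abs(lam_min - np.pi**2) < 0.01   # smallest eigenvalue approaches pi^2
```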
 
These results carry over to the [[Poisson's equation|Poisson equation]] as well as to other numerical methods such as the [[Finite element method]].
 
==Extensions to nonlinear maps==
 
For nonlinear operators the operator norm and logarithmic norm are defined in terms of the inequalities
:<math>l(f)\cdot \|u-v\| \leq \|f(u)-f(v)\| \leq L(f)\cdot \|u-v\|,</math>
where <math>L(f)</math> is the least upper bound [[Lipschitz continuity|Lipschitz constant]] of <math>f</math>, and <math>l(f)</math> is the greatest lower bound Lipschitz constant; and
:<math>m(f)\cdot \|u-v\|^2 \leq \langle u-v, f(u)-f(v)\rangle \leq M(f)\cdot \|u-v\|^2,</math>
where <math>u</math> and <math>v</math> are in the domain <math>D</math> of <math>f</math>. Here <math>M(f)</math> is the least upper bound logarithmic Lipschitz constant of <math>f</math>, and <math>m(f)</math> is the greatest lower bound logarithmic Lipschitz constant. It holds that <math>m(f)=-M(-f)</math> (compare above) and, analogously, <math>l(f)=L(f^{-1})^{-1}</math>, where <math>L(f^{-1})</math> is defined on the image of <math>f</math>.
 
For nonlinear operators that are Lipschitz continuous, it further holds that
:<math>M(f) = \lim_{h\rightarrow 0^+}{\frac {L(I+hf)-1}{h}}.</math>
If <math>f</math> is differentiable and its domain <math>D</math> is convex, then
:<math>L(f) = \sup_{x\in D} \|f'(x)\| </math>  and  <math> \displaystyle M(f) = \sup_{x\in D} \mu(f'(x)).</math>
Here <math>f'(x)</math> is the [[Jacobian matrix and determinant|Jacobian matrix]] of <math>f</math>, linking the nonlinear extension to the matrix norm and logarithmic norm.
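The relation <math>M(f) = \sup_x \mu(f'(x))</math> can be illustrated with a small example. The map <math>f</math> below is hypothetical, chosen only so that <math>\sup_x \mu_2(f'(x))</math> can be computed by hand (the symmetric part of the Jacobian has off-diagonal entry <math>(\cos x_2 - \sin x_1)/2</math>, of magnitude at most 1):

```python
import numpy as np

def f(x):
    # Hypothetical smooth nonlinear map, for illustration only.
    return np.array([-2.0 * x[0] + np.sin(x[1]),
                     -3.0 * x[1] + np.cos(x[0])])

def jac(x):
    return np.array([[-2.0, np.cos(x[1])],
                     [-np.sin(x[0]), -3.0]])

def mu2(A):
    return np.linalg.eigvalsh((A + A.T) / 2).max()

# Analytic value of sup_x mu2(f'(x)): largest eigenvalue of
# [[-2, c], [c, -3]] over |c| <= 1, i.e. (-5 + sqrt(5))/2 < 0.
M_bound = (-5.0 + np.sqrt(5.0)) / 2.0

rng = np.random.default_rng(2)
assert max(mu2(jac(x)) for x in rng.uniform(-3, 3, size=(500, 2))) <= M_bound + 1e-12

# The one-sided Lipschitz inequality <u-v, f(u)-f(v)> <= M(f) ||u-v||^2.
for _ in range(200):
    u, v = rng.uniform(-3, 3, size=(2, 2))
    d = u - v
    assert d @ (f(u) - f(v)) <= M_bound * (d @ d) + 1e-9
```

Since <math>M(f)<0</math> here, the map is uniformly monotone in the sense of the next paragraph.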
 
An operator having either <math>m(f)>0</math> or <math>M(f)<0</math> is called uniformly monotone. An operator satisfying <math>L(f)<1</math> is called [[Contraction mapping|contractive]]. This extension offers many connections to fixed point theory, and critical point theory.
 
The theory becomes analogous to that of the logarithmic norm for matrices, but is more complicated as the domains of the operators need to be given close attention, as in the case with unbounded operators. Property 8 of the logarithmic norm above carries over, independently of the choice of vector norm, and it holds that
:<math>M(f)<0\,\Rightarrow\,L(f^{-1})\leq -{\frac {1}{M(f)}},</math>
which quantifies the [[Browder-Minty theorem|Uniform Monotonicity Theorem]] due to Browder & Minty (1963).
 
==References==
<references />
 
[[Category:Matrix theory]]
