In [[functional analysis]], [[compact operator]]s on [[Hilbert space]]s are a direct extension of matrices: in the Hilbert spaces, they are precisely the closure of [[finite-rank operator]]s in the [[Operator topology|uniform operator topology]]. As such, results from matrix theory can sometimes be extended to compact operators using similar arguments. In contrast, the study of general operators on infinite dimensional spaces often requires a genuinely different approach.
 
For example, the [[spectral theory of compact operators]] on [[Banach space]]s takes a form that is very similar to the [[Jordan canonical form]] of matrices. In the context of Hilbert spaces, a square matrix is unitarily diagonalizable if and only if it is [[normal operator|normal]]. A corresponding result holds for normal compact operators on Hilbert spaces. (More generally, the compactness assumption can be dropped. But, as stated above, the techniques used are less routine.)
 
This article will discuss a few results for compact operators on Hilbert space, starting with general properties before considering subclasses of compact operators.
 
== Some general properties ==
Let ''H'' be a Hilbert space and let ''L''(''H'') denote the bounded operators on ''H''. An operator ''T'' ∈ ''L''(''H'') is a '''compact operator''' if the image of each bounded set under ''T'' is [[Relatively compact subspace|relatively compact]]. We list some general properties of compact operators.
 
If ''X'' and ''Y'' are Hilbert spaces (in fact, ''X'' Banach and ''Y'' normed will suffice), then ''T'': ''X'' → ''Y'' is compact if and only if it is continuous when viewed as a map from ''X'' with the [[Weak convergence (Hilbert space)|weak topology]] to ''Y'' (with the norm topology). (See {{harv|Zhu|2007|loc=Theorem 1.14, p.11}}; in that reference, the uniform boundedness principle applies to a set ''F'' ⊆ ''X'' satisfying sup{|φ(''x'')| : ''x'' ∈ ''F''} < ∞ for every φ ∈ Hom(''X'', ''K''), where ''K'' is the underlying field. The principle applies since Hom(''X'', ''K'') with the norm topology is a Banach space, and the maps ''x**'': Hom(''X'', ''K'') → ''K'' are continuous homomorphisms with respect to this topology.)
 
The family of compact operators is a norm-closed, two-sided, *-ideal in ''L''(''H''). Consequently, a compact operator ''T'' cannot have a bounded inverse if ''H'' is infinite dimensional. If ''ST'' = ''TS'' = ''I'', then the identity operator would be compact, a contradiction.
 
If a sequence of bounded operators ''S<sub>n</sub>'' → ''S'' in the [[strong operator topology]] and ''T'' is compact, then ''S<sub>n</sub>T'' converges to ''ST'' in norm. For example, consider the Hilbert space ''l''<sup>2</sup>('''N'''), with standard basis {''e<sub>n</sub>''}. Let ''P<sub>m</sub>'' be the orthogonal projection on the linear span of {''e''<sub>1</sub> ... ''e<sub>m</sub>''}. The sequence {''P<sub>m</sub>''} converges to the identity operator ''I'' strongly but not uniformly. Define ''T'' by ''Te<sub>n</sub>'' = (1/''n'')<sup>2</sup> · ''e<sub>n</sub>''. ''T'' is compact, and, as claimed above, ''P<sub>m</sub>T'' → ''I T'' = ''T'' in the uniform operator topology: for all ''x'',
 
:<math>\left\| P_m T x - T x \right \| \leq \left( \frac{1}{m+1}\right)^2 \| x \|.</math>
 
Notice each ''P<sub>m</sub>'' is a finite-rank operator. Similar reasoning shows that if ''T'' is compact, then ''T'' is the uniform limit of some sequence of finite-rank operators.
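
This rate of norm convergence can be checked numerically on a finite truncation of ''l''<sup>2</sup>('''N''') (a sketch in NumPy; the truncation size ''N'' = 50 is an arbitrary choice, not part of the argument):

```python
import numpy as np

N = 50                                          # truncate l^2(N) to its first N coordinates
T = np.diag(1.0 / np.arange(1, N + 1) ** 2)     # T e_n = (1/n)^2 e_n

for m in (5, 10, 20):
    P = np.zeros((N, N))
    P[:m, :m] = np.eye(m)                       # projection onto span{e_1, ..., e_m}
    # the operator norm of P_m T - T is the largest discarded diagonal entry
    err = np.linalg.norm(P @ T - T, ord=2)
    assert abs(err - 1.0 / (m + 1) ** 2) < 1e-12
```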
 
By the norm-closedness of the ideal of compact operators, the converse is also true.
 
The quotient C*-algebra of ''L''(''H'') modulo the compact operators is called the [[Calkin algebra]], in which one can consider properties of an operator up to compact perturbation.
 
== Compact self-adjoint operator ==
A bounded operator ''T'' on a Hilbert space ''H'' is said to be [[self-adjoint operator|self-adjoint]] if ''T'' = ''T*'', or equivalently,
 
:<math>\langle T x, y \rangle = \langle x, T y \rangle, \quad x, y \in H.</math>
 
It follows that <''Tx'', ''x''> is real for every ''x'' ∈ ''H'', thus eigenvalues of ''T'', when they exist, are real.  When a closed linear subspace ''L'' of ''H'' is invariant under ''T'', then the restriction of ''T'' to ''L'' is a self-adjoint operator on ''L'', and furthermore, the [[orthogonal complement]] ''L''<sup>&perp;</sup> of ''L'' is also invariant under ''T''.  For example, the space ''H'' can be decomposed as the orthogonal direct sum of two ''T''&ndash;invariant closed linear subspaces: the [[Kernel (linear operator)|kernel]] of ''T'', and the orthogonal complement {{nowrap|(ker ''T'')<sup>&perp;</sup>}} of the kernel (which is equal to the closure of the range of ''T'', for any bounded self-adjoint operator).  These basic facts play an important role in the proof of the spectral theorem below.
 
The classification result for Hermitian {{nowrap|''n'' × ''n''}} matrices is the [[spectral theorem]]: If ''M'' = ''M*'', then ''M'' is unitarily diagonalizable and the diagonalization of ''M'' has real entries. Let ''T'' be a compact self-adjoint operator on a Hilbert space ''H''. We will prove the same statement for ''T'': the operator ''T'' can be diagonalized by an orthonormal set of eigenvectors, each of which corresponds to a real eigenvalue.
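
In the matrix case the statement can be verified directly with a numerical eigensolver (a NumPy sketch; the random 4 × 4 Hermitian matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M = A + A.conj().T          # M = M*, a Hermitian matrix

w, U = np.linalg.eigh(M)    # real eigenvalues w, eigenvectors as columns of U
assert np.allclose(U.conj().T @ U, np.eye(4))       # columns are orthonormal
assert np.allclose(U.conj().T @ M @ U, np.diag(w))  # unitarily diagonalized
assert np.all(np.isreal(w))                         # the diagonal entries are real
```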
 
=== Spectral theorem ===
'''Theorem''' For every compact self-adjoint operator ''T'' on a real or complex Hilbert space ''H'', there exists an [[orthonormal basis]] of ''H'' consisting of eigenvectors of ''T''.  More specifically, the orthogonal complement of the kernel of ''T'' admits either a finite orthonormal basis of eigenvectors of ''T'', or a [[Countable set|countably infinite]] orthonormal basis {''e<sub>n</sub>''} of eigenvectors of ''T'', with corresponding eigenvalues {{nowrap|{''λ<sub>n</sub>''} ⊂ '''R'''}}, such that {{nowrap|''λ<sub>n</sub>'' → 0}}.
 
In other words, a compact self-adjoint operator can be unitarily diagonalized.  This is the spectral theorem.
 
When ''H'' is [[Separable space|separable]], one can mix the basis {''e<sub>n</sub>''} with a [[Countable set|countable]] orthonormal basis for the kernel of ''T'', and obtain an orthonormal basis {''f''<sub>''n''</sub>} for ''H'', consisting of eigenvectors of ''T'' with real eigenvalues {μ<sub>''n''</sub>} such that {{nowrap|μ<sub>''n''</sub> → 0}}.
 
'''Corollary''' For every compact self-adjoint operator ''T'' on a real or complex separable infinite dimensional Hilbert space ''H'', there exists a countably infinite orthonormal basis {''f<sub>n</sub>''} of ''H'' consisting of eigenvectors of ''T'', with corresponding eigenvalues {{nowrap|{μ<sub>''n''</sub>} ⊂ '''R'''}}, such that {{nowrap|μ<sub>''n''</sub> → 0}}.
 
==== The idea ====
Proving the spectral theorem for a Hermitian ''n'' × ''n'' matrix ''T'' hinges on showing the existence of one eigenvector ''x''. Once this is done, Hermiticity implies that both the linear span and orthogonal complement of ''x'' are invariant subspaces of ''T''. The desired result is then obtained by iteration. The existence of an eigenvector can be shown in at least two ways:
 
#One can argue algebraically: The characteristic polynomial of ''T'' has a complex root, therefore ''T'' has an eigenvalue with a corresponding eigenvector. Or,
#The eigenvalues can be characterized variationally: The largest eigenvalue is the maximum on the closed unit ''sphere'' of the function {{nowrap|''f'': '''R'''<sup>2''n''</sup> → '''R'''}} defined by ''f''(''x'') = ''x*Tx'' = <''Tx'', ''x''>.
 
'''Note.''' In the finite dimensional case, part of the first approach works in much greater generality; any square matrix, not necessarily Hermitian, has an eigenvector. This is simply not true for general operators on Hilbert spaces.
 
The spectral theorem for the compact self-adjoint case can be obtained analogously: one finds an eigenvector by extending the second finite-dimensional argument above, then applies induction.  We first sketch the argument for matrices.
 
Since the closed unit sphere ''S'' in '''R'''<sup>2''n''</sup> is compact, and ''f'' is continuous, ''f''(''S'') is a compact subset of the real line, therefore ''f'' attains a maximum on ''S'', at some unit vector ''y''. By [[Lagrange multipliers|Lagrange's multiplier]] theorem, ''y'' satisfies
 
:<math>\nabla f = \nabla \; y^* T y = \lambda \cdot \nabla \; y^* y</math>
 
for some λ. By Hermiticity, {{nowrap|''Ty'' {{=}} λ''y''}}.
 
However, the Lagrange multipliers do not generalize easily to the infinite dimensional case. Alternatively, let ''z'' ∈ '''C'''<sup>''n''</sup> be any vector. Notice that if a unit vector ''y'' maximizes <''Tx'', ''x''> on the unit sphere (or on the unit ball), it also maximizes the [[Rayleigh quotient]]:
 
:<math>g(x) = \frac{\langle Tx, x \rangle}{\|x\|^2}, \qquad 0 \ne x \in \mathbf{C}^n.</math>
 
Consider the function:
 
:<math>\begin{cases} h : \mathbf{R} \to \mathbf{R} \\ h(t) = g(y+tz) \end{cases}</math>
 
By calculus, {{nowrap|''h''′(0) {{=}} 0}}, ''i.e.'',
 
:<math>\begin{align}
h'(0) &= \lim_{t \to 0} \frac{h(t) - h(0)}{t - 0} \\
&= \lim_{t \to 0} \frac{g(y+tz) - g(y)}{t} \\
&= \lim_{t \to 0} \frac{1}{t} \left (\frac{\langle T(y+tz), y+tz \rangle}{\|y+tz\|^2} - \frac{\langle Ty, y \rangle}{\|y\|^2} \right ) \\
&= \lim_{t \to 0} \frac{1}{t} \left (\frac{\langle T(y+tz), y+tz \rangle - \langle Ty, y \rangle}{\|y\|^2} \right ) \\
&= \frac{1}{\|y\|^2} \lim_{t \to 0}  \frac{\langle T(y+tz), y+tz \rangle - \langle Ty, y \rangle}{t} \\
&= \frac{1}{\|y\|^2} \left (\frac{d}{dt} \frac{\langle T (y + t z), y + tz \rangle}{\langle y + tz, y + tz \rangle} \right)(0) \\
&= 0.
\end{align}</math>
 
Define:
 
:<math>m=\frac{\langle Ty, y \rangle}{\langle y, y \rangle}</math>
 
After some algebra the above expression becomes (''Re'' denotes the real part of a complex number)
 
:<math>\Re \left (\langle T y - m y, z \rangle \right) = 0.</math>
 
But ''z'' is arbitrary, therefore {{nowrap|''Ty'' − ''my'' {{=}} 0}}. This is the crux of the proof of the spectral theorem in the matrix case.
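
The variational characterization can be illustrated numerically: the top eigenvector maximizes <''Tx'', ''x''> over the unit sphere and satisfies ''Ty'' = ''my'' (a NumPy sketch with an arbitrary random symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
T = (A + A.T) / 2           # real symmetric matrix

w, V = np.linalg.eigh(T)
y = V[:, -1]                # unit eigenvector for the largest eigenvalue
m = y @ T @ y               # Rayleigh quotient at y

# y satisfies Ty - my = 0, as derived above
assert np.allclose(T @ y - m * y, 0)

# no random unit vector beats the maximizer of <Tx, x>
for _ in range(1000):
    x = rng.standard_normal(5)
    x /= np.linalg.norm(x)
    assert x @ T @ x <= m + 1e-12
```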
 
==== Details ====
'''Claim'''&nbsp;  If ''T'' is a compact self-adjoint operator on a non-zero Hilbert space ''H'' and
 
:<math>m(T) := \sup \bigl\{ |\langle T x, x \rangle| : x \in H, \, \|x\| \le 1 \bigr\},</math>
 
then ''m''(''T'') or −''m''(''T'') is an eigenvalue of ''T''. 
 
If {{nowrap|''m''(''T'') {{=}} 0}}, then ''T'' = 0 by the [[polarization identity]], and this case is clear.  Consider the function
 
:<math>\begin{cases} f : H \to \mathbf{R} \\ f(x) = \langle T x, x \rangle \end{cases}</math>
 
Replacing ''T'' by −''T'' if necessary, one may assume that the supremum of ''f'' on the closed unit ball ''B'' ⊂ ''H'' is equal to {{nowrap|''m''(''T'') > 0}}.  If ''f'' attains its maximum ''m''(''T'') on ''B'' at some unit vector ''y'', then, by the same argument used for matrices, ''y'' is an eigenvector of ''T'', with corresponding eigenvalue {{nowrap|λ {{=}} < λ''y'', ''y''>}} = {{nowrap|<''Ty'', ''y''> {{=}} ''f''(''y'') {{=}} ''m''(''T'')}}.
 
By the [[Banach–Alaoglu theorem]] and the reflexivity of ''H'', the closed unit ball ''B'' is weakly compact.  Also, the compactness of ''T'' means (see above) that ''T'' : ''H'' with the weak topology → ''H'' with the norm topology, is continuous. These two facts imply that ''f'' is continuous on ''B'' equipped with the weak topology, and ''f'' therefore attains its maximum ''m'' on ''B'' at some {{nowrap|''y'' ∈ ''B''}}.  By maximality, ||''y''|| = 1, which in turn implies that ''y'' also maximizes the Rayleigh quotient ''g''(''x'') (see above).  This shows that ''y'' is an eigenvector of ''T'', and ends the proof of the claim.
 
'''Note.''' The compactness of ''T'' is crucial. In general, ''f'' need not be continuous for the weak topology on the unit ball ''B''. For example, let ''T'' be the identity operator, which is not compact when ''H'' is infinite dimensional.  Take any orthonormal sequence {''y<sub>n</sub>''}. Then ''y<sub>n</sub>'' converges to 0 weakly, but lim ''f''(''y<sub>n</sub>'') = 1 ≠ 0 = ''f''(0).
 
Let ''T'' be a compact operator on a Hilbert space ''H''.  A finite (possibly empty) or countably infinite orthonormal sequence  {''e<sub>n</sub>''} of eigenvectors of ''T'', with corresponding non-zero eigenvalues, is constructed by induction as follows. Let ''H''<sub>0</sub> = ''H'' and ''T''<sub>0</sub> = ''T''.  If ''m''(''T''<sub>0</sub>) = 0, then ''T'' = 0 and the construction stops without producing any eigenvector ''e<sub>n</sub>''.  Suppose that orthonormal eigenvectors {{nowrap|''e''<sub>0</sub>, &hellip;, ''e''<sub>''n'' − 1</sub>}} of ''T'' have been found.  Then  {{nowrap|''E<sub>n</sub>'':{{=}} span(''e''<sub>0</sub>, &hellip;, ''e''<sub>''n'' − 1</sub>)}} is invariant under ''T'', and by self-adjointness,  the orthogonal  complement ''H<sub>n</sub>'' of ''E''<sub>''n''</sub> is an invariant subspace of ''T''.  Let ''T<sub>n</sub>'' denote the restriction of ''T'' to ''H<sub>n</sub>''. If ''m''(''T<sub>n</sub>'') = 0, then ''T<sub>n</sub>'' = 0, and the construction stops. Otherwise, by the ''claim'' applied to ''T<sub>n</sub>'', there is a norm one eigenvector ''e<sub>n</sub>'' of ''T'' in ''H''<sub>''n''</sub>, with corresponding non-zero eigenvalue λ<sub>''n''</sub> = {{nowrap|&plusmn; ''m''(''T<sub>n</sub>'')}}. 
 
Let ''F'' =  (span{''e<sub>n</sub>''})<sup>&perp;</sup>, where {''e<sub>n</sub>''} is the finite or infinite sequence constructed by the inductive process; by self-adjointness, ''F'' is  invariant under ''T''.  Let ''S'' denote the restriction of ''T'' to ''F''.  If the process was stopped after finitely many steps, with a last vector ''e''<sub>''m''−1</sub>, then  ''F'' = ''H<sub>m</sub>'' and ''S'' = ''T<sub>m</sub>'' = 0 by construction.  In the infinite case, compactness of ''T'' and the weak convergence of ''e<sub>n</sub>'' to 0 imply that {{nowrap|''Te<sub>n</sub>'' {{=}} λ<sub>''n''</sub>''e<sub>n</sub>'' → 0}},  therefore {{nowrap|λ<sub>''n''</sub> → 0}}. Since ''F'' is contained in ''H<sub>n</sub>'' for every ''n'', it  follows that ''m''(''S'') ≤ ''m''(''T<sub>n</sub>'') = |λ<sub>''n''</sub>| for every ''n'', hence ''m''(''S'') = 0.  This implies again that  {{nowrap|''S'' {{=}} 0}}.
 
The fact that ''S'' = 0 means that ''F'' is contained in the kernel of ''T''.  Conversely, if ''x'' ∈ ker(''T''), then by  self-adjointness, ''x'' is orthogonal to every eigenvector {''e<sub>n</sub>''} with non-zero eigenvalue.  It follows that {{nowrap|''F'' {{=}} ker(''T'')}}, and that {''e<sub>n</sub>''} is an orthonormal basis for the orthogonal complement of the kernel of ''T''.  One can complete the  diagonalization of ''T'' by selecting an orthonormal basis of the kernel.  This proves the spectral theorem.
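
In finite dimensions the inductive construction can be imitated directly, since restricting to the orthogonal complement of an eigenvector amounts to deflating the matrix (a sketch; `eigenvectors_by_deflation` is an illustrative helper, not a standard library routine):

```python
import numpy as np

def eigenvectors_by_deflation(T, tol=1e-10):
    """Mimic the inductive construction: repeatedly extract a unit
    eigenvector whose eigenvalue has modulus m(T_n), then pass to the
    orthogonal complement (realized here by deflating T)."""
    T = T.astype(float).copy()
    pairs = []
    while True:
        w, V = np.linalg.eigh(T)
        k = np.argmax(np.abs(w))          # eigenvalue achieving m(T_n)
        lam, e = w[k], V[:, k]
        if abs(lam) <= tol:               # m(T_n) = 0: construction stops
            return pairs
        pairs.append((lam, e))
        T = T - lam * np.outer(e, e)      # restrict to the complement of e

A = np.diag([3.0, -2.0, 1.0, 0.0])        # kernel = span of the last basis vector
pairs = eigenvectors_by_deflation(A)
assert [round(l) for l, _ in pairs] == [3, -2, 1]   # eigenvalues in order of decreasing modulus
```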
 
A shorter but more abstract proof goes as follows: by [[Zorn's lemma]], select ''U'' to be a maximal subset of ''H'' with the following three properties: all elements of ''U'' are eigenvectors of ''T'', they have norm one, and any two distinct elements of ''U'' are orthogonal.  Let ''F'' be the orthogonal complement of the linear span of ''U''.  If ''F'' ≠ {0}, it is a non-trivial invariant subspace of ''T'', and by the initial claim there must exist a norm one eigenvector ''y'' of ''T'' in ''F''. But then ''U'' &cup; {''y''} contradicts the maximality of ''U''.  It follows that ''F'' = {0}, hence span(''U'') is dense in ''H''.  This shows that ''U'' is an orthonormal basis of ''H'' consisting of eigenvectors of ''T''.
 
=== Functional calculus ===
If ''T'' is compact on an infinite dimensional Hilbert space ''H'', then ''T'' is not invertible, hence σ(''T''), the spectrum of ''T'', always contains 0.  The spectral theorem shows that σ(''T'') consists of the eigenvalues {λ<sub>''n''</sub>} of ''T'', and of 0 (if 0 is not already an eigenvalue). The set σ(''T'') is a compact subset of the real line, and the eigenvalues are dense in σ(''T'').
 
Any spectral theorem can be reformulated in terms of a [[functional calculus]]. In the present context we have:
 
<blockquote>'''Theorem.''' Let ''C''(σ(''T'')) denote the [[C*-algebra]] of continuous functions on σ(''T''). There exists a unique isometric homomorphism {{nowrap|Φ : ''C''(σ(''T'')) → ''L''(''H'')}} such that Φ(1) = ''I'' and, if ''f'' is the identity function ''f''(λ) = λ, then {{nowrap|Φ(''f'') {{=}} ''T''}}. Moreover, {{nowrap|σ(''f''(''T'')) {{=}} ''f''(σ(''T''))}}.</blockquote>
 
The functional calculus map Φ is defined in a natural way: let {''e<sub>n</sub>''} be an orthonormal basis of eigenvectors for ''H'', with corresponding eigenvalues {λ<sub>''n''</sub>}; for {{nowrap|''f'' ∈ ''C''(σ(''T''))}}, the operator Φ(''f''), diagonal with respect to the orthonormal basis {''e<sub>n</sub>''}, is defined by setting
 
:<math>\Phi(f)(e_n) = f(\lambda_n) e_n</math>
 
for every ''n''.  Since  Φ(''f'') is diagonal with respect to an orthonormal basis, its norm is equal to the supremum of the modulus of diagonal coefficients,
 
:<math>\|\Phi(f)\| = \sup_{\lambda_n \in \sigma(T)} |f(\lambda_n)| = \|f\|_{C(\sigma(T))}.</math>
 
The other properties of Φ can be readily verified. Conversely, any homomorphism Ψ satisfying the requirements of the theorem must coincide with Φ when ''f'' is a polynomial. By the [[Stone–Weierstrass theorem|Weierstrass approximation theorem]], polynomial functions are dense in ''C''(σ(''T'')), and it follows that {{nowrap|Ψ {{=}} Φ}}.  This shows that Φ is unique.
 
The more general [[continuous functional calculus]] can be defined for any self-adjoint (or even normal, in the complex case) bounded linear operator on a Hilbert space.  The compact case, described here, is a particularly simple instance of this functional calculus.
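
In finite dimensions Φ can be realized concretely by diagonalizing, applying ''f'' to the eigenvalues, and conjugating back (a NumPy sketch; `phi` is a hypothetical helper name):

```python
import numpy as np

def phi(f, T):
    """Functional calculus for a (finite-dimensional stand-in for a)
    compact self-adjoint operator: Phi(f) e_n = f(lambda_n) e_n in an
    orthonormal eigenbasis."""
    w, U = np.linalg.eigh(T)
    return U @ np.diag(f(w)) @ U.conj().T

A = np.random.default_rng(2).standard_normal((4, 4))
T = (A + A.T) / 2

# Phi is a homomorphism: on the polynomial f(x) = x^2 it returns T @ T
assert np.allclose(phi(lambda x: x ** 2, T), T @ T)
# Phi(1) = I
assert np.allclose(phi(lambda x: np.ones_like(x), T), np.eye(4))
# isometry: ||Phi(f)|| equals the sup of |f| over the eigenvalues
assert np.isclose(np.linalg.norm(phi(np.abs, T), ord=2),
                  np.max(np.abs(np.linalg.eigvalsh(T))))
```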
 
=== Simultaneous diagonalisation ===
Consider a Hilbert space ''H'' (e.g. the finite dimensional '''C'''<sup>''n''</sup>), and a commuting set <math>\mathcal{F}\subseteq\operatorname{Hom}(H,H)</math> of self-adjoint operators. Then, under suitable conditions, this family can be simultaneously (unitarily) diagonalised. ''Viz.'', there exists an orthonormal basis ''Q'' consisting of common eigenvectors for the operators — ''i.e.''
 
:<math>(\forall{q\in Q,T\in\mathcal{F}})~(\exists{\sigma\in\mathbf{C}})~(T-\sigma)q=0</math>
 
<blockquote>'''Lemma.''' Suppose all the operators in <math>\mathcal{F}</math> are compact. Then every closed non-zero <math>\mathcal{F}</math>-invariant sub-space ''S'' ⊆ ''H'' has a common eigenvector for <math>\mathcal{F}</math>.</blockquote>
 
<div style="background-color: #F2F2F2; border: 1px solid #808000; padding: 5px; {{box-shadow}}">'''Proof.''' ''Case I:'' each of the operators has exactly one eigenvalue. Then take any <math>s\in S</math> of unit length. This is a common eigenvector.
 
''Case II:'' there is some operator <math>T\in\mathcal{F}</math> with at least 2 eigenvalues; let <math>0 \neq \alpha \in \sigma(T\upharpoonright S)</math>. Since ''T'' is compact and α is non-zero, <math>S':=\ker(T\upharpoonright S-\alpha)</math> is a finite dimensional (and therefore closed) non-zero <math>\mathcal{F}</math>-invariant subspace: the operators all commute with ''T'', so for <math>T'\in\mathcal{F}</math> and <math>x\in\ker(T\upharpoonright S-\alpha)</math> we have <math>(T-\alpha)(T'x)=T'(Tx)-\alpha T'x=T'((T-\alpha)x)=0</math>. In particular, <math>\dim S'<\dim S</math>. Thus one can argue by induction over dimension, yielding that <math>S'\subseteq S</math> has a common eigenvector for <math>\mathcal{F}</math>.</div>
 
<blockquote>'''Theorem 1.''' If all the operators in <math>\mathcal{F}</math> are compact then the operators can be simultaneously (unitarily) diagonalised.</blockquote>
 
<div style="background-color: #F2F2F2; border: 1px solid #808000; padding: 5px; {{box-shadow}}">'''Proof.''' The following set
 
:<math>\mathbf{P}=\{ A \subseteq H : A \text{ is an orthonormal set of common eigenvectors for } \mathcal{F}\},</math>
 
is partially ordered by inclusion. This clearly has the Zorn property. So taking ''Q'' a maximal member, if ''Q'' is a basis for the whole Hilbert space ''H'', we are done. If this were not the case, then letting <math>S={\langle Q\rangle}^{\bot}</math>, it is easy to see that this would be an <math>\mathcal{F}</math>-invariant non-trivial closed subspace; and thus by the lemma above, therein would lie a common eigenvector for the operators (necessarily orthogonal to ''Q''). But then there would be a proper extension of ''Q'' within '''P'''; a contradiction to its maximality.</div>
 
<blockquote>'''Theorem 2.''' If there is an injective compact operator in <math>\mathcal{F}</math>, then the operators can be simultaneously (unitarily) diagonalised.</blockquote>
 
<div style="background-color: #F2F2F2; border: 1px solid #808000; padding: 5px; {{box-shadow}}">'''Proof.''' Fix <math>T_0\in\mathcal{F}</math> compact injective. Then we have, by the spectral theory of compact symmetric operators on Hilbert spaces:
 
:<math>H=\overline{\bigoplus\nolimits_{\sigma\in\sigma(T_0)} \ker(T_0-\sigma)},</math>
 
where <math>\sigma(T_0)</math> is a discrete, countable set of non-zero real numbers, and all the eigenspaces are finite dimensional. Since <math>\mathcal{F}</math> is a commuting set, all the eigenspaces are invariant under <math>\mathcal{F}</math>. Since the operators restricted to the eigenspaces (which are finite dimensional) are automatically all compact, we can apply Theorem 1 to each of these, and find orthonormal bases ''Q''<sub>σ</sub> for the <math>\ker(T_0-\sigma)</math>. Since ''T''<sub>0</sub> is symmetric, we have that
 
:<math>Q:=\bigcup\nolimits_{\sigma\in\sigma(T_0)} Q_{\sigma}</math>
 
is a (countable) orthonormal set. It is also, by the decomposition we first stated, a basis for ''H''.</div>
 
<blockquote>'''Theorem 3.''' If ''H'' is a finite dimensional Hilbert space, and <math>\mathcal{F}\subseteq\operatorname{Hom}(H,H)</math> a commutative set of operators, each of which is diagonalisable, then the operators can be simultaneously diagonalised.</blockquote>
 
<div style="background-color: #F2F2F2; border: 1px solid #808000; padding: 5px; {{box-shadow}}">'''Proof.''' ''Case I:'' all operators have exactly one eigenvalue. Then any basis for ''H'' will do.
 
''Case II:'' Fix <math>T_0\in\mathcal{F}</math> an operator with at least two eigenvalues, and let <math>P\in\operatorname{Hom}(H,H)^{\times}</math> be such that <math>P^{-1}T_0P</math> is a symmetric operator. Now let α be an eigenvalue of <math>P^{-1}T_0P</math>. Then it is easy to see that both:

:<math>\ker\left(P^{-1}T_0P-\alpha\right), \quad \ker\left(P^{-1}T_0P-\alpha\right)^{\bot}</math>

are non-trivial <math>P^{-1}\mathcal{F}P</math>-invariant subspaces. By induction over dimension, there are linearly independent bases ''Q''<sub>1</sub>, ''Q''<sub>2</sub> for the subspaces, which demonstrate that the operators in <math>P^{-1}\mathcal{F}P</math> can be simultaneously diagonalised on the subspaces. Clearly then <math>P(Q_1\cup Q_2)</math> demonstrates that the operators in <math>\mathcal{F}</math> can be simultaneously diagonalised.</div>
 
Notice we did not have to directly use the machinery of matrices at all in this proof. There are other versions which do.
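
The finite dimensional statement is easy to exercise numerically: two commuting symmetric matrices with a common eigenbasis are simultaneously diagonalised by that basis (a NumPy sketch with arbitrarily chosen eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(3)
Q0, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal matrix
A = Q0 @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q0.T       # distinct eigenvalues
B = Q0 @ np.diag([5.0, 5.0, -1.0, 2.0]) @ Q0.T      # same eigenvectors

assert np.allclose(A @ B, B @ A)                    # the family commutes

# Because A has simple spectrum, any orthonormal eigenbasis of A is
# automatically a common eigenbasis; eigh recovers one such basis Q.
_, Q = np.linalg.eigh(A)
assert np.allclose(Q.T @ A @ Q, np.diag(np.diag(Q.T @ A @ Q)))
assert np.allclose(Q.T @ B @ Q, np.diag(np.diag(Q.T @ B @ Q)))
```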
 
We can strengthen the above to the case where all the operators merely commute with their adjoint; in this case we remove the term "orthogonal" from the diagonalisation. There are weaker results for operators arising from representations, due to the Peter–Weyl theorem. Let ''G'' be a fixed locally compact Hausdorff group, and <math>H=L^2(G)</math> (the space of square integrable measurable functions with respect to the unique-up-to-scale Haar measure on ''G''). Consider the continuous shift action:
 
:<math>\begin{cases} G\times H\to H \\ (gf)(x)=f(g^{-1}x) \end{cases}</math>
 
If ''G'' is compact, then there is a unique decomposition of ''H'' into a countable direct sum of finite dimensional, irreducible, invariant subspaces (this is essentially diagonalisation of the family of operators <math>G\subseteq U(H)</math>). If ''G'' is not compact, but is abelian, then diagonalisation is not achieved, but we get a unique ''continuous'' decomposition of ''H'' into 1-dimensional invariant subspaces.
 
== Compact normal operator ==
The family of Hermitian matrices is a proper subset of matrices that are unitarily diagonalizable. A matrix ''M'' is unitarily diagonalizable if and only if it is normal, i.e. ''M*M'' = ''MM*''. Similar statements hold for compact normal operators.
 
Let ''T'' be compact and ''T*T'' = ''TT*''. Apply the ''Cartesian decomposition'' to ''T'': define
 
:<math>R = \frac{T + T^*}{2}, \quad J = \frac{T - T^*}{2i}.</math>
 
The self-adjoint compact operators ''R'' and ''J'' are called the real and imaginary parts of ''T'' respectively. Since ''T'' is compact, so is ''T*''; consequently ''R'' and ''J'' are compact. Furthermore, the normality of ''T'' implies ''R'' and ''J'' commute. Therefore they can be simultaneously diagonalized, from which follows the claim.
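
The Cartesian decomposition is straightforward to check in finite dimensions (a NumPy sketch; the 3 × 3 normal matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(4)
# a normal (here: unitarily diagonalizable) matrix that is not self-adjoint
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
T = U @ np.diag([1 + 2j, -1j, 0.5]) @ U.conj().T
assert np.allclose(T @ T.conj().T, T.conj().T @ T)   # T is normal

R = (T + T.conj().T) / 2        # real part
J = (T - T.conj().T) / (2j)     # imaginary part
assert np.allclose(R, R.conj().T) and np.allclose(J, J.conj().T)  # both self-adjoint
assert np.allclose(R @ J, J @ R)                     # normality makes them commute
assert np.allclose(R + 1j * J, T)                    # T = R + iJ
```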
 
A [[hyponormal operator|hyponormal compact operator]] (in particular, a [[subnormal operator]]) is normal.
 
== Unitary operator ==
The spectrum of a [[unitary operator]] ''U'' lies on the unit circle in the complex plane; it could be the entire unit circle. However, if ''U'' is identity plus a compact perturbation, ''U'' has only countable spectrum, containing 1 and possibly, a finite set or a sequence tending to 1 on the unit circle. More precisely, suppose {{nowrap|''U'' {{=}} ''I'' + ''C''}} where ''C'' is compact.  The equations {{nowrap|''UU*'' {{=}} ''U*U'' {{=}} ''I''}} and {{nowrap|''C'' {{=}} ''U'' − ''I''}} show that ''C'' is normal.  The spectrum of ''C'' contains 0, and possibly, a finite set or a sequence tending to 0.  Since {{nowrap|''U'' {{=}} ''I'' + ''C''}}, the spectrum of ''U'' is obtained by shifting the spectrum of ''C'' by 1.
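
A minimal finite dimensional illustration, using a diagonal ''C'' whose entries tend to 0 (this particular choice of phases is an assumption made purely for the sketch):

```python
import numpy as np

n = np.arange(1, 50)
theta = 1.0 / n                          # phases tending to 0
C = np.diag(np.exp(1j * theta) - 1)      # normal, with eigenvalues tending to 0
U = np.eye(len(n)) + C                   # U = I + C

assert np.allclose(U @ U.conj().T, np.eye(len(n)))   # U is unitary
assert np.allclose(np.abs(np.diag(U)), 1.0)          # spectrum lies on the unit circle
assert np.allclose(np.diag(U), 1 + np.diag(C))       # spectrum of U = 1 + spectrum of C
```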
 
== Examples ==
* Let ''H'' = [[Lp space|''L''<sup>2</sup>([0, 1])]]. The multiplication operator ''M'' defined by
 
:: <math>(M f)(x) = x f(x), \quad f \in H, \, \, x \in [0, 1]</math>
 
:is a bounded self-adjoint operator on ''H'' that has no eigenvector and hence, by the spectral theorem, cannot be compact.
 
* Let ''K''(''x'', ''y'') be square integrable on [0, 1]<sup>2</sup> and define ''T<sub>K</sub>'' on ''H'' by
 
::<math>(T_K f)(x) = \int_0^1 K(x, y) f(y) \, \mathrm{d} y.</math>
 
:Then ''T<sub>K</sub>'' is compact on ''H''; it is a [[Hilbert–Schmidt operator]].
 
* Suppose that the kernel ''K''(''x'', ''y'') satisfies the Hermiticity condition
 
::<math>K(y, x) = \overline{K(x, y)}, \quad x, y \in [0, 1].</math>
 
:Then ''T<sub>K</sub>'' is compact and self-adjoint on ''H''; if {φ<sub>''n''</sub>} is an orthonormal basis of eigenvectors, with eigenvalues {λ<sub>''n''</sub>}, it can be proved that
 
::<math>\sum \lambda_n^2 < \infty, \ \ K(x, y) \sim \sum \lambda_n \varphi_n(x) \overline{\varphi_n(y)},</math>
 
:where the sum of the series of functions is understood as ''L''<sup>2</sup> convergence for the Lebesgue measure {{nowrap|on [0, 1]<sup>2</sup>}}. [[Mercer's theorem]] gives conditions under which the series converges to ''K''(''x'', ''y'') pointwise, and uniformly {{nowrap|on [0, 1]<sup>2</sup>}}.
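
These formulas can be checked numerically for the kernel ''K''(''x'', ''y'') = min(''x'', ''y''), whose eigenvalues are known to be 1/((''n'' − 1/2)π)<sup>2</sup> (a NumPy sketch using a midpoint-rule Nyström discretization; the grid size is arbitrary):

```python
import numpy as np

N = 400
x = (np.arange(N) + 0.5) / N               # midpoint grid on [0, 1]
K = np.minimum.outer(x, x)                 # Hermitian kernel K(x, y) = min(x, y)

# Nystrom discretization of (T_K f)(x) = \int_0^1 K(x, y) f(y) dy
Tk = K / N
lam = np.linalg.eigvalsh(Tk)

# sum of lambda_n^2 approximates the squared L^2 norm of K, here exactly 1/6
assert abs(np.sum(lam ** 2) - 1.0 / 6.0) < 1e-3
# the largest eigenvalue approximates 4 / pi^2, the top eigenvalue of this kernel
assert abs(lam[-1] - 4.0 / np.pi ** 2) < 1e-3
```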
 
== See also ==
*[[Singular value decomposition#Bounded operators on Hilbert spaces]]. The notion of singular values can be extended from matrices to compact operators.
*[[Decomposition of spectrum (functional analysis)]]. If the compactness assumption is removed, operators need not have countable spectrum in general.
*[[Calkin algebra]]
 
== References ==
*J. Blank, P. Exner, and M. Havlicek, ''Hilbert Space Operators in Quantum Physics'', American Institute of Physics, 1994.
*M. Reed and B. Simon, ''Methods of Modern Mathematical Physics I: Functional Analysis'', Academic Press, 1972.
*{{citation|last=Zhu | first=Kehe | title=Operator Theory in Function Spaces | series=Mathematical surveys and monographs | volume=138 | publisher=American Mathematical Society | year=2007 | isbn=978-0-8218-3965-2}}
 
[[Category:Operator theory]]
[[Category:Hilbert space]]
[[Category:Articles containing proofs]]
