In [[probability theory]], the '''Borel–Cantelli lemma''' is a [[theorem]] about [[sequence]]s of [[event (probability theory)|events]]. In general, it is a result in [[measure theory]]. It is named after [[Émile Borel]] and [[Francesco Paolo Cantelli]], who gave statement to the lemma in the first decades of the 20th century.<ref>E. Borel, "Les probabilités dénombrables et leurs applications arithmétiques", ''Rend. Circ. Mat. Palermo'' (2) '''27''' (1909) pp. 247–271.</ref><ref>F.P. Cantelli, "Sulla probabilità come limite della frequenza", ''Atti Accad. Naz. Lincei'' 26:1 (1917) pp. 39–45.</ref> A related result, sometimes called the '''second Borel–Cantelli lemma''', is a partial [[converse (logic)|converse]] of the first Borel–Cantelli lemma. The lemma states that, under certain conditions, an event will occur with probability zero or with probability one. As such, it is the best known of a class of similar theorems known as zero-one laws. Other examples include the [[Kolmogorov 0-1 law]] and the [[Hewitt–Savage zero-one law]].

==Statement of lemma for probability spaces==
Let (''E''<sub>''n''</sub>) be a sequence of events in some [[probability space]].

The Borel–Cantelli lemma states:<ref>Achim Klenke, ''Probability Theory'', (2006) Springer-Verlag ISBN 978-1-84800-047-6 doi:10.1007/978-1-84800-048-3</ref>
:If the sum of the probabilities of the ''E''<sub>''n''</sub> is finite

::<math>\sum_{n=1}^\infty \Pr(E_n)<\infty,</math>

:then the probability that infinitely many of them occur is 0, that is,

::<math>\Pr\left(\limsup_{n\to\infty} E_n\right) = 0.\,</math>
Here, "lim sup" denotes the [[limit superior]] of the sequence of events, and each event is a set of outcomes. That is, lim sup ''E''<sub>''n''</sub> is the set of outcomes that occur in infinitely many of the events in the sequence (''E''<sub>''n''</sub>). Explicitly,

:<math>\limsup_{n\to\infty} E_n = \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} E_k.</math>

The theorem therefore asserts that if the sum of the probabilities of the events ''E''<sub>''n''</sub> is finite, then the set of outcomes that occur in infinitely many of them has probability zero. Note that no assumption of [[statistical independence|independence]] is required.

===Example===
Suppose (''X''<sub>''n''</sub>) is a sequence of [[random variable]]s with Pr(''X''<sub>''n''</sub> = 0) = 1/''n''<sup>2</sup> for each ''n''. The event that ''X''<sub>''n''</sub> = 0 for infinitely many ''n'' is exactly the limit superior of the events [''X''<sub>''n''</sub> = 0], i.e. the set of outcomes belonging to infinitely many of these events. Since the sum ∑Pr(''X''<sub>''n''</sub> = 0) = ∑1/''n''<sup>2</sup> converges to ''π''<sup>2</sup>/6 ≈ 1.645 < ∞, the Borel–Cantelli lemma implies that this event has probability zero. Hence, the probability that ''X''<sub>''n''</sub> = 0 occurs for infinitely many ''n'' is 0; [[almost surely]] (i.e., with probability 1), ''X''<sub>''n''</sub> is nonzero for all but finitely many ''n''.
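A brief simulation can make this concrete. The sketch below is an illustration of ours rather than part of the standard treatment; it assumes Python and, purely for ease of simulation, draws the ''X''<sub>''n''</sub> independently (the Borel–Cantelli lemma itself needs no independence). It compares the partial sums of Pr(''X''<sub>''n''</sub> = 0) with ''π''<sup>2</sup>/6 and counts how many indices give ''X''<sub>''n''</sub> = 0 in each simulated run; the count is consistently small, as the lemma predicts.

<syntaxhighlight lang="python">
import math
import random

# Illustrative sketch (our assumptions, not from the article): the X_n are drawn
# independently with Pr(X_n = 0) = 1/n^2 for n up to N_MAX.  The lemma itself
# needs no independence; it is assumed here only to make sampling easy.
N_MAX = 10_000
TRIALS = 200

partial_sum = sum(1.0 / n ** 2 for n in range(1, N_MAX + 1))
print(f"partial sum of 1/n^2 up to {N_MAX}: {partial_sum:.6f}"
      f"  (pi^2/6 = {math.pi ** 2 / 6:.6f})")

# For each simulated sequence, count how many indices n have X_n = 0.
counts = [
    sum(1 for n in range(1, N_MAX + 1) if random.random() < 1.0 / n ** 2)
    for _ in range(TRIALS)
]
print("zero counts in the first 10 runs:", counts[:10])
print("largest zero count over all runs:", max(counts))
</syntaxhighlight>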
==Proof<ref name="math.ucdavis.edu">{{cite web|title=Romik, Dan. Probability Theory Lecture Notes, Fall 2009, UC Davis.|url=http://www.math.ucdavis.edu/~romik/teaching/lectures.pdf}}</ref>==

Let (''E''<sub>''n''</sub>) be a sequence of events in some [[probability space]] and suppose that the sum of the probabilities of the ''E''<sub>''n''</sub> is finite. That is, suppose

:<math>\sum_{n=1}^\infty \Pr(E_n)<\infty.</math>
Since the series converges, its tail sums must tend to zero; in particular,

:<math> \inf_{N\geq 1} \sum_{n=N}^\infty \Pr(E_n) = 0. \, </math>

Therefore, it follows that

:<math>
\begin{align}
& {}\qquad \Pr\left(\limsup_{n\to\infty} E_n\right) = \Pr(E_n \text{ infinitely often}) \\[8pt]
& = \Pr\left(\bigcap_{N=1}^\infty \bigcup_{n=N}^\infty E_n\right)
\leq \inf_{N \geq 1} \Pr\left( \bigcup_{n=N}^\infty E_n\right) \leq \inf_{N\geq 1} \sum_{n=N}^\infty \Pr(E_n) = 0.
\end{align}
</math>

Here the first inequality holds because the intersection is contained in each of the unions, and the second follows from countable subadditivity of the probability measure.
==General measure spaces==

For general [[measure space]]s, the Borel–Cantelli lemma takes the following form:

:Let μ be a (positive) [[measure (mathematics)|measure]] on a set ''X'', with [[sigma-algebra|σ-algebra]] ''F'', and let (''A''<sub>''n''</sub>) be a sequence in ''F''. If

::<math>\sum_{n=1}^\infty\mu(A_n)<\infty,</math>

:then

::<math>\mu\left(\limsup_{n\to\infty} A_n\right) = 0.\,</math>

==Converse result==
A related result, sometimes called the '''second Borel–Cantelli lemma''', is a partial converse of the first Borel–Cantelli lemma. The lemma states: If the events ''E''<sub>''n''</sub> are [[statistical independence|independent]] and the sum of the probabilities of the ''E''<sub>''n''</sub> diverges to infinity, then the probability that infinitely many of them occur is 1. That is:

:: If <math>\sum^{\infty}_{n = 1} \Pr(E_n) = \infty</math> and the events <math>(E_n)^{\infty}_{n = 1}</math> are independent, then <math>\Pr(\limsup_{n \rightarrow \infty} E_n) = 1.</math>

The assumption of independence can be weakened to [[pairwise independence]], but in that case the proof is more difficult.
===Example===
The [[infinite monkey theorem]] is a special case of this lemma.
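As a more quantitative, though necessarily finite, illustration, the following sketch is our own (assuming Python and independent events): it contrasts Pr(''E''<sub>''n''</sub>) = 1/''n'', whose probabilities sum to infinity, with Pr(''E''<sub>''n''</sub>) = 1/''n''<sup>2</sup>, whose probabilities have a finite sum. In the first case occurrences typically keep appearing arbitrarily late in the simulated range, in line with the second lemma; in the second case the last occurrence is typically small.

<syntaxhighlight lang="python">
import random

# Illustrative sketch (our assumptions, not from the article): events are sampled
# independently.  A finite run cannot exhibit "infinitely often", but the index of
# the last occurrence tends to keep growing with N when Pr(E_n) = 1/n (divergent
# sum) and to stabilise at a small value when Pr(E_n) = 1/n^2 (convergent sum).
random.seed(0)
N = 1_000_000

def last_occurrence(prob):
    """Return the largest n <= N for which an event with probability prob(n) occurs."""
    last = None
    for n in range(1, N + 1):
        if random.random() < prob(n):
            last = n
    return last

print("last occurrence with Pr(E_n) = 1/n  :", last_occurrence(lambda n: 1.0 / n))
print("last occurrence with Pr(E_n) = 1/n^2:", last_occurrence(lambda n: 1.0 / n ** 2))
</syntaxhighlight>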
The lemma can be applied to give a covering theorem in '''R'''<sup>''n''</sup>. Specifically {{harv|Stein|1993|loc=Lemma X.2.1}}, if ''E''<sub>''j''</sub> is a collection of [[Lebesgue measure|Lebesgue measurable]] subsets of a [[compact set]] in '''R'''<sup>''n''</sup> such that

:<math>\sum_j \mu(E_j) = \infty,</math>

then there is a sequence ''F''<sub>''j''</sub> of translates

:<math>F_j = E_j + x_j \, </math>

such that

:<math>\limsup_{j\to\infty} F_j = \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty F_k = \mathbb{R}^n</math>

apart from a set of measure zero.
==Proof<ref name="math.ucdavis.edu"/>==

Suppose that <math>\sum_{n = 1}^\infty \Pr(E_n) = \infty</math> and the events <math>(E_n)^\infty_{n = 1}</math> are independent. It is sufficient to show that the event that only finitely many of the ''E''<sub>''n''</sub> occur has probability 0, that is, that

: <math> 1-\Pr(\limsup_{n \rightarrow \infty} E_n) = 0. \, </math>
Noting that

:<math>\begin{align}
1 - \Pr(\limsup_{n \rightarrow \infty} E_n) &= 1 - \Pr\left(\{E_n\text{ i.o.}\}\right) = \Pr\left(\{E_n \text{ i.o.}\}^{c}\right) \\
& = \Pr\left(\left(\bigcap_{N=1}^{\infty} \bigcup_{n=N}^{\infty}E_n\right)^{c}\right) = \Pr\left(\bigcup_{N=1}^{\infty} \bigcap_{n=N}^{\infty}E_n^{c}\right)\\
&= \Pr\left(\liminf_{n \rightarrow \infty}E_n^{c}\right)= \lim_{N \rightarrow \infty}\Pr\left(\bigcap_{n=N}^{\infty}E_n^{c}\right),
\end{align}
</math>

it is enough to show that <math>\Pr\left(\bigcap_{n=N}^{\infty}E_n^{c}\right) = 0</math> for every ''N''. Since the <math>(E_n)^{\infty}_{n = 1}</math> are independent:
:<math>\begin{align}
\Pr\left(\bigcap_{n=N}^{\infty}E_n^{c}\right)
&= \prod^{\infty}_{n=N}\Pr\left(E_n^{c}\right) \\
&= \prod^{\infty}_{n=N}\left(1-\Pr\left(E_n\right)\right) \\
&\leq \prod^{\infty}_{n=N}\left(1-\Pr(E_n)+\frac{(\Pr(E_n))^{2}}{2!}-\frac{(\Pr(E_n))^{3}}{3!}+\cdots\right) \\
& = \prod^{\infty}_{n=N}\left(\sum^{\infty}_{m=0}\frac{(-\Pr(E_n))^{m}}{m!}\right) \\
&=\prod^{\infty}_{n=N}\exp\left(-\Pr\left(E_n\right)\right)\\
&=\exp\left(-\sum^{\infty}_{n=N}\Pr(E_n)\right)\\
&= 0.
\end{align}
</math>
This completes the proof. Alternatively, we can see that <math>\Pr\left(\bigcap_{n=N}^{\infty}E_n^{c}\right) = 0</math> by taking the negative logarithm of both sides to get

:<math>
\begin{align}
-\log\left(\Pr\left(\bigcap_{n=N}^{\infty}E_n^{c}\right)\right) &= -\log\left(\prod^{\infty}_{n=N} (1-\Pr(E_n))\right) \\
&= - \sum^{\infty}_{n=N}\log(1-\Pr(E_n)).
\end{align}
</math>

Since −log(1 − ''x'') ≥ ''x'' for all 0 ≤ ''x'' < 1, the right-hand side is at least <math>\sum^{\infty}_{n=N}\Pr(E_n) = \infty</math> by our assumption that <math>\sum^\infty_{n = 1} \Pr(E_n) = \infty</math>. Hence the left-hand side is infinite, which again forces <math>\Pr\left(\bigcap_{n=N}^{\infty}E_n^{c}\right) = 0</math>.
==Counterpart==

Another related result is the so-called '''counterpart of the Borel–Cantelli lemma'''. It is a counterpart of the lemma in the sense that it gives a necessary and sufficient condition for the limsup to have probability 1, replacing the independence assumption by the completely different assumption that <math>(A_n)</math> is monotone increasing for sufficiently large indices. This lemma says:

Let <math>(A_n)</math> be such that <math>A_k \subseteq A_{k+1}</math>, and let <math>\bar A</math> denote the complement of <math>A</math>. Then the probability that infinitely many <math>A_k</math> occur (which, since the events are increasing, is the same as the probability that at least one <math>A_k</math> occurs) is one if and only if there exists a strictly increasing sequence of positive integers <math>(t_k)</math> such that

: <math> \sum_{k} \Pr( A_{t_{k+1}}| \bar A_{t_k}) = \infty. </math>

This simple result can be useful in problems involving hitting probabilities for [[stochastic process]]es, where the choice of the sequence <math>(t_k)</math> is usually the essence.
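As a rough numerical companion, the sketch below is our own toy example (assuming Python), not taken from Bruss's paper: it takes ''A''<sub>''n''</sub> to be the event that a symmetric simple random walk has reached level 5 by time ''n'', which is monotone increasing, and estimates a few of the conditional probabilities <math>\Pr(A_{t_{k+1}} \mid \bar A_{t_k})</math> for <math>t_k = 2^k</math> by Monte Carlo. The estimates do not appear to decay, which is consistent with the series diverging and hence with the walk reaching level 5 almost surely.

<syntaxhighlight lang="python">
import random

# Toy illustration (our assumptions, not from the cited paper): A_n is the event
# that a symmetric simple random walk reaches level LEVEL by time n.  These events
# are monotone increasing, so the counterpart lemma applies.  We estimate the terms
# Pr(A_{t_{k+1}} | complement of A_{t_k}) for t_k = 2^k by Monte Carlo.
LEVEL = 5
CHECKPOINTS = [2 ** k for k in range(1, 11)]   # t_1, ..., t_10
TRIALS = 5_000

def reached_by_checkpoints():
    """Simulate one walk and return, for each checkpoint t_k, whether the walk
    has reached LEVEL by time t_k (a monotone list of booleans)."""
    position, running_max, steps_done, flags = 0, 0, 0, []
    for t in CHECKPOINTS:
        for _ in range(t - steps_done):
            position += random.choice((-1, 1))
            running_max = max(running_max, position)
        steps_done = t
        flags.append(running_max >= LEVEL)
    return flags

samples = [reached_by_checkpoints() for _ in range(TRIALS)]
for k in range(len(CHECKPOINTS) - 1):
    outside = [s for s in samples if not s[k]]          # walks not in A_{t_k}
    if outside:
        estimate = sum(s[k + 1] for s in outside) / len(outside)
        print(f"Pr(A_{CHECKPOINTS[k+1]} | not A_{CHECKPOINTS[k]}) ~= {estimate:.3f}")
</syntaxhighlight>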
==See also==
* [[Lévy's zero-one law]]
* [[Kuratowski convergence]]
==References==
{{More footnotes|date=November 2009}}
{{Reflist}}
* {{Springer|title=Borel–Cantelli lemma|id=B/b017040|first=A.V.|last=Prokhorov}}
* {{citation|first=William|last=Feller|authorlink=William Feller|year=1961|title=An Introduction to Probability Theory and Its Applications|publisher=John Wiley & Sons}}.
* {{citation|title=Harmonic analysis: Real-variable methods, orthogonality, and oscillatory integrals|first=Elias|last=Stein|authorlink=Elias Stein|year=1993|publisher=Princeton University Press}}.
* {{citation|first=F. Thomas|last=Bruss|authorlink=Franz Thomas Bruss|year=1980|title=A counterpart of the Borel Cantelli Lemma|journal=J. Appl. Prob.|volume=17|pages=1094–1101}}.
* Durrett, Rick. ''Probability: Theory and Examples''. Duxbury Advanced Series, Third Edition, Thomson Brooks/Cole, 2005.
==External links==
* [http://planetmath.org/encyclopedia/BorelCantelliLemma.html Planet Math Proof] – a simple proof of the Borel–Cantelli lemma.

{{DEFAULTSORT:Borel-Cantelli lemma}}
[[Category:Measure theory]]
[[Category:Probability theorems]]
[[Category:Covering lemmas]]
[[Category:Lemmas]]