Hoeffding's inequality

In [[probability theory]], '''Hoeffding's inequality''' provides an [[upper bound]] on the [[probability]] that the sum of [[random variables]] deviates from its [[expected value]].
Hoeffding's inequality was proved by [[Wassily Hoeffding]] in 1963.<ref>{{harvtxt|Hoeffding|1963}}</ref>
 
Hoeffding's inequality is a special case of the [[Azuma–Hoeffding inequality]] and of [[McDiarmid's inequality]], and it is more general than the [[Bernstein inequalities in probability theory|Bernstein inequality]], proved by [[Sergei Bernstein]] in 1923.
 
== Special case of Bernoulli random variables ==
 
Hoeffding's inequality can be applied to the important special case of identically distributed [[Bernoulli trial|Bernoulli random variables]], and this is how the inequality is often used in [[combinatorics]] and [[computer science]].
We consider a coin that shows heads with probability <math>p</math> and tails with probability <math>1-p</math>.
We toss the coin <math>n</math> times.
The [[expected value|expected]] number of times the coin comes up heads is <math>p\cdot n</math>.
Furthermore, the probability that the coin comes up heads at most <math>k</math> times is given exactly by the following expression:
 
:<math>\Pr\Big(n \text{ coin tosses yield heads at most } k \text{ times}\Big)= \sum_{i=0}^{k} \binom{n}{i} p^i (1-p)^{n-i}\,.</math>
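This exact tail can also be evaluated numerically. The following is a minimal Python sketch of the sum above; the parameters <math>n=100</math>, <math>p=0.5</math>, <math>k=40</math> are arbitrary choices for illustration:

<syntaxhighlight lang="python">
from math import comb

def binomial_tail(n, p, k):
    """Exact probability that n tosses of a p-coin yield at most k heads."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Illustrative parameters: 100 tosses of a fair coin, at most 40 heads.
print(binomial_tail(100, 0.5, 40))  # approximately 0.0284
</syntaxhighlight>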
 
In the case that <math>k=(p-\epsilon) n</math> for some <math>\epsilon > 0</math>,
Hoeffding's inequality bounds this probability by a term that is exponentially small in <math>\epsilon^2 \cdot n</math>:
:<math>\Pr\Big(n \text{ coin tosses yield heads at most } (p-\epsilon) n \text{ times}\Big)\leq\exp\big(-2\epsilon^2 n\big)\,.</math>
 
Similarly, in the case that <math>k=(p+\epsilon) n</math> for some <math>\epsilon > 0</math>,
Hoeffding's inequality bounds the probability that we see at least <math>\epsilon n</math> more heads than expected:
:<math>\Pr\Big(n \text{ coin tosses yield heads at least } (p+\epsilon) n \text{ times}\Big)\leq\exp\big(-2\epsilon^2 n\big)\,.</math>
 
Hence Hoeffding's inequality implies that the number of heads that we see is concentrated around its mean, with an exponentially small tail:
:<math>\Pr\Big(n \text{ coin tosses yield heads between } (p-\epsilon)n \text{ and } (p+\epsilon)n \text{ times}\Big)\geq 1-2\exp\big(-2\epsilon^2 n\big)\,.</math>
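To make the bound concrete, the following sketch compares the exact one-sided tail with the Hoeffding bound <math>\exp(-2\epsilon^2 n)</math>; the choices <math>n=100</math>, <math>p=0.5</math>, <math>\epsilon=0.1</math> are arbitrary:

<syntaxhighlight lang="python">
from math import comb, exp

n, p, eps = 100, 0.5, 0.1
k = int((p - eps) * n)  # at most (p - eps) * n = 40 heads

exact = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
bound = exp(-2 * eps**2 * n)

print(f"exact tail      = {exact:.4f}")  # approximately 0.0284
print(f"Hoeffding bound = {bound:.4f}")  # exp(-2), approximately 0.1353
</syntaxhighlight>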
 
== General case ==
 
Let
 
:<math>X_1, \dots, X_n \!</math>
 
be [[independent random variables]].
Assume that the <math>X_i</math> are [[almost sure]]ly bounded; that is, assume for <math>1 \leq i \leq n</math> that
 
:<math>\Pr(X_i \in [a_i, b_i]) = 1. \!</math>
 
We define the empirical mean of these variables as
 
:<math>\overline X = \frac{1}{n}(X_1 + \cdots + X_n).</math>
 
Theorem 2 of {{harvtxt|Hoeffding|1963}} proves the inequalities
 
:<math>\Pr(\overline X - \mathrm{E}[\overline X] \geq t) \leq \exp \left( - \frac{2n^2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right),\!</math>
:<math>\Pr(|\overline X - \mathrm{E}[\overline X]| \geq t) \leq 2\exp \left( - \frac{2n^2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right),\!</math>
 
which are valid for positive values of ''t''. Here <math>\mathrm{E}[\overline X]</math> is the [[expected value]] of <math>\overline X</math>.
The inequalities can also be stated in terms of the sum
 
:<math>S = X_1 + \cdots + X_n</math>
 
of the random variables:
 
:<math>\Pr(S - \mathrm{E}[S] \geq t) \leq \exp \left( - \frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right),\!</math>
:<math>\Pr(|S - \mathrm{E}[S]| \geq t) \leq 2\exp \left( - \frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right).\!</math>
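As an illustration (the uniform variables and all parameters below are arbitrary choices, not taken from Hoeffding's paper), the sum form of the one-sided bound can be checked by Monte Carlo simulation:

<syntaxhighlight lang="python">
import random
from math import exp

n, t, trials = 50, 5.0, 100_000
a, b = 0.0, 1.0  # each X_i lies in [a, b] almost surely

# For X_i ~ Uniform(a, b), E[S] = n(a + b)/2 and sum_i (b_i - a_i)^2 = n(b - a)^2.
mean_S = n * (a + b) / 2
bound = exp(-2 * t**2 / (n * (b - a)**2))

hits = sum(
    sum(random.uniform(a, b) for _ in range(n)) - mean_S >= t
    for _ in range(trials)
)
print(f"empirical P(S - E[S] >= t) ~ {hits / trials:.5f}")
print(f"Hoeffding bound            = {bound:.5f}")  # exp(-1), approximately 0.368
</syntaxhighlight>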
 
Note that the inequalities also hold when the <math>X_i</math> have been obtained by sampling without replacement; in this case the random variables are no longer independent. A proof of this statement can be found in Hoeffding's paper. For slightly better bounds in the case of sampling without replacement, see for instance the paper by {{harvtxt|Serfling|1974}}.
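The following sketch illustrates this: it draws samples without replacement from a hypothetical finite population of 0s and 1s (all parameters are arbitrary) and compares the empirical two-sided tail of the sample mean with the corresponding bound <math>2\exp(-2nt^2)</math>, which is the mean form above with <math>b_i - a_i = 1</math>:

<syntaxhighlight lang="python">
import random
from math import exp

population = [0.0, 1.0] * 50      # finite population of 100 values in [0, 1]
n, t, trials = 20, 0.15, 100_000  # draw n values without replacement

pop_mean = sum(population) / len(population)  # 0.5
bound = 2 * exp(-2 * n * t**2)                # two-sided bound on the sample mean

hits = sum(
    abs(sum(random.sample(population, n)) / n - pop_mean) >= t
    for _ in range(trials)
)
print(f"empirical P(|mean - 0.5| >= t) ~ {hits / trials:.4f}")
print(f"Hoeffding bound                = {bound:.4f}")  # approximately 0.813
</syntaxhighlight>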
 
==See also==
*[[Bennett's inequality]]
*[[Chebyshev's inequality]]
*[[Markov's inequality]]
*[[Chernoff bounds]]
*[[Hoeffding's lemma]]
 
==Notes==
{{reflist}}
 
==References==
{{refbegin}}
* {{cite journal
| first1=Robert J. | last1=Serfling
| title=Probability Inequalities for the Sum in Sampling without Replacement
| journal=The Annals of Statistics
| pages=39–48
| year=1974
| ref=harv
| volume=2
| number=1
| doi=10.1214/aos/1176342611}}
* {{cite journal
| first1=Wassily | last1=Hoeffding
| title=Probability inequalities for sums of bounded random variables
| journal=Journal of the American Statistical Association
| pages=13–30
|date=March 1963
| ref=harv
| volume=58
| number=301
| jstor=2282952}}
 
{{refend}}
 
[[Category:Probabilistic inequalities]]
