{{Infobox probability distribution 2
| name        = Geometric
| type        = mass
| pdf_image  = [[File:geometric pmf.svg|450px]]
| cdf_image  = [[File:geometric cdf.svg|450px]]
| parameters  = <math>0< p \leq 1</math> success probability ([[real number|real]])
| support    = <math>k \in \{1,2,3,\dots\}\!</math>
| pdf        = <math>(1 - p)^{k-1}\,p\!</math>
| cdf        = <math>1-(1 - p)^k\!</math>
| mean        = <math>\frac{1}{p}\!</math>
| median      = <math>\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil\!</math> (not unique if <math>-1/\log_2(1-p)</math> is an integer)
| mode        = <math>1</math>
| variance    = <math>\frac{1-p}{p^2}\!</math>
| skewness    = <math>\frac{2-p}{\sqrt{1-p}}\!</math>
| kurtosis    = <math>6+\frac{p^2}{1-p}\!</math>
| entropy    = <math>\tfrac{-(1-p)\log_2 (1-p) - p \log_2 p}{p}\!</math>
| mgf        = <math>\frac{pe^t}{1-(1-p) e^t}\!</math>, <br> for <math>t<-\ln(1-p)\!</math>
| char        = <math>\frac{pe^{it}}{1-(1-p)\,e^{it}}\!</math>
| parameters2 = <math>0< p \leq 1</math> success probability ([[real number|real]])
| support2    = <math>k \in \{0,1,2,3,\dots\}\!</math>
| pdf2        = <math>(1 - p)^{k}\,p\!</math>
| cdf2        = <math>1-(1 - p)^{k+1}\!</math>
| mean2      = <math>\frac{1-p}{p}\!</math>
| median2    = <math>\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil\! - 1</math> (not unique if <math>-1/\log_2(1-p)</math> is an integer)
| mode2      = <math>0</math>
| variance2  = <math>\frac{1-p}{p^2}\!</math>
| skewness2  = <math>\frac{2-p}{\sqrt{1-p}}\!</math>
| kurtosis2  = <math>6+\frac{p^2}{1-p}\!</math>
| entropy2    = <math>\tfrac{-(1-p)\log_2 (1-p) - p \log_2 p}{p}\!</math>
| mgf2        = <math>\frac{p}{1-(1-p)e^t}\!</math>
| char2      = <math>\frac{p}{1-(1-p)\,e^{it}}\!</math>
}}
In [[probability theory]] and [[statistics]], the '''geometric distribution''' is either of two [[discrete probability distribution]]s:
 
* The probability distribution of the number ''X'' of [[Bernoulli trial]]s needed to get one success, supported on the set&nbsp;{&nbsp;1,&nbsp;2,&nbsp;3,&nbsp;...}
 
* The probability distribution of the number ''Y''&nbsp;=&nbsp;''X''&nbsp;−&nbsp;1 of failures before the first success, supported on the set&nbsp;{&nbsp;0,&nbsp;1,&nbsp;2,&nbsp;3,&nbsp;...&nbsp;}
 
Which of these one calls "the" geometric distribution is a matter of convention and convenience.
 
These two different geometric distributions should not be confused with each other. Often, the name ''shifted'' geometric distribution is adopted for the former one (distribution of the number ''X''); however, to avoid ambiguity, it is wise to indicate which is intended by mentioning the support explicitly.
 
The geometric distribution gives the probability that the first success occurs on the ''k''th of a sequence of independent trials, each with success probability ''p''. That is, the probability that the ''k''th trial is the first success is
 
:<math>\Pr(X = k) = (1-p)^{k-1}\,p\,</math>
 
for ''k'' = 1, 2, 3, ....
 
The above form of the geometric distribution is used for modeling the number of trials until the first success. By contrast, the following form of the geometric distribution is used for modeling the number of failures before the first success:
 
:<math>\Pr(Y=k) = (1 - p)^k\,p\,</math>
 
for&nbsp;''k''&nbsp;=&nbsp;0,&nbsp;1,&nbsp;2,&nbsp;3,&nbsp;....
 
In either case, the sequence of probabilities is a [[geometric sequence]].
 
For example, suppose an ordinary [[dice|die]] <!-- "die" is the correct singular form of the plural "dice." --> is thrown repeatedly until the first time a "1" appears.  The probability distribution of the number of times it is thrown is supported on the infinite set {&nbsp;1,&nbsp;2,&nbsp;3,&nbsp;...&nbsp;} and is a geometric distribution with ''p''&nbsp;=&nbsp;1/6.
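 
As an illustration, the following Python sketch (the function name and sample size are arbitrary choices) simulates this experiment and compares the empirical results with the values the geometric distribution predicts:
 
<syntaxhighlight lang="python">
import random

def trials_until_success(p):
    """Number of Bernoulli(p) trials up to and including the first success."""
    k = 1
    while random.random() >= p:   # each comparison succeeds with probability p
        k += 1
    return k

# Throw a die until the first "1" appears (p = 1/6), many times over.
samples = [trials_until_success(1/6) for _ in range(100_000)]

print(sum(samples) / len(samples))                        # empirical mean, near 1/p = 6
k = 3
print(sum(1 for s in samples if s == k) / len(samples))   # near (5/6)**(k-1) * (1/6) ≈ 0.116
</syntaxhighlight>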
 
==Moments and cumulants==
 
The [[expected value]] of a geometrically distributed [[random variable]] ''X'' is 1/''p'' and the [[variance]] is (1&nbsp;&minus;&nbsp;''p'')/''p''<sup>2</sup>:
 
:<math>\mathrm{E}(X) = \frac{1}{p},
\qquad\mathrm{var}(X) = \frac{1-p}{p^2}.</math>
 
Similarly, the expected value of the geometrically distributed random variable ''Y'' (where Y corresponds to the pmf listed in the right column) is (1&nbsp;&minus;&nbsp;''p'')/''p'', and its variance is (1&nbsp;&minus;&nbsp;''p'')/''p''<sup>2</sup>:
 
:<math>\mathrm{E}(Y) = \frac{1-p}{p},
\qquad\mathrm{var}(Y) = \frac{1-p}{p^2}.</math>
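 
For instance, in the die example above (''p''&nbsp;=&nbsp;1/6), the expected number of throws until the first "1" is E(''X'')&nbsp;=&nbsp;6, with variance (1&nbsp;&minus;&nbsp;1/6)/(1/6)<sup>2</sup>&nbsp;=&nbsp;30.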
 
Let ''μ'' = (1&nbsp;&minus;&nbsp;''p'')/''p'' be the expected value of ''Y''.  Then the [[cumulant]]s <math>\kappa_n</math> of the probability distribution of ''Y'' satisfy the recursion
 
:<math>\kappa_{n+1} = \mu(\mu+1) \frac{d\kappa_n}{d\mu}.</math>
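 
As a check, the first cumulant is <math>\kappa_1 = \mu</math>, so the recursion gives <math>\kappa_2 = \mu(\mu+1)\,\tfrac{d\kappa_1}{d\mu} = \mu(\mu+1) = \tfrac{1-p}{p}\cdot\tfrac{1}{p} = \tfrac{1-p}{p^2}</math>, in agreement with the variance above.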
 
''Outline of proof:'' That the expected value is (1&nbsp;&minus;&nbsp;''p'')/''p'' can be shown in the following way. Let ''Y'' be as above. Then
 
: <math>
\begin{align}
\mathrm{E}(Y) & {} =\sum_{k=0}^\infty (1-p)^k p\cdot k \\
& {} =p\sum_{k=0}^\infty(1-p)^k k \\
& {} = p\left[\frac{d}{dp}\left(-\sum_{k=0}^\infty (1-p)^k\right)\right](1-p) \\
& {} =-p(1-p)\frac{d}{dp}\frac{1}{p}=\frac{1-p}{p}.
\end{align}
</math>
 
(The interchange of summation and differentiation is justified by the fact that convergent [[power series]] [[uniform convergence|converge uniformly]] on [[compact space|compact]] subsets of the set of points where they converge.)
 
==Parameter estimation==
 
For both variants of the geometric distribution, the parameter ''p'' can be estimated by equating the expected value with the [[sample mean]]. This is the [[method of moments (statistics)|method of moments]], which in this case happens to yield [[maximum likelihood]] estimates of ''p''.{{cn|date=May 2012}}
 
Specifically, for the first variant let ''k''&nbsp;=&nbsp;''k''<sub>1</sub>,&nbsp;...,&nbsp;''k''<sub>''n''</sub> be a [[sample (statistics)|sample]] where ''k''<sub>''i''</sub>&nbsp;≥&nbsp;1 for ''i''&nbsp;=&nbsp;1,&nbsp;...,&nbsp;''n''. Then ''p'' can be estimated as
 
:<math>\widehat{p} = \left(\frac1n \sum_{i=1}^n k_i\right)^{-1} = \frac{n}{\sum_{i=1}^n k_i }. \!</math>
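 
Both estimators are one-line computations. The following Python sketch (function names and sample values are illustrative) implements the estimator above, together with the corresponding estimator for the failures variant described below:
 
<syntaxhighlight lang="python">
def estimate_p_trials(ks):
    """MLE of p from trial counts k_i >= 1 (number of trials until success)."""
    return len(ks) / sum(ks)

def estimate_p_failures(ks):
    """MLE of p from failure counts k_i >= 0 (failures before first success)."""
    return len(ks) / (len(ks) + sum(ks))

print(estimate_p_trials([3, 1, 7, 2, 4]))    # 5/17 ≈ 0.294
print(estimate_p_failures([2, 0, 6, 1, 3]))  # 5/17 ≈ 0.294 (same data shifted by 1)
</syntaxhighlight>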
 
In [[Bayesian inference]], the [[Beta distribution]] is the [[conjugate prior]] distribution for the parameter ''p''.  If this parameter is given a Beta(''α'',&nbsp;''β'') [[prior distribution|prior]], then the [[posterior distribution]] is{{cn|date=May 2012}}
 
:<math>p \sim \mathrm{Beta}\left(\alpha+n,\ \beta+\sum_{i=1}^n (k_i-1)\right). \!</math>
 
The posterior mean E[''p''] approaches the maximum likelihood estimate <math>\widehat{p}</math> as ''α'' and ''β'' approach zero.
 
In the alternative case, let ''k''<sub>1</sub>,&nbsp;...,&nbsp;''k''<sub>''n''</sub> be a sample where ''k''<sub>''i''</sub>&nbsp;≥&nbsp;0 for ''i''&nbsp;=&nbsp;1,&nbsp;...,&nbsp;''n''.  Then ''p'' can be estimated as
 
:<math>\widehat{p} = \left(1 + \frac1n \sum_{i=1}^n k_i\right)^{-1}. \!</math>
 
The posterior distribution of ''p'' given a Beta(''α'',&nbsp;''β'') prior is{{cn|date=May 2012}}
 
:<math>p \sim \mathrm{Beta}\left(\alpha+n,\ \beta+\sum_{i=1}^n k_i\right). \!</math>
 
Again the posterior mean E[''p''] approaches the maximum likelihood estimate <math>\widehat{p}</math> as ''α'' and ''β'' approach zero.
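 
A minimal sketch of this Bayesian update, using SciPy's beta distribution (the sample and the uniform prior are illustrative choices):
 
<syntaxhighlight lang="python">
from scipy import stats

ks = [2, 0, 6, 1, 3]        # hypothetical failure counts (second variant)
n = len(ks)
alpha, beta = 1.0, 1.0      # Beta(1, 1), i.e. a uniform prior on p

posterior = stats.beta(alpha + n, beta + sum(ks))
print(posterior.mean())            # posterior mean, 6/19 ≈ 0.316
print(posterior.interval(0.95))    # central 95% credible interval for p
</syntaxhighlight>
 
With the prior parameters near zero, the posterior mean approaches the maximum likelihood estimate, here 5/17&nbsp;≈&nbsp;0.294.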
 
== Other properties ==
 
* The [[probability-generating function]]s of ''X'' and ''Y'' are, respectively,
::<math>
\begin{align}
G_X(s) & = \frac{s\,p}{1-s\,(1-p)}, \\[10pt]
G_Y(s) & = \frac{p}{1-s\,(1-p)}, \quad |s| < (1-p)^{-1}.
\end{align}
</math>
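:As a check, differentiating at <math>s=1</math> recovers the mean of ''X'': <math>G_X'(s) = \frac{p}{(1-s\,(1-p))^2}</math>, so <math>\mathrm{E}(X) = G_X'(1) = \frac{p}{p^2} = \frac{1}{p}</math>.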
 
* Like its continuous analogue (the [[exponential distribution]]), the geometric distribution is [[memorylessness|memoryless]]: if an experiment is repeated until the first success, then, given that the first success has not yet occurred, the conditional probability distribution of the number of additional trials needed does not depend on how many failures have already been observed.  The die one throws or the coin one tosses does not have a "memory" of these failures.  The geometric distribution is the only memoryless discrete distribution.
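:Formally, since <math>\Pr(X>k) = (1-p)^k</math> for the variant on {1,&nbsp;2,&nbsp;3,&nbsp;...}, for all ''m'',&nbsp;''n''&nbsp;≥&nbsp;0,
::<math>\Pr(X > m+n \mid X > m) = \frac{(1-p)^{m+n}}{(1-p)^m} = (1-p)^n = \Pr(X > n).</math>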
 
* Among all discrete probability distributions supported on {1,&nbsp;2,&nbsp;3,&nbsp;...&nbsp;} with given expected value&nbsp;μ, the geometric distribution ''X'' with parameter ''p''&nbsp;=&nbsp;1/μ is the one with the [[maximum entropy probability distribution|largest entropy]].{{cn|date=May 2012}}
 
* The geometric distribution of the number ''Y'' of failures before the first success  is [[infinite divisibility (probability)|infinitely divisible]], i.e., for any positive integer ''n'', there exist independent identically distributed random variables ''Y''<sub>1</sub>,&nbsp;...,&nbsp;''Y''<sub>''n''</sub> whose sum has the same distribution that ''Y'' has.  These will not be geometrically distributed unless ''n''&nbsp;=&nbsp;1; they follow a [[negative binomial distribution]].
 
* The decimal digits of the geometrically distributed random variable ''Y'' are a sequence of [[statistical independence|independent]] (and ''not'' identically distributed) random variables.{{cn|date=May 2012}}  For example, the <!-- "hundreds" is correct; "hundredth" is wrong -->hundreds<!-- "hundreds" is correct; "hundredth" is wrong --> digit ''D'' has this probability distribution:
::<math>\Pr(D=d) = {q^{100d} \over 1 + q^{100} + q^{200} + \cdots + q^{900}},</math>
:where ''q''&nbsp;=&nbsp;1&nbsp;&minus;&nbsp;''p'', and similarly for the other digits, and, more generally, similarly for [[numeral system]]s with other bases than 10.  When the base is 2, this shows that a geometrically distributed random variable can be written as a sum of independent random variables whose probability distributions are [[indecomposable distribution|indecomposable]].
 
* [[Golomb coding]] is the optimal [[prefix code]]{{clarify|date=May 2012}} for the geometric discrete distribution.{{cn|date=May 2012}}
 
==Related distributions==
 
* The geometric distribution ''Y'' is a special case of the [[negative binomial distribution]], with ''r''&nbsp;=&nbsp;1. More generally, if ''Y''<sub>1</sub>,&nbsp;...,&nbsp;''Y''<sub>''r''</sub> are [[statistical independence|independent]] geometrically distributed variables with parameter&nbsp;''p'', then the sum
 
::<math>Z = \sum_{m=1}^r Y_m</math>
 
:follows a negative binomial distribution with parameters ''r''&nbsp;and&nbsp;''p''.<ref>Pitman, Jim. ''Probability'' (1993 edition). Springer. p. 372.</ref>
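:This can be seen from the probability-generating function given above: the sum has pgf <math>G_Y(s)^r = \left(\frac{p}{1-s\,(1-p)}\right)^{\!r}</math>, which is the pgf of a negative binomial distribution with parameters ''r'' and ''p''.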
 
* If ''Y''<sub>1</sub>,&nbsp;...,&nbsp;''Y''<sub>''r''</sub> are independent geometrically distributed variables (with possibly different success parameters ''p''<sub>''m''</sub>), then their [[minimum]]
 
::<math>W = \min_{m \in \{1, \dots, r\}} Y_m\,</math>
 
:is also geometrically distributed, with parameter <math>p = 1-\prod_m(1-p_{m}).</math>{{cn|date=May 2012}}
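:This follows from <math>\Pr(Y_m \ge k) = (1-p_m)^k</math> and independence: <math>\Pr(W \ge k) = \prod_m (1-p_m)^k = \left(\prod_m (1-p_m)\right)^{\!k} = (1-p)^k.</math>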
 
* Suppose 0&nbsp;<&nbsp;''r''&nbsp;<&nbsp;1, and for ''k''&nbsp;=&nbsp;1,&nbsp;2,&nbsp;3,&nbsp;... the random variable ''X''<sub>''k''</sub> has a [[Poisson distribution]] with expected value ''r''<sup>&nbsp;''k''</sup>/''k''.  Then
 
::<math>\sum_{k=1}^\infty k\,X_k</math>
 
:has a geometric distribution taking values in the set {0,&nbsp;1,&nbsp;2,&nbsp;...}, with expected value ''r''/(1&nbsp;&minus;&nbsp;''r'').{{cn|date=May 2012}}
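:This can be verified with probability-generating functions: by independence,
::<math>\mathrm{E}\left(s^{\sum_k k X_k}\right) = \prod_{k=1}^\infty \exp\left(\tfrac{r^k}{k}(s^k - 1)\right) = \exp\left(\ln(1-r) - \ln(1-rs)\right) = \frac{1-r}{1-rs},</math>
:which is the pgf of a geometric distribution on {0,&nbsp;1,&nbsp;2,&nbsp;...} with ''p''&nbsp;=&nbsp;1&nbsp;&minus;&nbsp;''r''.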
 
* The [[exponential distribution]] is the continuous analogue of the geometric distribution.  If ''X'' is an exponentially distributed random variable with parameter&nbsp;λ, then
 
::<math>Y = \lfloor X \rfloor,</math>
 
: where <math>\lfloor \quad \rfloor</math> is the [[Floor and ceiling functions|floor]] (or greatest integer) function, is a geometrically distributed random variable with parameter ''p''&nbsp;=&nbsp;1&nbsp;&minus;&nbsp;''e''<sup>&minus;''&lambda;''</sup> (thus ''&lambda;''&nbsp;=&nbsp;&minus;ln(1&nbsp;&minus;&nbsp;''p'')<ref>http://www.wolframalpha.com/input/?i=inverse+p+%3D+1+-+e^-l</ref>) and taking values in the set&nbsp;{0,&nbsp;1,&nbsp;2,&nbsp;...}.  This can be used to generate geometrically distributed pseudorandom numbers by first [[Exponential distribution#Generating exponential variates|generating exponentially distributed]] pseudorandom numbers from a uniform [[pseudorandom number generator]]: then <math>\lfloor \ln(U) / \ln(1-p)\rfloor</math> is geometrically distributed with parameter <math>p</math>, if <math>U</math> is uniformly distributed in [0,1].
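 
For illustration, a minimal Python sketch of this recipe (the function name is illustrative; <math>1-U</math> is used in place of <math>U</math> to avoid <math>\ln 0</math>, which leaves the distribution unchanged since both are uniform on the unit interval):
 
<syntaxhighlight lang="python">
import math
import random

def geometric_failures(p):
    """Failures before the first success, support {0, 1, 2, ...}; assumes 0 < p < 1."""
    u = random.random()                          # uniform in [0, 1)
    return math.floor(math.log(1.0 - u) / math.log(1.0 - p))

samples = [geometric_failures(0.3) for _ in range(100_000)]
print(sum(samples) / len(samples))               # near (1 - p)/p ≈ 2.33
</syntaxhighlight>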
 
== See also ==
 
* [[Hypergeometric distribution]]
* [[Coupon collector's problem]]
 
{{More footnotes|date=March 2011}}
 
==References==
{{reflist}}
 
==External links==
*{{planetmath reference|id=3456|title=Geometric distribution}}
*[http://mathworld.wolfram.com/GeometricDistribution.html Geometric distribution] on [[MathWorld]].
*[http://www.solvemymath.com/online_math_calculator/statistics/discrete_distributions/geometric/index.php Online geometric distribution calculator]
* [http://www.elektro-energetika.cz/calculations/distrgeo.php?language=english Online calculator of Geometric distribution]
 
{{ProbDistributions|discrete-infinite}}
{{Common univariate probability distributions}}
 
[[Category:Discrete distributions]]
[[Category:Exponential family distributions]]
[[Category:Infinitely divisible probability distributions]]
[[Category:Probability distributions]]
