In [[Bayesian probability]], the '''Jeffreys prior''', named after [[Harold Jeffreys]], is a [[non-informative prior|non-informative]] (objective) [[prior distribution]] on parameter space that is proportional to the [[square root]] of the [[determinant]] of the [[Fisher information]]:
 
: <math>p\left(\vec\theta\right) \propto \sqrt{\det \mathcal{I}\left(\vec\theta\right)}.\,</math>
 
It has the key feature that it is [[Parametrization#Parametrization invariance|invariant under reparameterization]] of the parameter vector <math>\vec\theta</math>. This makes it of special interest for use with [[Scale_parameter|''scale parameters'']].<ref>Jaynes, E. T. (1968) "Prior Probabilities", ''IEEE Trans. on Systems Science and Cybernetics'', '''SSC-4''', 227 [http://bayes.wustl.edu/etj/articles/prior.pdf pdf].</ref>
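
Since the Fisher information is the expectation of the squared [[score (statistics)|score]], the defining formula can be checked numerically by Monte Carlo. A minimal sketch (the exponential model with rate <math>\theta</math> is chosen here purely for illustration; its exact information is <math>I(\theta) = 1/\theta^2</math>, so the Jeffreys prior is proportional to <math>1/\theta</math>):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sqrt_fisher_mc(theta, n=200_000):
    """Monte Carlo estimate of sqrt(I(theta)) for an Exponential(theta)
    model: log f = log(theta) - theta*x, so the score is 1/theta - x."""
    x = rng.exponential(scale=1.0 / theta, size=n)
    return np.sqrt(np.mean((1.0 / theta - x) ** 2))

for theta in (0.5, 1.0, 2.0):
    # the estimates track the exact (unnormalized) Jeffreys density 1/theta
    print(theta, sqrt_fisher_mc(theta), 1.0 / theta)
</syntaxhighlight>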
 
== Reparameterization ==
=== One-parameter case ===
For an alternate parameterization <math>\varphi</math> we can derive
 
: <math>p(\varphi) \propto \sqrt{I(\varphi)}\,</math>
 
from
 
: <math>p(\theta) \propto \sqrt{I(\theta)}\,</math>
 
using the [[change of variables theorem]] and the definition of Fisher information:
 
: <math>
\begin{align}
p(\varphi) & = p(\theta) \left|\frac{d\theta}{d\varphi}\right|
\propto \sqrt{I(\theta) \left(\frac{d\theta}{d\varphi}\right)^2}
= \sqrt{\operatorname{E}\!\left[\left(\frac{d \ln L}{d\theta}\right)^2\right] \left(\frac{d\theta}{d\varphi}\right)^2} \\
& = \sqrt{\operatorname{E}\!\left[\left(\frac{d \ln L}{d\theta} \frac{d\theta}{d\varphi}\right)^2\right]}
= \sqrt{\operatorname{E}\!\left[\left(\frac{d \ln L}{d\varphi}\right)^2\right]}
= \sqrt{I(\varphi)}.
\end{align}
</math>
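
This invariance can be confirmed numerically. A small sketch, using a Poisson model (an arbitrary choice for illustration) with <math>\varphi = \sqrt\lambda</math>, so that <math>d\lambda/d\varphi = 2\varphi</math>: the density obtained directly as <math>\sqrt{I(\varphi)}</math> should agree with the transformed density <math>\sqrt{I(\lambda)}\,|d\lambda/d\varphi|</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

def sqrt_info_lambda(lam, n=400_000):
    # score in lambda for Poisson: d/dlambda log f = x/lambda - 1
    x = rng.poisson(lam, size=n)
    return np.sqrt(np.mean((x / lam - 1.0) ** 2))

def sqrt_info_phi(phi, n=400_000):
    # same model written in phi (lambda = phi^2); by the chain rule
    # d/dphi log f = (x/phi^2 - 1) * 2*phi
    x = rng.poisson(phi**2, size=n)
    return np.sqrt(np.mean(((x / phi**2 - 1.0) * 2.0 * phi) ** 2))

lam = 2.5
phi = np.sqrt(lam)
print(sqrt_info_phi(phi))                   # direct:      ~2
print(sqrt_info_lambda(lam) * 2.0 * phi)    # transformed: ~2
</syntaxhighlight>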
 
=== Multiple-parameter case ===
For an alternate parameterization <math>\vec\varphi</math> we can derive
 
: <math>p(\vec\varphi) \propto \sqrt{\det I(\vec\varphi)}\,</math>
 
from
 
: <math>p(\vec\theta) \propto \sqrt{\det I(\vec\theta)}\,</math>
 
using the [[change of variables theorem]], the definition of Fisher information, and the fact that the product of determinants equals the determinant of the corresponding matrix product:
 
: <math>
\begin{align}
p(\vec\varphi) & = p(\vec\theta) \left|\det\frac{\partial\theta_i}{\partial\varphi_j}\right| \\
& \propto \sqrt{\det I(\vec\theta)\, {\det}^2\frac{\partial\theta_i}{\partial\varphi_j}} \\
& = \sqrt{\det \frac{\partial\theta_k}{\partial\varphi_i}\, \det \operatorname{E}\!\left[\frac{\partial \ln L}{\partial\theta_k} \frac{\partial \ln L}{\partial\theta_l} \right]\, \det \frac{\partial\theta_l}{\partial\varphi_j}} \\
& = \sqrt{\det \operatorname{E}\!\left[\sum_{k,l} \frac{\partial\theta_k}{\partial\varphi_i} \frac{\partial \ln L}{\partial\theta_k} \frac{\partial \ln L}{\partial\theta_l} \frac{\partial\theta_l}{\partial\varphi_j} \right]} \\
& = \sqrt{\det \operatorname{E}\!\left[\frac{\partial \ln L}{\partial\varphi_i} \frac{\partial \ln L}{\partial\varphi_j}\right]}
= \sqrt{\det I(\vec\varphi)}.
\end{align}
</math>
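
The same check works in the multi-parameter setting by estimating the full matrix <math>\operatorname{E}\!\left[\frac{\partial \ln L}{\partial\theta_i} \frac{\partial \ln L}{\partial\theta_j}\right]</math> and taking the square root of its determinant. A sketch for a Gaussian in the parameters <math>(\mu, \sigma)</math>, whose exact information matrix is <math>\operatorname{diag}(1/\sigma^2,\, 2/\sigma^2)</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

def fisher_matrix(mu, sigma, n=500_000):
    """Monte Carlo estimate of the 2x2 Fisher information of a
    Gaussian in (mu, sigma), as E[score_i * score_j]."""
    x = rng.normal(mu, sigma, size=n)
    s_mu = (x - mu) / sigma**2                         # d/dmu log f
    s_sigma = ((x - mu) ** 2 - sigma**2) / sigma**3    # d/dsigma log f
    S = np.vstack([s_mu, s_sigma])                     # one row per parameter
    return S @ S.T / n

I = fisher_matrix(0.0, 2.0)
# exact value: sqrt(det diag(1/sigma^2, 2/sigma^2)) = sqrt(2)/sigma^2 ~ 0.354
print(np.sqrt(np.linalg.det(I)))
</syntaxhighlight>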
 
== Attributes ==
From a practical and mathematical standpoint, a valid reason to use this non-informative prior instead of others (such as those obtained through a limit in conjugate families of distributions) is that it does not depend on the set of parameter variables chosen to describe the parameter space.
 
Sometimes the Jeffreys prior cannot be [[Normalizing constant|normalized]], and is thus an [[improper prior]].  For example, the Jeffreys prior for the distribution mean is uniform over the entire real line in the case of a [[Gaussian distribution]] of known variance.
 
Use of the Jeffreys prior violates the strong version of the [[likelihood principle]], which is accepted by many, but by no means all, statisticians.  When using the Jeffreys prior, inferences about <math>\vec\theta</math> depend not just on the probability of the observed data as a function of <math>\vec\theta</math>, but also on the universe of all possible experimental outcomes, as determined by the experimental design, because the Fisher information is computed from an expectation over the chosen universe. Accordingly, the Jeffreys prior, and hence the inferences made using it, may be different for two experiments involving the same <math>\vec\theta</math> parameter even when the likelihood functions for the two experiments are the same—a violation of the strong likelihood principle.
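
A standard illustration of this point (sketched here with symbolic computation; the details are not spelled out in the text above) contrasts binomial sampling, where the number of trials <math>n</math> is fixed in advance, with negative binomial sampling, where trials continue until a fixed number <math>r</math> of successes. Both designs produce likelihoods proportional to <math>\gamma^{\text{successes}}(1-\gamma)^{\text{failures}}</math>, but the expectation defining the Fisher information runs over different outcome spaces, so the Jeffreys priors differ:

<syntaxhighlight lang="python">
import sympy as sp

g, n, r, k = sp.symbols('gamma n r k', positive=True)

def jeffreys(logL, count, mean):
    # I(gamma) = -E[d^2 logL / dgamma^2]; the second derivative is
    # linear in the count, so taking the expectation amounts to
    # substituting the count's mean
    info = sp.simplify(-sp.diff(logL, g, 2).subs(count, mean))
    return sp.sqrt(info)

# binomial: k successes in n fixed trials, E[k] = n*gamma
print(jeffreys(k * sp.log(g) + (n - k) * sp.log(1 - g), k, n * g))
# -> sqrt(n/(gamma*(1 - gamma))): the Beta(1/2, 1/2) shape

# negative binomial: k failures before the r-th success,
# E[k] = r*(1 - gamma)/gamma; same likelihood kernel, different prior
print(jeffreys(r * sp.log(g) + k * sp.log(1 - g), k, r * (1 - g) / g))
# -> sqrt(r/(gamma**2*(1 - gamma))): an improper Beta(0, 1/2) shape
</syntaxhighlight>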
 
== Minimum description length ==
 
In the [[minimum description length]] approach to statistics the goal is to describe data as compactly as possible, where the length of a description is measured in bits of the code used. For a parametric family of distributions one compares a code with the best code based on one of the distributions in the parameterized family. The main result is that in [[exponential family|exponential families]], asymptotically for large sample size, the code based on the distribution that is a mixture of the elements in the exponential family with the Jeffreys prior is optimal. This result holds if the parameter set is restricted to a compact subset in the interior of the full parameter space. If the full parameter space is used, a modified version of the result applies.
 
==Examples==
The Jeffreys prior for a parameter (or a set of parameters) depends upon the statistical model.
 
===Gaussian distribution with mean parameter===
For the [[Gaussian distribution]] of the real value <math>x</math>
: <math>f(x|\mu) = \frac{e^{-(x - \mu)^2 / 2\sigma^2}}{\sqrt{2 \pi \sigma^2}}</math>
with <math>\sigma</math> fixed, the Jeffreys prior for the mean <math>\mu</math> is
: <math>\begin{align} p(\mu) & \propto \sqrt{I(\mu)}
= \sqrt{\operatorname{E}\!\left[ \left( \frac{d}{d\mu} \log f(x|\mu) \right)^2\right]}
= \sqrt{\operatorname{E}\!\left[ \left( \frac{x - \mu}{\sigma^2} \right)^2 \right]} \\
& = \sqrt{\int_{-\infty}^{+\infty} f(x|\mu) \left(\frac{x-\mu}{\sigma^2}\right)^2 dx}
= \sqrt{\frac{\sigma^2}{\sigma^4}}
\propto 1.\end{align}</math>
That is, the Jeffreys prior for <math>\mu</math> does not depend upon <math>\mu</math>; it is the unnormalized uniform distribution on the real line — the distribution that is 1 (or some other fixed constant) for all points. This is an [[improper prior]], and is, up to the choice of constant, the unique ''translation''-invariant distribution on the reals (the [[Haar measure]] with respect to addition of reals), corresponding to the mean being a measure of ''location'' and translation-invariance corresponding to no information about location.
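
The constancy of <math>I(\mu)</math> is easy to confirm by simulation; a minimal sketch (the particular <math>\sigma</math> and the grid of means are arbitrary choices):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.7

# E[((x - mu)/sigma^2)^2] should equal 1/sigma^2 at every mu,
# so sqrt(I(mu)) is the same constant everywhere: a flat prior
for mu in (-5.0, 0.0, 5.0):
    x = rng.normal(mu, sigma, size=300_000)
    print(mu, np.mean(((x - mu) / sigma**2) ** 2), 1.0 / sigma**2)
</syntaxhighlight>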
 
===Gaussian distribution with standard deviation parameter===
For the [[Gaussian distribution]] of the real value <math>x</math>
: <math>f(x|\sigma) = \frac{e^{-(x - \mu)^2 / 2 \sigma^2}}{\sqrt{2 \pi \sigma^2}},</math>
the Jeffreys prior for the standard deviation σ&nbsp;>&nbsp;0 is
: <math>\begin{align}p(\sigma) & \propto \sqrt{I(\sigma)}
= \sqrt{\operatorname{E}\!\left[ \left( \frac{d}{d\sigma} \log f(x|\sigma) \right)^2\right]}
= \sqrt{\operatorname{E}\!\left[ \left( \frac{(x - \mu)^2-\sigma^2}{\sigma^3} \right)^2 \right]} \\
& = \sqrt{\int_{-\infty}^{+\infty} f(x|\sigma)\left(\frac{(x-\mu)^2-\sigma^2}{\sigma^3}\right)^2 dx}
= \sqrt{\frac{2}{\sigma^2}}
\propto \frac{1}{\sigma}.
\end{align}</math>
Equivalently, the Jeffreys prior for log&nbsp;σ<sup>2</sup> (or log&nbsp;σ) is the unnormalized uniform distribution on the real line, and thus this distribution is also known as the '''{{visible anchor|logarithmic prior}}'''. It is the unique (up to a multiple) prior (on the positive reals) that is ''scale''-invariant (the [[Haar measure]] with respect to multiplication of positive reals), corresponding to the standard deviation being a measure of ''scale'' and scale-invariance corresponding to no information about scale. As with the uniform distribution on the reals, it is an [[improper prior]].
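
The scale invariance amounts to the identity <math>\int_a^b \sigma^{-1}\,d\sigma = \log(b/a) = \int_{ca}^{cb} \sigma^{-1}\,d\sigma</math> for every <math>c > 0</math>: the mass the prior assigns to an interval is unchanged when the interval is rescaled. A short numerical sketch (the endpoints and scale factor are arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

def mass(a, b):
    # mass the density 1/sigma assigns to the interval [a, b]
    val, _ = quad(lambda s: 1.0 / s, a, b)
    return val

a, b, c = 0.5, 3.0, 7.0
print(mass(a, b), mass(c * a, c * b), np.log(b / a))  # all equal
</syntaxhighlight>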
 
===Poisson distribution with rate parameter===
For the [[Poisson distribution]] of the non-negative integer <math>n</math>,
: <math>f(n | \lambda) = e^{-\lambda}\frac{\lambda^n}{n!},</math>
the Jeffreys prior for the rate parameter λ&nbsp;≥&nbsp;0 is
: <math>\begin{align}p(\lambda) &\propto \sqrt{I(\lambda)}
= \sqrt{\operatorname{E}\!\left[ \left( \frac{d}{d\lambda} \log f(n|\lambda) \right)^2\right]}
= \sqrt{\operatorname{E}\!\left[ \left( \frac{n-\lambda}{\lambda} \right)^2\right]} \\
& = \sqrt{\sum_{n=0}^{+\infty} f(n|\lambda) \left( \frac{n-\lambda}{\lambda} \right)^2}
= \sqrt{\frac{1}{\lambda}}.\end{align}</math>
Equivalently, the Jeffreys prior for <math>\sqrt{\lambda}</math> is the unnormalized uniform distribution on the non-negative real line.
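
One way to see this equivalence is to draw <math>\sqrt\lambda</math> uniformly and inspect the implied density of <math>\lambda</math>; a rough sketch (the cutoff <math>L</math> is an arbitrary truncation, needed because the prior is improper):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

# draw phi = sqrt(lambda) uniformly on (0, L] and square it; the
# histogram of lambda should then follow the density 1/(2*L*sqrt(lambda))
L = 4.0
lam = rng.uniform(0.0, L, size=1_000_000) ** 2
hist, edges = np.histogram(lam, bins=40, range=(0.0, L**2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# roughly constant at 1/(2*L) = 0.125 (the first bin overshoots a
# little because the density blows up at lambda = 0)
print(np.round(hist * np.sqrt(centers), 3))
</syntaxhighlight>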
 
===Bernoulli trial===
For a coin that is "heads" with probability γ&nbsp;∈&nbsp;[0,1] and is "tails" with probability 1&nbsp;−&nbsp;γ, for a given (H,T)&nbsp;∈&nbsp;{(0,1),&nbsp;(1,0)} the probability is <math>\gamma^H (1-\gamma)^T</math>. The Jeffreys prior for the parameter <math>\gamma</math> is
 
: <math>\begin{align}p(\gamma) & \propto \sqrt{I(\gamma)}
= \sqrt{\operatorname{E}\!\left[ \left( \frac{d}{d\gamma} \log f(x|\gamma) \right)^2\right]}
= \sqrt{\operatorname{E}\!\left[ \left( \frac{H}{\gamma} - \frac{T}{1-\gamma}\right)^2 \right]} \\
& = \sqrt{\gamma \left( \frac{1}{\gamma} - \frac{0}{1-\gamma}\right)^2 + (1-\gamma)\left( \frac{0}{\gamma} - \frac{1}{1-\gamma}\right)^2}
= \frac{1}{\sqrt{\gamma(1-\gamma)}}\,.\end{align}</math>
 
This is the [[arcsine distribution]] and is a [[beta distribution]] with <math>\alpha = \beta = 1/2</math>. Furthermore, if <math>\gamma = \sin^2(\theta)</math> the Jeffreys prior for <math>\theta</math> is uniform in the interval <math>[0, \pi / 2]</math>. Equivalently, <math>\theta</math> is uniform on the whole circle <math>[0, 2 \pi]</math>.
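
This change of variables is easy to check by simulation; a small sketch comparing empirical quantiles of <math>\sin^2(\theta)</math>, with <math>\theta</math> uniform on <math>[0, \pi/2]</math>, against the Beta(1/2, 1/2) quantiles:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# theta uniform on [0, pi/2]; gamma = sin(theta)^2 should then be
# arcsine-distributed, i.e. Beta(1/2, 1/2)
theta = rng.uniform(0.0, np.pi / 2, size=200_000)
gamma = np.sin(theta) ** 2

qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(gamma, qs))           # empirical quantiles
print(stats.beta(0.5, 0.5).ppf(qs))     # exact arcsine quantiles
</syntaxhighlight>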
 
===''N''-sided die with biased probabilities===
Similarly, for a throw of an <math>N</math>-sided die with outcome probabilities <math>\vec{\gamma} = (\gamma_1, \ldots, \gamma_N)</math>, each non-negative and satisfying <math>\sum_{i=1}^N \gamma_i = 1</math>, the Jeffreys prior for <math>\vec{\gamma}</math> is the [[Dirichlet distribution]] with all (alpha) parameters set to <math>1/2</math>. In particular, if we write <math>\gamma_i = {\phi_i}^2</math> for each <math>i</math>, then the Jeffreys prior for <math>\vec{\phi}</math> is uniform on the (''N''&ndash;1)-dimensional [[unit sphere]] (''i.e.'', it is uniform on the surface of an ''N''-dimensional [[unit sphere|unit ball]]).
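
The correspondence with the uniform distribution on the sphere can also be checked by sampling; a sketch with <math>N = 3</math> (an arbitrary choice) that maps Dirichlet(1/2, …, 1/2) draws to the sphere and compares them with normalized absolute values of Gaussian vectors, which are uniform on the positive orthant by the rotational symmetry of the Gaussian:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
N, m = 3, 200_000

# Dirichlet(1/2, ..., 1/2) draws, mapped to phi_i = sqrt(gamma_i),
# should lie uniformly on the positive orthant of the unit sphere
gamma = rng.dirichlet([0.5] * N, size=m)
phi = np.sqrt(gamma)
print(np.allclose(np.linalg.norm(phi, axis=1), 1.0))   # on the sphere

# reference sample: normalized |Gaussian| vectors are uniform there
z = np.abs(rng.standard_normal((m, N)))
z /= np.linalg.norm(z, axis=1, keepdims=True)
print(np.quantile(phi[:, 0], [0.25, 0.5, 0.75]))       # matching quantiles
print(np.quantile(z[:, 0], [0.25, 0.5, 0.75]))
</syntaxhighlight>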
 
==References==
*{{cite journal
| last= Jeffreys | first=H. | authorlink=Harold Jeffreys
| year = 1946
| title = An Invariant Form for the Prior Probability in Estimation Problems
| journal = Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences
| volume = 186
| issue = 1007
| pages = 453–461
| jstor = 97883
| doi = 10.1098/rspa.1946.0056
}}
 
*{{cite book
| last= Jeffreys | first=H. | authorlink=Harold Jeffreys
| year = 1939
| title = Theory of Probability
| publisher = Oxford University Press
}}
 
== Footnotes ==
 
<references/>
 
[[Category:Bayesian statistics]]
