Nested sampling algorithm

{{Context|date=October 2009}}
 
The '''nested sampling algorithm''' is a [[computation]]al approach to the problem of comparing models in [[Bayesian statistics]], developed in 2004 by [[physicist]] John Skilling.<ref name="skilling1">{{cite journal | last = Skilling | first = John | title = Nested Sampling | journal = AIP Conference Proceedings | pages = 395–405 | year = 2004 | volume = 735 | doi = 10.1063/1.1835238}}</ref>
 
==Background==
 
[[Bayes' theorem]] can be applied to a pair of competing models <math>M1</math> and <math>M2</math> for data <math>D</math>, one of which may be true (though which one is not known) but which cannot both be true simultaneously, as follows:
 
: <math>
\begin{align}
P(M1|D) & {} = \frac{P(D|M1) P(M1)}{P(D)} \\
  & {} = \frac{P(D|M1) P(M1)}{P(D|M1) P(M1) + P(D|M2) P(M2)}  \\
  & {} = \frac{1}{1 + \frac{P(D|M2)}{P(D|M1)} \frac{P(M2)}{P(M1)} }
\end{align}
</math>
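Plugging concrete numbers into the expression above makes it easy to check. A minimal sketch, with illustrative evidence values that are assumptions for this example, not from the article:

```python
# Suppose the two evidences (marginal likelihoods) have been computed.
# These numbers are purely illustrative.
Z1 = 0.02   # P(D|M1), assumed value
Z2 = 0.005  # P(D|M2), assumed value
P_M1 = 0.5  # equal model priors, as discussed below
P_M2 = 0.5

# Last line of the derivation above:
posterior_M1 = 1.0 / (1.0 + (Z2 / Z1) * (P_M2 / P_M1))
# 1 / (1 + 0.25) = 0.8, so M1 is favoured 4:1 over M2
```

With equal priors the model comparison reduces entirely to the evidence ratio <math>P(D|M2)/P(D|M1)</math>, which is the quantity nested sampling is designed to supply.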
 
Given no a priori information in favor of <math>M1</math> or <math>M2</math>, it is reasonable to assign prior probabilities
<math>P(M1)=P(M2)=1/2</math>, so that <math>P(M2)/P(M1)=1</math>. The remaining ratio <math>P(D|M2)/P(D|M1)</math>
is not so easy to evaluate since in general it requires marginalization of
nuisance parameters. Generally, <math>M1</math> has a collection of parameters that can be
lumped together and called <math>\theta</math>, and <math>M2</math> has its own vector of parameters
that may be of different dimensionality but is still referred to as <math>\theta</math>.
The marginalization for <math>M1</math> is
 
: <math>P(D|M1) = \int d \theta P(D|\theta,M1) P(\theta|M1)</math>
 
and likewise for <math>M2</math>. This integral is often analytically intractable, and in these cases it is necessary to employ a numerical algorithm to find an approximation. The nested sampling algorithm was developed by John Skilling specifically to approximate these marginalization integrals, and it has the added benefit of generating samples from the posterior distribution <math>P(\theta|D,M1)</math>.<ref name = "skilling2">{{cite journal | last = Skilling | first = John | title = Nested Sampling for General Bayesian Computation | journal = Bayesian Analysis | volume = 1 | issue = 4 | pages = 833–860 | year = 2006 | doi = 10.1214/06-BA127}}</ref> It is an alternative to methods from the Bayesian literature<ref name="chen">{{cite book | author = Chen, Ming-Hui, Shao, Qi-Man, and Ibrahim, Joseph George | title = Monte Carlo methods in Bayesian computation | publisher = Springer | year = 2000 | isbn = 978-0-387-98935-8 | url = http://books.google.com/?id=R3GeFfshc7wC}}</ref>  such as bridge sampling and defensive importance sampling.
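The simplest numerical alternative is naive Monte Carlo: draw <math>\theta</math> from the prior and average the likelihood, since <math>P(D|M) = \mathrm{E}_{\theta \sim P(\theta|M)}[P(D|\theta,M)]</math>. A toy sketch, with an assumed uniform prior on [0, 1] and an assumed Gaussian likelihood (both illustrative choices); this estimator degrades badly when the likelihood is sharply peaked, which is the regime nested sampling targets:

```python
import math
import random

rng = random.Random(1)
sigma = 0.1

def likelihood(theta):
    # Gaussian likelihood centred at 0.5 (illustrative choice)
    return math.exp(-0.5 * ((theta - 0.5) / sigma) ** 2)

# Average the likelihood over prior draws: P(D|M) ~ mean of L(theta)
n = 100_000
Z_hat = sum(likelihood(rng.random()) for _ in range(n)) / n
# The exact evidence here is sigma * sqrt(2 * pi) = 0.2507 to good
# accuracy (the Gaussian mass lies essentially inside [0, 1]).
```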
 
Here is a simple version of the nested sampling algorithm, followed by a description of how it computes the marginal probability <math>Z=P(D|M)</math>, where
<math>M</math> is <math>M1</math> or <math>M2</math>:
 
  Start with <math>N</math> points <math>\theta_1,\ldots,\theta_N</math> sampled from the prior.
  <math>Z := 0</math>;  <math>X_0 := 1</math>;
  for <math>i=1</math> to <math>j</math> do        % The number of iterations j is chosen by guesswork.
      <math>L_i := \min(</math>current likelihood values of the points<math>)</math>;
      <math>X_i := \exp(-i/N)</math>;
      <math>w_i := X_{i-1} - X_i</math>;
      <math>Z := Z + L_i \cdot w_i</math>;
      Save the point with least likelihood as a sample point with weight <math>w_i</math>.
      Update the point with least likelihood with some Markov chain
      Monte Carlo steps according to the prior, accepting only steps that
      keep the likelihood above <math>L_i</math>.
  end
  return <math>Z</math>;
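The loop above can be sketched in a few lines of Python. This is a toy run on an assumed uniform prior on [0, 1] with an assumed Gaussian likelihood, so the estimate can be checked against the exact evidence; a simple rejection step stands in for the constrained MCMC update, which only works because the problem is one-dimensional:

```python
import math
import random

def likelihood(theta, sigma=0.1):
    # Gaussian likelihood centred at 0.5 (illustrative choice)
    return math.exp(-0.5 * ((theta - 0.5) / sigma) ** 2)

def nested_sampling(n_points=100, n_iter=500, seed=0):
    rng = random.Random(seed)
    points = [rng.random() for _ in range(n_points)]  # prior draws
    Z = 0.0
    X_prev = 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_points), key=lambda k: likelihood(points[k]))
        L_i = likelihood(points[worst])          # least live likelihood
        X_i = math.exp(-i / n_points)            # remaining prior mass
        w_i = X_prev - X_i                       # shell of prior mass
        Z += L_i * w_i
        X_prev = X_i
        # Replace the worst point by a fresh prior draw constrained to
        # L(theta) > L_i (rejection sampling in place of the MCMC step).
        while True:
            candidate = rng.random()
            if likelihood(candidate) > L_i:
                points[worst] = candidate
                break
    # Credit the unexplored prior mass with the mean live likelihood.
    Z += X_prev * sum(likelihood(p) for p in points) / n_points
    return Z

Z_est = nested_sampling()
# Exact evidence for this toy problem: 0.1 * sqrt(2 * pi), about 0.2507
```

The estimate carries a statistical error of order <math>\sqrt{H/N}</math> in <math>\ln Z</math>, so with 100 live points it should land within a few percent of 0.2507 here.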
 
At each iteration, <math>X_i</math> is an estimate of the amount of prior mass covered by
the hypervolume in parameter space of all points with likelihood greater than
<math>L_i</math>. The weight factor
<math>w_i</math> is
an estimate of the amount of prior mass that lies between two nested
hypersurfaces <math>\{ \theta | P(D|\theta,M) = P(D|\theta_{i-1},M) \}</math>
and <math>\{ \theta | P(D|\theta,M) = P(D|\theta_i,M) \}</math>. The update step
<math>Z := Z + L_i \cdot w_i</math>
computes the sum over <math>i</math> of <math>L_i w_i</math> to numerically approximate the integral
 
: <math>
\begin{align}
  P(D|M) &= \int P(D|\theta,M) \, P(\theta|M) \, d\theta \\
         &= \int P(D|\theta,M) \, dP(\theta|M)
\end{align}
</math>
 
The idea is to chop up the range of <math>f(\theta) = P(D|\theta,M)</math> and estimate, for each interval <math>[f(\theta_{i-1}), f(\theta_i)]</math>, how likely it is a priori that a randomly chosen <math>\theta</math> would map to this interval. This can be thought of as a Bayesian's way to numerically implement [[Lebesgue integration]].
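The interval-chopping idea can be made concrete on the same kind of toy problem (an assumed uniform prior on [0, 1] and an assumed Gaussian likelihood): bin the range of <math>f</math>, estimate the prior probability of each bin from samples, and sum bin value times bin probability:

```python
import math
import random

rng = random.Random(2)
sigma = 0.1

def f(theta):
    # f(theta) = P(D|theta, M), Gaussian for illustration
    return math.exp(-0.5 * ((theta - 0.5) / sigma) ** 2)

# Estimate, for each interval of f's range, the prior probability that
# a random theta maps into it; then take a Lebesgue-style sum.
values = [f(rng.random()) for _ in range(50_000)]
edges = [k / 50 for k in range(51)]   # f takes values in (0, 1]
Z = 0.0
for lo, hi in zip(edges, edges[1:]):
    p = sum(lo < v <= hi for v in values) / len(values)
    Z += 0.5 * (lo + hi) * p          # bin midpoint times bin probability
# Comparable to the exact evidence 0.1 * sqrt(2 * pi), about 0.2507,
# up to binning and sampling error.
```

Nested sampling performs essentially this sum, but discovers the level sets <math>L_i</math> adaptively instead of fixing the bins in advance.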
 
==Implementations==
 
* Simple example code written in [[C (programming language)|C]], [[R (programming language)|R]], or [[Python (programming language)|Python]] demonstrating this algorithm can be downloaded from [http://www.inference.phy.cam.ac.uk/bayesys/ John Skilling's website]
* A Haskell port of these simple example codes is available on [http://hackage.haskell.org/package/NestedSampling Hackage]
* An implementation in [[R (programming language)|R]] originally designed for fitting of spectra is described at [http://www.mrao.cam.ac.uk/~bn204/galevol/speca/rnested.html] and can be obtained on GitHub at [https://github.com/bnikolic/RNested]
* A highly modular [[Python (programming language)|Python]] parallel implementation of Nested Sampling for [[statistical physics]] and [[condensed matter physics]] applications is publicly available from GitHub [https://github.com/js850/nested_sampling].
 
==Applications==
Since nested sampling was proposed in 2004, it has been used in multiple settings within the field of [[astronomy]]. One paper suggested using nested sampling for [[cosmology|cosmological]] [[model selection]] and object detection, as it "uniquely combines accuracy, general applicability and computational feasibility."<ref name="mukherjee">{{cite journal | author = Mukherjee, P., Parkinson, D., and Liddle, A.R. | title = A Nested Sampling Algorithm for Cosmological Model Selection | journal = Astrophysical Journal | volume = 638 | issue = 2 | pages = 51–54 | year = 2006 | bibcode = 2005astro.ph..8461M | doi = 10.1086/501068|arxiv = astro-ph/0508461 }}</ref> A refinement of the nested sampling algorithm to handle multimodal posteriors has also been suggested as a means of detecting astronomical objects in existing datasets.<ref name="feroz">{{cite journal | author = Feroz, F., Hobson, M.P. | title = Multimodal nested sampling: an efficient and robust alternative to Markov Chain Monte Carlo methods for astronomical data analyses | journal = MNRAS | volume = 384 | issue = 2 | pages = 449–463 | year = 2008 | url = http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:0704.3704 | doi = 10.1111/j.1365-2966.2007.12353.x | bibcode=2008MNRAS.384..449F|arxiv = 0704.3704 }}</ref>
 
==See also==
*[[Bayesian model comparison]]
 
==References==
 
{{reflist}}
 
[[Category:Bayesian statistics]]
[[Category:Model selection]]
[[Category:Statistical algorithms]]

Latest revision as of 23:38, 20 August 2014
