In [[probability theory]], a '''Dirichlet process''' is a [[stochastic process|random process]], that is, a [[probability distribution]] whose domain is itself a set of probability distributions. Because of this hierarchical structure of a distribution over distributions, we will use the terminology "draw from" a distribution below, as distinct from "draw of" a distribution. In the former terminology, the distribution mentioned governs the relative chances of different outcomes; in the latter, the distribution mentioned ''is'' the random outcome.

Given a Dirichlet process <math>\mathrm{DP}\left(H, \alpha\right)</math>, where <math>H</math> (the ''base distribution'' or ''base measure'') is an arbitrary distribution and <math>\alpha</math> (the ''[[concentration parameter]]'') is a positive real number, a draw from <math>\mathrm{DP}</math> will return a random distribution (the ''output distribution'') over some of the values that can be drawn from <math>H</math>. That is, the [[support (mathematics)|support]] of each draw of the output distribution is always a subset of the [[support (mathematics)|support]] of the base distribution. The output distribution will be [[discrete distribution|discrete]], meaning that individual values drawn from the output distribution will sometimes repeat themselves even if the base distribution is continuous (i.e., even if two different draws from the base distribution are distinct with probability one). The extent to which values repeat is determined by <math>\alpha</math>, with higher values causing less repetition. If the base distribution is continuous, so that separate draws from it always return distinct values, then the infinite set of probabilities corresponding to the frequency of each possible value that the output distribution can return is distributed according to a '''stick-breaking process'''.

Note that the Dirichlet process is a [[stochastic process]], meaning that technically speaking it is an infinite sequence of random variables, rather than a single random distribution. The relation between the two is as follows. Consider the Dirichlet process as defined above, as a distribution over random distributions, and call this process <math>\operatorname{DP}_1()</math>. We can call this the ''distribution-centered'' view of the Dirichlet process. First, draw a random output distribution from this process, and then consider an infinite sequence of random variables representing values drawn from this distribution. Note that, conditioned on the output distribution, the variables are [[independent identically distributed]]. Now, consider instead the distribution of the random variables that results from [[marginalizing out]] (integrating over) the random output distribution. (This makes all the variables dependent on each other. However, they are still [[exchangeable]], meaning that the marginal distribution of one variable is the same as that of all other variables. That is, they are "identically distributed" but not "independent".) The resulting infinite sequence of random variables with the given marginal distributions is another view onto the Dirichlet process, denoted here <math>\operatorname{DP}_2()</math>. We can call this the ''process-centered'' view of the Dirichlet process. The [[conditional distribution]] of one variable given all the others, or given all previous variables, is defined by the '''[[Chinese restaurant process]]''' (see below).

Another way to think of a Dirichlet process is as an infinite-dimensional generalization of the [[Dirichlet distribution]]. A sample from a Dirichlet distribution is a finite-dimensional set of probabilities (for some size <math>K</math>, specified by the parameters of the distribution), all of which sum to 1. This can be thought of as a finite-dimensional [[discrete distribution]]; i.e. a Dirichlet distribution can be thought of as a distribution over <math>K</math>-dimensional discrete distributions. Imagine generalizing a symmetric Dirichlet distribution, defined by a dimension <math>K</math> and concentration parameter <math>\alpha/K</math>, to an infinite set of probabilities; the resulting distribution over infinite-dimensional discrete distributions is called the ''stick-breaking process'' (see below). Imagine then using this set of probabilities to create an infinite-dimensional [[mixture model]], with each separate probability from the set associated with a mixture component, and the value of each component drawn separately from a base distribution <math>H</math>; then draw an infinite number of samples from this mixture model. The infinite set of random variables corresponding to the marginal distribution of these samples is a Dirichlet process with parameters <math>H</math> and <math>\alpha</math>.

The Dirichlet process was formally introduced by Thomas Ferguson in 1973.<ref>{{cite journal
 | last = Ferguson
 | first = Thomas
 | authorlink = Thomas Ferguson
 | title = Bayesian analysis of some nonparametric problems
 | journal = [[Annals of Statistics]]
 | volume = 1
 | issue = 2
 | pages = 209–230
 | year = 1973
 | doi = 10.1214/aos/1176342360
 | mr = 350949
}}</ref>

==Introduction==

Consider a simple [[mixture model]]:

: <math>
\begin{array}{lcl}
\theta_{1,\dots,K} &\sim& H() \\
\boldsymbol\beta &\sim& \operatorname{Dirichlet}(\alpha/K,\dots,\alpha/K) \\
z_{1,\dots,N} &\sim& \operatorname{Categorical}(\boldsymbol\beta) \\
\phi_i &=& \theta_{z_i} \\
x_{i=1,\dots,N} &\sim& F(\phi_i)
\end{array}
</math>

This is a basic [[generative model]] where the observations <math>x_1,\dots,x_N</math> are distributed according to a mixture of <math>K</math> components, where each component is distributed according to a single [[parametric family]] <math>F(\theta)</math> but where different components have different values of <math>\theta</math>, which is drawn in turn from a distribution <math>H</math>. Typically, <math>H</math> will be the [[conjugate prior]] distribution of <math>F</math>. In addition, the [[prior probability]] of each component is specified by <math>\boldsymbol\beta</math>, which is a size-<math>K</math> vector of probabilities, all of which add up to 1. The <math>\phi_i</math> are not drawn as separate random variables; their purpose is simply to record the value of <math>\theta</math> that is associated with observation <math>i</math>. As <math>K\to\infty</math>, the distribution of the vector <math>\phi_{1,\dots,N}</math> (where <math>\theta,\beta,z</math> have been marginalized out) becomes the Dirichlet process.

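The finite model above can be sampled directly. Below is a minimal sketch in Python; the concrete choices of <math>H</math> (a normal prior over component means) and <math>F(\theta)</math> (a unit-variance Gaussian centred at <math>\theta</math>) are illustrative assumptions, not part of the model itself.

<syntaxhighlight lang="python">
import numpy as np

def sample_finite_mixture(K, alpha, N, rng=None):
    """Draw N observations from the finite mixture model above.

    Illustrative choices: H = Normal(0, 10^2) over component means,
    F(theta) = Normal(theta, 1) as the observation distribution."""
    rng = rng or np.random.default_rng(0)
    theta = rng.normal(0.0, 10.0, size=K)          # theta_k ~ H
    beta = rng.dirichlet(np.full(K, alpha / K))    # beta ~ Dirichlet(alpha/K, ..., alpha/K)
    z = rng.choice(K, size=N, p=beta)              # z_i ~ Categorical(beta)
    phi = theta[z]                                 # phi_i = theta_{z_i}
    x = rng.normal(phi, 1.0)                       # x_i ~ F(phi_i)
    return x, z

x, z = sample_finite_mixture(K=5, alpha=1.0, N=100)
</syntaxhighlight>
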
For example, if the observations are apartment prices and the <math>K</math> components represent different neighborhoods, then <math>F</math> might be a [[Gaussian distribution]] with unknown [[mean]] and unknown [[variance]], with the mean and variance specifying the distribution of prices in that neighborhood. Then the parameter <math>\theta</math> will be a vector of two values, a mean drawn from a [[Gaussian distribution]] and a variance drawn from an [[inverse gamma distribution]], which are the conjugate priors of the mean and variance, respectively, of a Gaussian distribution.

Meanwhile, if the observations are words and the <math>K</math> components represent different topics, then <math>F</math> might be a [[categorical distribution]] over a vocabulary of size <math>V</math>, with unknown frequencies of each word in the vocabulary, specifying the distribution of words in each particular topic. Then the parameter <math>\theta</math> will be a vector of <math>V</math> values, each representing a probability and all summing to one, drawn from a [[Dirichlet distribution]], which is the conjugate prior of the categorical distribution.

Now consider the limit as <math>K\to\infty</math>. Conceptually this means that we have no idea how many components are present. The result will be as follows:

: <math>
\begin{array}{lcl}
\theta_{1,\dots,\infty} &\sim& H() \\
\boldsymbol\beta &\sim& \operatorname{Stick}(1,\alpha) \\
z_{1,\dots,N} &\sim& \operatorname{Categorical}(\boldsymbol\beta) \\
x_{i=1,\dots,N} &\sim& F(\theta_{z_i})
\end{array}
</math>

In this model, conceptually speaking there are an infinite number of components, each with a separate parameter value, and a correspondingly infinite number of prior probabilities for each component, drawn from a stick-breaking process (see section below). Note that a practical application of such a model would not actually store an infinite number of components. Instead, it would generate the component prior probabilities one at a time from the stick-breaking process, which by construction tends to return the largest probability values first. As each component probability is drawn, a corresponding parameter value is also drawn. At any one time, some of the prior probability mass will be assigned to components and some unassigned. To generate a new observation, a random number between 0 and 1 is drawn uniformly, and if it lands in the unassigned mass, new components are drawn as necessary (each one reducing the amount of unassigned mass) until enough mass has been allocated to place this number in an existing component. Each time a new component probability is generated by the stick-breaking process, a corresponding parameter value is drawn from <math>H</math>.

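A minimal sketch of this on-demand generation in Python follows; the standard normal base distribution <math>H</math> is an illustrative assumption.

<syntaxhighlight lang="python">
import numpy as np

def dp_lazy_sampler(alpha, base_draw, n, rng=None):
    """Draw n values from one realization of DP(H, alpha), generating
    stick-breaking weights and component parameters only as needed."""
    rng = rng or np.random.default_rng(0)
    weights, params = [], []   # component probabilities and their parameter values
    remaining = 1.0            # probability mass not yet assigned to any component
    draws = []
    for _ in range(n):
        u = rng.uniform()      # locates a component on the unit interval
        while u > sum(weights):                       # u lands in the unassigned mass:
            piece = rng.beta(1.0, alpha) * remaining  # break off a new component probability
            remaining -= piece
            weights.append(piece)
            params.append(base_draw(rng))             # each new component gets a draw from H
        k = int(np.searchsorted(np.cumsum(weights), u))
        draws.append(params[k])
    return draws

# Example with H = standard normal (illustrative choice).
samples = dp_lazy_sampler(alpha=2.0, base_draw=lambda rng: rng.normal(), n=20)
</syntaxhighlight>
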
Sometimes the stick-breaking process is denoted <math>\operatorname{GEM}()</math>, after the initials of its authors Griffiths, Engen and McCloskey, instead of <math>\operatorname{Stick}()</math>.

Another view of this model comes from looking back at the finite-dimensional mixture model with mixing probabilities <math>\boldsymbol\beta</math> drawn from a Dirichlet distribution and considering the distribution of a particular component assignment <math>z_i</math> conditioned on all previous components, with the mixing probabilities [[marginal distribution|integrated out]]. This distribution is a [[Dirichlet-multinomial distribution]]. Note that, conditioned on a particular value of <math>\boldsymbol\beta</math>, each <math>z_i</math> is independent of the others, but marginalizing over <math>\boldsymbol\beta</math> introduces dependencies among the component assignments. It can be shown (see the [[Dirichlet-multinomial distribution]] article) that

:<math>p(z_i = k|\mathbf{z}_{1,\dots,i-1},\alpha,K) = \frac{n_k + \alpha/K}{i - 1 + \alpha}</math>

where <math>k = 1, \dots, K</math> is a particular value of <math>z_i</math> and <math>n_k</math> is the number of times a component assignment in the set <math>\{z_1,\dots,z_{i-1}\}</math> has the value <math>k</math>; i.e., the probability of assigning an observation to a particular component is roughly proportional to the number of previous observations already assigned to that component.

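As a small worked illustration of this formula (with made-up numbers), take <math>K = 4</math>, <math>\alpha = 2</math> and previous assignments <math>(z_1,\dots,z_5) = (1, 1, 2, 1, 3)</math>. Then <math>p(z_6 = 1) = (3 + 2/4)/(5 + 2) = 1/2</math>, whereas the so far unused component 4 has probability <math>p(z_6 = 4) = (0 + 2/4)/(5 + 2) = 1/14</math>.
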
Now consider the limit as <math>K\to\infty</math>. For a particular previously observed component <math>k</math>,

:<math>p(z_i = k|\mathbf{z}_{1,\dots,i-1},\alpha) = \lim_{K\to\infty} \frac{n_k + \alpha/K}{i - 1 + \alpha} = \frac{n_k}{i - 1 + \alpha}</math>

That is, the probability of seeing a previously observed component is directly proportional to the number of times the component has already been seen. This is often expressed by saying that the '''rich get richer'''.

For an unseen component <math>k</math>, <math>n_k = 0</math>, and as <math>K\to\infty</math> the probability of seeing this component goes to 0. However, the number of unseen components approaches infinity. Consider instead the set of all unseen components <math>\mathbf{Q}</math>. Note that, if there are <math>L</math> components seen so far, the number of unseen components is <math>|\mathbf{Q}| = K - L</math>. Then, consider the probability of seeing any of these components:

:<math>
\begin{align}
p(z_i \in \mathbf{Q}|\mathbf{z}_{1,\dots,i-1},\alpha)
& = \lim_{K\to\infty} \sum_{u\in\mathbf{Q}} \frac{\alpha/K}{i - 1 + \alpha} \\
&= \frac{\alpha}{i-1+\alpha} \lim_{K\to\infty} \frac{K-L}{K} \\
&= \frac{\alpha}{i-1+\alpha}
\end{align}
</math>

In other words:

#The probability of seeing an already-seen component is proportional to the number of times that component has been seen.
#The probability of seeing any unseen component is proportional to the concentration parameter <math>\alpha</math>.

This process is termed a ''[[Chinese restaurant process]]'' (CRP). In terms of the CRP, the infinite-dimensional model can equivalently be written:

: <math>
\begin{array}{lcl}
\theta_{1,\dots,\infty} &\sim& H() \\
z_{1,\dots,N} &\sim& \operatorname{CRP}(\alpha) \\
x_{i=1,\dots,N} &\sim& F(\theta_{z_i})
\end{array}
</math>

Note that we have [[marginalized out]] the mixing probabilities <math>\boldsymbol\beta</math>, and thereby produced a more compact representation of the model.

Now imagine further that we also marginalize out the component assignments <math>z_{1,\dots,N}</math>, and instead we look directly at the distribution of <math>\phi_i = \theta_{z_i}</math>. Then, we can write the model directly in terms of the Dirichlet process:

: <math>
\begin{array}{lcl}
G &\sim& \operatorname{DP}_1(H,\alpha) \\
\phi_{1,\dots,N} &\sim& G \\
x_{i=1,\dots,N} &\sim& F(\phi_i)
\end{array}
</math>

<math>\operatorname{DP}_1</math> represents one view (the ''distribution-centered view'') of the Dirichlet process as producing a random, infinite-dimensional discrete distribution <math>G</math> with values drawn from <math>H</math>.

An alternative view of the Dirichlet process (the ''process-centered view''), adhering more closely to its definition as a [[stochastic process]], sees it as directly producing an infinite stream of <math>\boldsymbol\phi</math> values. Notating this view as <math>\operatorname{DP}_2</math>, we can write the model as

: <math>
\begin{array}{lcl}
\phi_{1,\dots} &\sim& \operatorname{DP}_2(H,\alpha) \\
x_{i=1,\dots,N} &\sim& F(\phi_i)
\end{array}
</math>

In this view, although the Dirichlet process generates an infinite stream of parameter values, we only care about the first ''N'' values. Note that some of these values will be the same as previously seen values, in a "rich get richer" scheme, as determined by the Chinese restaurant process.

==Formal definition==

A Dirichlet process over a set ''S'' is a [[stochastic process]] whose [[sample path]] (i.e. an infinite-dimensional set of random variates drawn from the process) is a probability distribution on ''S''. The finite-dimensional distributions are from the [[Dirichlet distribution]]: if ''H'' is a finite [[Measure (mathematics)|measure]] on ''S'', <math>\alpha</math> is a positive real number and ''X'' is a sample path drawn from a Dirichlet process, written as

:<math>X \sim \mathrm{DP}\left(\alpha, H\right)</math>

then for any measurable [[partition of a set|partition]] of ''S'', say <math>\left\{B_i\right\}_{i=1}^{n}</math>, we have that

:<math>\left(X\left(B_1\right),\dots,X\left(B_n\right)\right) \sim \mathrm{Dirichlet}\left(\alpha H\left(B_1\right),\dots, \alpha H\left(B_n\right)\right).</math>

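For example, for a two-set partition <math>\{B, S \setminus B\}</math> this reduces to <math>X(B) \sim \operatorname{Beta}\left(\alpha H(B), \alpha H(S \setminus B)\right)</math>, since a two-dimensional Dirichlet distribution is a [[beta distribution]]. In particular, <math>\operatorname{E}\left[X(B)\right] = H(B)/H(S)</math>, so the (normalized) base measure fixes the expected probability assigned to each set, while <math>\alpha</math> controls how tightly the random probabilities concentrate around that expectation.
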
==The Chinese restaurant process==
{{main|Chinese restaurant process}}

As shown above, a simple distribution, the so-called '''[[Chinese restaurant process]]''', results from considering the conditional distribution of one component assignment given all previous ones in a [[Dirichlet distribution]] [[mixture model]] with <math>K</math> components, and then taking the limit as <math>K</math> goes to infinity. It can be shown, using the above formal definition of the Dirichlet process and considering the process-centered view of the process, that the conditional distribution of the component assignment of one sample from the process given all previous samples follows a Chinese restaurant process.

Suppose that <math>J</math> samples, <math>\left\{\theta_j\right\}_{j=1}^{J}</math>, have already been obtained. According to the Chinese restaurant process, the <math>\left(J+1\right)^{\mathrm{th}}</math> sample should be drawn from

:<math>\theta_{J+1} \sim \frac{1}{H\left(S\right)+J} \left( H + \sum_{j=1}^{J}\delta_{\theta_j}\right) </math>

where <math>\delta_{\theta}</math> is an atomic distribution centered on <math>\theta</math>. Interpreting this, two properties are clear:

# Even if <math>S</math> is an uncountable set, there is a finite probability that two samples will have exactly the same value. Samples from a Dirichlet process are therefore discrete.
# The Dirichlet process exhibits a self-reinforcing property; the more often a given value has been sampled in the past, the more likely it is to be sampled again.

The name "Chinese restaurant process" is derived from the following analogy: imagine an infinitely large restaurant containing an infinite number of tables, and able to serve an infinite number of dishes. The restaurant in question operates a somewhat unusual seating policy whereby new diners are seated either at a currently occupied table with probability proportional to the number of guests already seated there, or at an empty table with probability proportional to a constant. Guests who sit at an occupied table must order the same dish as those currently seated, whereas guests allocated a new table are served a new dish at random. The distribution of dishes after <math>J</math> guests are served is a sample drawn as described above. The Chinese restaurant process is related to the [[Urn problem|Pólya urn]] sampling scheme for finite Dirichlet distributions.

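A minimal sketch of this seating rule in Python is given below; it records table indices only, whereas in the mixture-model setting each newly opened table would additionally receive a parameter ("dish") drawn from <math>H</math>.

<syntaxhighlight lang="python">
import numpy as np

def chinese_restaurant_process(alpha, n, rng=None):
    """Seat n customers according to the CRP and return their table indices.

    Customer i (0-indexed) joins existing table k with probability
    counts[k] / (i + alpha) and opens a new table with probability
    alpha / (i + alpha)."""
    rng = rng or np.random.default_rng(0)
    counts = []        # counts[k] = number of customers already at table k
    assignments = []
    for i in range(n):
        probs = np.array(counts + [alpha]) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)   # open a new table
        else:
            counts[k] += 1     # join an existing table
        assignments.append(k)
    return assignments

print(chinese_restaurant_process(alpha=1.5, n=10))
</syntaxhighlight>
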
==The stick-breaking process==
A third approach to the Dirichlet process is provided by the so-called ''stick-breaking process'', which can be used to provide a constructive algorithm (the ''stick-breaking construction'') for generating a Dirichlet process. Let <math>\left\{\beta'_k\right\}_{k=1}^\infty </math> be a set of random variables such that

:<math>\beta'_k \sim \mathrm{Beta}\left(1,\alpha\right)</math>

where <math>\alpha</math> is the normalisation constant of the measure <math>H</math> (its total mass), so that <math>H = \alpha H_\text{norm}</math> with <math>H_\text{norm}</math> a probability measure. Define <math>\left\{\beta_k\right\}_{k=1}^\infty </math> according to

:<math>\beta_k = \beta'_k\cdot\prod_{i=1}^{k-1}\left(1-\beta'_i\right)</math>

and let <math>\left\{\theta_k\right\}_{k=1}^{\infty}</math> be a set of samples from <math>H_\text{norm}</math>. The distribution given by the density <math>f(\theta) = \sum_{k=1}^{\infty}\beta_k\cdot\delta_{\theta_k}(\theta)</math> (where <math>\delta_{\theta_k}</math> is the [[Dirac_delta_function#As_a_measure|Dirac delta measure]] centered at <math>\theta_k</math>, here used as an [[indicator function]] that evaluates to <math>1</math> at <math>\theta_k</math> and to <math>0</math> elsewhere) is then a sample from the corresponding Dirichlet process. This method provides an explicit construction of the non-parametric sample, and makes clear the fact that the samples are discrete.

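A minimal sketch of this construction in Python, truncated to a finite number of components, is given below; the standard normal base measure is an illustrative assumption, and because the infinite tail is discarded the returned weights sum to slightly less than one.

<syntaxhighlight lang="python">
import numpy as np

def stick_breaking(alpha, base_draw, truncation, rng=None):
    """Truncated stick-breaking construction of one draw from DP(H, alpha).

    Returns the weights beta_k and the atoms theta_k of the discrete
    random measure, keeping only the first `truncation` components."""
    rng = rng or np.random.default_rng(0)
    beta_prime = rng.beta(1.0, alpha, size=truncation)   # beta'_k ~ Beta(1, alpha)
    remainder = np.concatenate(([1.0], np.cumprod(1.0 - beta_prime)[:-1]))
    weights = beta_prime * remainder                     # beta_k = beta'_k * prod_{i<k} (1 - beta'_i)
    atoms = np.array([base_draw(rng) for _ in range(truncation)])  # theta_k ~ H_norm
    return weights, atoms

# Example with a standard normal base measure (illustrative choice).
weights, atoms = stick_breaking(alpha=3.0, base_draw=lambda rng: rng.normal(), truncation=1000)
</syntaxhighlight>
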
The name 'stick-breaking' comes from the interpretation of <math>\beta_k</math> as the length of the piece of a unit-length stick assigned to the ''k''th value. After the first ''k'' − 1 values have their portions assigned, the length of the remainder of the stick, <math>\prod_{i=1}^{k-1}\left(1-\beta'_i\right)</math>, is broken according to a sample <math>\beta'_k</math> from a [[beta distribution]]. In this analogy, <math>\beta'_k</math> indicates the portion of the remainder to be assigned to the ''k''th value. The smaller <math>\alpha</math> is, the less of the stick will be left for subsequent values (on average).

==The Polya urn scheme==
Yet another way to visualize the Dirichlet process and Chinese restaurant process is as a modified [[Polya urn scheme]]. Imagine that we start with an urn filled with <math>\alpha</math> black balls. Then we proceed as follows:

#Each time we need an observation, we draw a ball from the urn.
#If the ball is black, we generate a new (non-black) color uniformly, label a new ball this color, drop the new ball into the urn along with the ball we drew, and return the color we generated.
#Otherwise, label a new ball with the color of the ball we drew, drop the new ball into the urn along with the ball we drew, and return the color we observed.

The resulting distribution over colors is the same as the distribution over tables in the Chinese restaurant process. Furthermore, when we draw a black ball, if rather than generating a new color, we instead pick a random value from a base distribution <math>H</math> and use that value to label the new ball, the resulting distribution over labels will be the same as the distribution over values in a Dirichlet process.

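A minimal sketch of this urn scheme in Python is given below, using the variant in which new balls are labelled by draws from a base distribution <math>H</math> (here, illustratively, a standard normal), so that the returned labels behave like draws governed by a Dirichlet process.

<syntaxhighlight lang="python">
import numpy as np

def polya_urn(alpha, base_draw, n, rng=None):
    """Generate n labels via the modified Polya urn scheme.

    With probability alpha / (alpha + number of colored balls) we draw a
    "black ball" and create a fresh label from H; otherwise we repeat the
    label of a colored ball chosen uniformly from the urn."""
    rng = rng or np.random.default_rng(0)
    labels = []
    for _ in range(n):
        if rng.uniform() < alpha / (alpha + len(labels)):    # drew a black ball
            labels.append(base_draw(rng))                    # new label drawn from H
        else:
            labels.append(labels[rng.integers(len(labels))]) # copy an existing label
    return labels

print(polya_urn(alpha=1.0, base_draw=lambda rng: rng.normal(), n=10))
</syntaxhighlight>
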
==Applications of the Dirichlet process==
Dirichlet processes are frequently used in ''Bayesian [[Non-parametric statistics|nonparametric statistics]]''. "Nonparametric" here does not mean a parameter-less model, but rather a model in which representations grow as more data are observed. Bayesian nonparametric models have gained considerable popularity in the field of machine learning because of the above-mentioned flexibility, especially in [[unsupervised learning]]. In a Bayesian nonparametric model, the prior and posterior distributions are not parametric distributions, but stochastic processes.<ref>{{cite book
 | title = Bayesian Nonparametrics
 | isbn = 0-521-51346-4
 | publisher = Cambridge University Press
 | author = [[Nils Lid Hjort]], Chris Holmes, Peter Müller and Stephen G. Walker
 | year = 2010
}}</ref> The fact that the Dirichlet distribution is a probability distribution on the [[simplex]] of non-negative numbers that sum to one makes it a good candidate to model distributions of distributions or distributions of functions. Additionally, the non-parametric nature of this model makes it an ideal candidate for clustering problems where the distinct number of clusters is unknown beforehand.

As draws from a Dirichlet process are discrete, an important use is as a [[prior probability]] in [[infinite mixture model]]s. In this case, <math>S</math> is the parametric set of component distributions. The generative process is therefore that a sample is drawn from a Dirichlet process, and for each data point in turn a value is drawn from this sample distribution and used as the component distribution for that data point. The fact that there is no limit to the number of distinct components which may be generated makes this kind of model appropriate for the case when the number of mixture components is not well-defined in advance. An example is the infinite mixture of Gaussians model.<ref>{{cite journal
 | last = Rasmussen
 | first = Carl
 | title = The Infinite Gaussian Mixture Model
 | journal = Advances in Neural Information Processing Systems (NIPS)
 | volume = 12
 | pages = 554–560
 | year = 2000
 | url = http://www.gatsby.ucl.ac.uk/~edward/pub/inf.mix.nips.99.pdf
 | format = PDF
}}</ref>

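As an illustration, the generative side of such an infinite Gaussian mixture can be sketched in Python as follows; the prior over cluster means, the unit observation noise, and the omission of any posterior inference are simplifying assumptions made purely for illustration.

<syntaxhighlight lang="python">
import numpy as np

def sample_dp_gaussian_mixture(alpha, n, rng=None):
    """Generate n observations from a Dirichlet process mixture of Gaussians,
    using the Chinese restaurant process (data generation only, no inference).

    Illustrative choices: cluster means ~ Normal(0, 5^2), unit observation noise."""
    rng = rng or np.random.default_rng(0)
    counts, means = [], []
    data, assignments = [], []
    for i in range(n):
        probs = np.array(counts + [alpha]) / (i + alpha)   # CRP seating probabilities
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):                               # new cluster: draw its mean from the base distribution
            counts.append(0)
            means.append(rng.normal(0.0, 5.0))
        counts[k] += 1
        assignments.append(k)
        data.append(rng.normal(means[k], 1.0))             # x_i ~ Normal(mean_k, 1)
    return np.array(data), assignments

data, z = sample_dp_gaussian_mixture(alpha=1.0, n=200)
</syntaxhighlight>
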
The infinite nature of these models also lends them to [[natural language processing]] applications, where it is often desirable to treat the vocabulary as an infinite, discrete set.

==Related distributions==
* The [[Pitman–Yor process]] is a generalization of the Dirichlet process to accommodate power-law tails.
* The [[hierarchical Dirichlet process]] extends the ordinary Dirichlet process for modelling grouped data.

==References==
<references/>

==External links==
* [https://www.ee.washington.edu/techsite/papers/documents/UWEETR-2010-0006.pdf Introduction to the Dirichlet Distribution and Related Processes by Frigyik, Kapila and Gupta]
* [http://www.gatsby.ucl.ac.uk/~ywteh/research/npbayes/Teh2010a.pdf Yee Whye Teh's overview of Dirichlet processes]
* [http://www.cs.toronto.edu/~beal/npbayes/ Webpage for the NIPS 2003 workshop on non-parametric Bayesian methods]
* [http://www.cs.berkeley.edu/~jordan/nips-tutorial05.ps Michael Jordan's NIPS 2005 tutorial: ''Nonparametric Bayesian Methods: Dirichlet Processes, Chinese Restaurant Processes and All That'']
* [http://www.maths.bris.ac.uk/~maxvd/cribsheet.pdf Peter Green's summary of construction of Dirichlet Processes]
* [http://www.stats.bris.ac.uk/~peter/papers/GreenCDP.pdf Peter Green's paper on probabilistic models of Dirichlet Processes with implications for statistical modelling and analysis]
* [http://learning.eng.cam.ac.uk/zoubin/talks/uai05tutorial-b.pdf Zoubin Ghahramani's UAI 2005 tutorial on Nonparametric Bayesian methods]
* [http://ClusterAnalysis.org GIMM software for performing cluster analysis using Infinite Mixture Models]
* [http://www.ece.sunysb.edu/~zyweng/dpcluster.html A Toy Example of Clustering using Dirichlet Process.] by Zhiyuan Weng

{{Stochastic processes}}

{{DEFAULTSORT:Dirichlet Process}}
[[Category:Stochastic processes]]
[[Category:Non-parametric Bayesian methods]]