{{Unreferenced|date=August 2009}}
<div style="width: 367px; border: solid #aaa 1px; margin: 0 0 1em 1em; font-size: 90%; background: #f9f9f9; padding: 4px; text-align: left; float: right;">
<div>A monotonic likelihood ratio in distributions <math>f(x)</math> and <math>g(x)</math></div>
<div>[[Image:MLRP-illustration.png|none|]]</div>
The ratio of the [[probability density function|density functions]] above is increasing in the parameter <math>x</math>, so <math>f(x)/g(x)</math> satisfies the '''monotone likelihood ratio''' property.
</div>

In [[statistics]], the '''monotone likelihood ratio property''' is a property of the ratio of two [[probability density function]]s (PDFs). Formally, distributions <math>f(x)</math> and <math>g(x)</math> have the property if

: <math>\frac{f(x_1)}{g(x_1)} \geq \frac{f(x_0)}{g(x_0)}</math> for any <math>x_1 > x_0,</math>

that is, if the ratio is nondecreasing in the argument <math>x</math>.

If the functions are first-differentiable, the property may equivalently be stated as

:<math>\frac{\partial}{\partial x} \left( \frac{f(x)}{g(x)} \right) \geq 0.</math>

For two distributions that satisfy the definition with respect to some argument <math>x</math>, we say they "have the MLRP in <math>x</math>". For a family of distributions that all satisfy the definition with respect to some statistic <math>T(X)</math>, we say they "have the MLR in <math>T(X)</math>".
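As a concrete illustration (not from the article's sources), the definition can be checked numerically. The example below assumes, hypothetically, two normal densities <math>f</math> = Normal(1, 1) and <math>g</math> = Normal(0, 1); their ratio equals exp(''x'' − 1/2), which is increasing in ''x'':

```python
# Numerical check of the MLRP for two illustrative densities: f = Normal(1, 1)
# and g = Normal(0, 1), whose ratio f(x)/g(x) = exp(x - 1/2) is increasing in x.
import math

def normal_pdf(x, mu, sigma):
    """Density of the Normal(mu, sigma) distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def has_mlrp(f, g, grid):
    """Check that f(x)/g(x) is non-decreasing over an increasing grid of points."""
    ratios = [f(x) / g(x) for x in grid]
    return all(later >= earlier for earlier, later in zip(ratios, ratios[1:]))

grid = [i / 10 for i in range(-50, 51)]
f = lambda x: normal_pdf(x, 1.0, 1.0)
g = lambda x: normal_pdf(x, 0.0, 1.0)
print(has_mlrp(f, g, grid))  # True
```

Swapping the two densities reverses the ratio's direction, so `has_mlrp(g, f, grid)` is false for this pair.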

==Intuition==

The MLRP is used to represent a data-generating process with a straightforward relationship between the magnitude of some observed variable and the distribution from which it is drawn. If <math>f(x)</math> satisfies the MLRP with respect to <math>g(x)</math>, then the higher the observed value <math>x</math>, the more likely it is to have been drawn from distribution <math>f</math> rather than <math>g</math>. As usual for monotonic relationships, the likelihood ratio's monotonicity comes in handy in statistics, particularly when using [[Maximum likelihood|maximum-likelihood]] [[estimation]]. Distribution families with the MLR also have a number of well-behaved stochastic properties, such as [[first-order stochastic dominance]] and increasing [[hazard ratio]]s. The strength of this assumption, however, comes at the price of realism: many processes in the world do not exhibit a monotonic correspondence between input and output.

===Example: Working hard or slacking off===

Suppose you are working on a project, and you can either work hard or slack off. Call your choice of effort <math>e</math> and the quality of the resulting project <math>q</math>. If the MLRP holds for the distribution of <math>q</math> conditional on your effort <math>e</math>, then the higher the quality, the more likely you worked hard; conversely, the lower the quality, the more likely you slacked off.

# Choose effort <math>e \in \{H,L\}</math>, where <math>H</math> means high effort and <math>L</math> means low effort.
# Observe <math>q</math> drawn from <math>f(q\mid e)</math>. By [[Bayes' law]] with a uniform prior,
#: <math>\Pr[e=H\mid q]=\frac{f(q\mid H)}{f(q\mid H)+f(q\mid L)}</math>
# Suppose <math>f(q\mid e)</math> satisfies the MLRP. Rearranging, the probability the worker worked hard is
#: <math>\frac{1}{1+f(q\mid L)/f(q\mid H)},</math>
#: which, thanks to the MLRP, is monotonically increasing in <math>q</math>. Hence if an employer is conducting a "performance review", he can infer his employee's effort from the quality of the work.
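The monotonicity of this posterior can be spot-checked numerically. The sketch below assumes, purely for illustration, that quality given high effort is Normal(1, 1) and given low effort is Normal(0, 1), a pair with the MLRP in <math>q</math>:

```python
# Illustrative check that Pr[e = H | q] rises with observed quality q, assuming
# (hypothetically) f(q|H) = Normal(1, 1) and f(q|L) = Normal(0, 1).
import math

def pdf(q, mu):
    """Unit-variance normal density with mean mu."""
    return math.exp(-((q - mu) ** 2) / 2) / math.sqrt(2 * math.pi)

def posterior_hard(q):
    """Pr[e = H | q] under a uniform prior, via Bayes' law."""
    f_H, f_L = pdf(q, 1.0), pdf(q, 0.0)
    return f_H / (f_H + f_L)

qs = [i / 10 for i in range(-30, 31)]
posteriors = [posterior_hard(q) for q in qs]
print(all(b > a for a, b in zip(posteriors, posteriors[1:])))  # True
```

For this pair the posterior works out to a logistic function of <math>q</math>, so it increases smoothly from 0 toward 1.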

==Families of distributions satisfying MLR==

Statistical models often assume that data are generated by a distribution from some family of distributions and seek to identify that distribution. This task is simplified if the family has the monotone likelihood ratio property (MLRP).

A family of density functions <math>\{ f_\theta (x)\}_{\theta\in \Theta}</math> indexed by a parameter <math>\theta</math> taking values in an ordered set <math>\Theta</math> is said to have a '''monotone likelihood ratio (MLR)''' in the [[statistic]] <math>T(X)</math> if for any <math>\theta_1 < \theta_2</math>,

:<math>\frac{f_{\theta_2}(x)}{f_{\theta_1}(x)}</math> is a non-decreasing function of <math>T(x)</math>.

Then we say the family of distributions "has MLR in <math>T(X)</math>".

===List of families===

{| class="wikitable" style="margin: 1em 0 1em 0"
! Family !! <math>T(X)</math> in which <math>f_\theta(X)</math> has the MLR
|-
| [[Exponential distribution|Exponential<math>[\lambda]</math>]] || <math>\sum x_i</math> observations
|-
| [[Binomial distribution|Binomial<math>[n,p]</math>]] || <math>\sum x_i</math> observations
|-
| [[Poisson distribution|Poisson<math>[\lambda]</math>]] || <math>\sum x_i</math> observations
|-
| [[Normal distribution|Normal<math>[\mu,\sigma]</math>]] || if <math>\sigma</math> known, <math>\sum x_i</math> observations
|}
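The binomial row, for instance, can be verified directly: for <math>p_2 > p_1</math> the likelihood ratio depends on the sample only through the number of successes and is increasing in it. A small sketch with arbitrary example parameter values:

```python
# Check of the Binomial[n, p] row: for p2 > p1 the likelihood ratio
# f_{p2}(s) / f_{p1}(s) depends on the data only through s = sum(x_i)
# and is increasing in s. The parameter values here are arbitrary examples.
from math import comb

def binom_pmf(s, n, p):
    """Probability of s successes in n Bernoulli(p) trials."""
    return comb(n, s) * p ** s * (1 - p) ** (n - s)

def likelihood_ratio(s, n, p1, p2):
    return binom_pmf(s, n, p2) / binom_pmf(s, n, p1)

n, p1, p2 = 10, 0.3, 0.6
ratios = [likelihood_ratio(s, n, p1, p2) for s in range(n + 1)]
print(all(b > a for a, b in zip(ratios, ratios[1:])))  # True
```

Algebraically, the binomial coefficients cancel in the ratio, leaving <math>(p_2/p_1)^s \,((1-p_2)/(1-p_1))^{n-s}</math>, which grows by a constant factor greater than one with each additional success.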

===Hypothesis testing===

If the family of random variables has the MLRP in <math>T(X)</math>, a [[uniformly most powerful test]] can easily be determined for the hypotheses <math>H_0 : \theta \le \theta_0</math> versus <math>H_1 : \theta > \theta_0</math>.
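As a sketch of how this works, take the normal-with-known-variance row from the table above: the size-<math>\alpha</math> UMP test rejects exactly when <math>\sum x_i</math> exceeds a quantile cutoff. A minimal illustration using the standard-library `statistics.NormalDist` and hypothetical data:

```python
# Sketch of the UMP test for H0: mu <= mu0 vs H1: mu > mu0 when data are
# Normal(mu, sigma) with sigma known, so the family has MLR in T(X) = sum(x_i).
from statistics import NormalDist

def ump_cutoff(n, mu0, sigma, alpha):
    """Rejection threshold for sum(x_i): under mu = mu0,
    sum(x_i) ~ Normal(n * mu0, sigma * sqrt(n))."""
    return NormalDist(n * mu0, sigma * n ** 0.5).inv_cdf(1 - alpha)

def ump_test(xs, mu0, sigma, alpha=0.05):
    """True means reject H0 at level alpha."""
    return sum(xs) > ump_cutoff(len(xs), mu0, sigma, alpha)

print(ump_test([2.0] * 25, mu0=0.0, sigma=1.0))  # True: sum = 50, far above cutoff
print(ump_test([0.0] * 25, mu0=0.0, sigma=1.0))  # False
```

Because the likelihood ratio is monotone in <math>\sum x_i</math>, the same threshold rule is most powerful simultaneously against every alternative <math>\theta > \theta_0</math>, which is what makes the test ''uniformly'' most powerful.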

===Example: Effort and output===

Let <math>e</math> be an input into a stochastic technology, such as a worker's effort, and <math>y</math> its output, the likelihood of which is described by a probability density function <math>f(y;e)</math>. Then the monotone likelihood ratio property (MLRP) of the family <math>f</math> is expressed as follows: for any <math>e_1, e_2</math> with <math>e_2 > e_1</math>, the ratio <math>f(y;e_2)/f(y;e_1)</math> is increasing in <math>y</math>.

==Relation to other statistical properties==

If a family of distributions <math>f_\theta(x)</math> has the monotone likelihood ratio property in <math>T(X)</math>, then:
# the family has monotone decreasing [[hazard rate]]s in <math>\theta</math> (but not necessarily in <math>T(X)</math>);
# the family exhibits first-order (and hence second-order) [[stochastic dominance]] in <math>x</math>, and the best Bayesian update of <math>\theta</math> is increasing in <math>T(X)</math>.

But not conversely: neither monotone hazard rates nor stochastic dominance implies the MLRP.

===Proofs===

Let the distribution family <math>f_\theta</math> satisfy MLR in <math>x</math>, so that for <math>\theta_1>\theta_0</math> and <math>x_1>x_0</math>:

: <math>\frac{f_{\theta_1}(x_1)}{f_{\theta_0}(x_1)} \geq \frac{f_{\theta_1}(x_0)}{f_{\theta_0}(x_0)},</math>

or equivalently:

: <math>f_{\theta_1}(x_1) f_{\theta_0}(x_0) \geq f_{\theta_1}(x_0) f_{\theta_0}(x_1). \, </math>

Integrating this expression twice, we obtain:

{| cellpadding="2" style="border:1px solid darkgray;"
|-
| ''1. To <math>x_1</math> with respect to <math>x_0</math>''
: <math>\int_{\min X}^{x_1} f_{\theta_1}(x_1) f_{\theta_0}(x_0) \, dx_0 \geq \int_{\min X}^{x_1} f_{\theta_1}(x_0) f_{\theta_0}(x_1) \, dx_0</math>
Integrate and rearrange to obtain
:<math> \frac{f_{\theta_1}(x)}{f_{\theta_0}(x)} \geq \frac{F_{\theta_1}(x)}{F_{\theta_0}(x)} </math>
!width="50"|
| ''2. From <math>x_0</math> with respect to <math>x_1</math>''
: <math>\int_{x_0}^{\max X} f_{\theta_1}(x_1) f_{\theta_0}(x_0) \, dx_1 \geq \int_{x_0}^{\max X} f_{\theta_1}(x_0) f_{\theta_0}(x_1) \, dx_1</math>
Integrate and rearrange to obtain
:<math> \frac{1-F_{\theta_1}(x)}{1-F_{\theta_0}(x)} \geq \frac{f_{\theta_1}(x)}{f_{\theta_0}(x)} </math>
|}

====First-order stochastic dominance====

Combine the two inequalities above to get first-order dominance:
:<math>F_{\theta_1}(x) \leq F_{\theta_0}(x) \ \forall x</math>
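This dominance relation can be spot-checked numerically; the example below assumes, for illustration, the family Normal(<math>\theta</math>, 1), which has MLR in <math>x</math>:

```python
# Spot-check of first-order stochastic dominance for the illustrative family
# Normal(theta, 1): for theta1 > theta0 the CDFs satisfy
# F_theta1(x) <= F_theta0(x) at every x.
from statistics import NormalDist

theta0, theta1 = 0.0, 1.0          # example parameters, theta1 > theta0
F0 = NormalDist(theta0, 1.0).cdf
F1 = NormalDist(theta1, 1.0).cdf

grid = [i / 10 for i in range(-60, 61)]
print(all(F1(x) <= F0(x) for x in grid))  # True
```

For this shifted family the check is exact: <math>F_{\theta_1}(x) = \Phi(x - 1) \leq \Phi(x) = F_{\theta_0}(x)</math> because the standard normal CDF <math>\Phi</math> is increasing.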

====Monotone hazard rate====

Use only the second inequality above to get a monotone hazard rate:
:<math>\frac{f_{\theta_1}(x)}{1-F_{\theta_1}(x)} \leq \frac{f_{\theta_0}(x)}{1-F_{\theta_0}(x)} \ \forall x </math>
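Again taking the illustrative family Normal(<math>\theta</math>, 1), the hazard-rate ordering can be checked on a grid:

```python
# Spot-check of the hazard-rate ordering for the illustrative Normal(theta, 1)
# family: for theta1 > theta0,
#   f_theta1(x) / (1 - F_theta1(x)) <= f_theta0(x) / (1 - F_theta0(x)).
from statistics import NormalDist

def hazard(dist, x):
    """Hazard rate f(x) / (1 - F(x)) of a NormalDist at x."""
    return dist.pdf(x) / (1.0 - dist.cdf(x))

d0, d1 = NormalDist(0.0, 1.0), NormalDist(1.0, 1.0)  # theta0 < theta1
grid = [i / 10 for i in range(-50, 51)]
print(all(hazard(d1, x) <= hazard(d0, x) for x in grid))  # True
```

Here the ordering follows from the normal hazard rate being increasing: the Normal(1, 1) hazard at <math>x</math> equals the Normal(0, 1) hazard at <math>x - 1</math>, which is smaller.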

==Uses==

===Economics===

The MLR is an important condition on the type distribution of agents in [[mechanism design]]. Most solutions to mechanism design models assume that the type distribution satisfies the MLR, in order to take advantage of a common solution method.

{{Theory of probability distributions}}

{{DEFAULTSORT:Monotone Likelihood Ratio Property}}
[[Category:Theory of probability distributions]]
[[Category:Hypothesis testing]]