In [[probability theory]], the '''total variation distance''' is a distance measure for probability distributions. It is an example of a [[statistical distance]] metric, and is sometimes just called "the" '''statistical distance'''.

==Definition==
The total variation distance between two [[probability measure]]s ''P'' and ''Q'' on a [[sigma-algebra]] <math>\mathcal{F}</math> of [[subset]]s of the sample space <math>\Omega</math> is defined via<ref name=Chatterjee2007>{{cite web|last=Chatterjee|first=Sourav|title=Distances between probability measures|url=http://www.stat.berkeley.edu/~sourav/Lecture2.pdf|publisher=UC Berkeley|accessdate=21 June 2013}}</ref>
:<math>\delta(P,Q)=\sup_{ A\in \mathcal{F}}\left|P(A)-Q(A)\right|. </math> | |||
Informally, this is the largest possible difference between the probabilities that the two [[probability distribution]]s can assign to the same event.
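For example, if ''P'' is a fair coin (probability 1/2 of heads) and ''Q'' is a coin with probability 3/4 of heads, the supremum is attained at the event "heads" (or equally at its complement), giving <math>\delta(P,Q) = |1/2 - 3/4| = 1/4</math>.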
For a [[Categorical distribution|finite alphabet]] we can relate the total variation distance to the [[Lp-norm|1-norm]] of the difference of the two probability distributions as follows:<ref>http://books.google.com/books?id=6Cg5Nq5sSv4C&lpg=PP1&pg=PA48#v=onepage&q&f=false</ref>
:<math>\delta(P,Q) = \frac 1 2 \|P-Q\|_1 = \frac 1 2 \sum_x \left| P(x) - Q(x) \right|\;.</math> | |||
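A minimal Python sketch of this identity, using two arbitrarily chosen example distributions on a three-letter alphabet, computes the distance both as half the 1-norm and directly from the supremum definition by enumerating all events:
<syntaxhighlight lang="python">
import itertools

# Two example distributions on the alphabet {0, 1, 2} (arbitrary values).
p = {0: 0.5, 1: 0.3, 2: 0.2}
q = {0: 0.2, 1: 0.3, 2: 0.5}

# Total variation distance as half the 1-norm of the difference.
tv_l1 = 0.5 * sum(abs(p[x] - q[x]) for x in p)

# The same quantity via the supremum definition: maximize |P(A) - Q(A)|
# over every event A, i.e. every subset of the alphabet.
tv_sup = max(
    abs(sum(p[x] for x in a) - sum(q[x] for x in a))
    for r in range(len(p) + 1)
    for a in itertools.combinations(p, r)
)

print(tv_l1, tv_sup)  # both print 0.3
</syntaxhighlight>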
For arbitrary sample spaces, an equivalent definition of the total variation distance is
:<math>\delta(P,Q) = \frac 1 2 \int_\Omega \left| f_P - f_Q \right| \, d\mu\;,</math>
where <math>\mu</math> is an arbitrary positive measure such that both <math>P</math> and <math>Q</math> are [[absolutely continuous]] with respect to it, and where <math>f_P</math> and <math>f_Q</math> are the [[Radon–Nikodym theorem|Radon–Nikodym]] derivatives of <math>P</math> and <math>Q</math> with respect to <math>\mu</math>.
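As a numerical illustration, this integral can be approximated on a grid. The sketch below takes <math>\mu</math> to be Lebesgue measure and uses two normal densities as an arbitrary example choice:
<syntaxhighlight lang="python">
import numpy as np

def normal_pdf(x, mean, std):
    """Density of a normal distribution evaluated at the points x."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# Grid wide enough to capture essentially all the mass of both densities.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

f_p = normal_pdf(x, mean=0.0, std=1.0)  # density of P (example choice)
f_q = normal_pdf(x, mean=1.0, std=1.0)  # density of Q (example choice)

# delta(P, Q) = (1/2) * integral of |f_P - f_Q| d(mu), approximated
# here by a Riemann sum on the grid.
tv = 0.5 * np.sum(np.abs(f_p - f_q)) * dx
print(tv)  # about 0.3829 for N(0,1) versus N(1,1)
</syntaxhighlight>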
The total variation distance is related to the [[Kullback–Leibler divergence]] by [[Pinsker's inequality]].
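Explicitly, Pinsker's inequality states that
:<math>\delta(P,Q) \le \sqrt{\tfrac{1}{2} D_{\mathrm{KL}}(P \parallel Q)}\;,</math>
where <math>D_{\mathrm{KL}}(P \parallel Q)</math> denotes the Kullback–Leibler divergence measured in [[Nat (unit)|nats]].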
==See also==
*[[Total variation]]
*[[Kolmogorov–Smirnov test]]
*[[Wasserstein metric]]

==References==
{{reflist}}

[[Category:Probability theory]]
[[Category:F-divergences]]

{{probability-stub}}