In [[quantum information theory]], '''quantum relative entropy''' is a measure of distinguishability between two [[density matrix|quantum states]]. It is the quantum mechanical analog of [[relative entropy]].
 
== Motivation ==
 
For simplicity, it will be assumed that all objects in the article are finite dimensional.
 
We first discuss the classical case. Suppose the probabilities of a finite sequence of events are given by the probability distribution ''P'' = {''p''<sub>1</sub>, ..., ''p''<sub>''n''</sub>}, but somehow we mistakenly assume them to be ''Q'' = {''q''<sub>1</sub>, ..., ''q''<sub>''n''</sub>}. For instance, we might mistake an unfair coin for a fair one. According to this erroneous assumption, our uncertainty about the ''j''-th event, or equivalently, the amount of information provided after observing the ''j''-th event, is
 
:<math>\; - \log q_j.</math>
 
The (assumed) average uncertainty of all possible events is then
 
:<math>\; - \sum_j p_j \log q_j.</math>
 
On the other hand, the [[Shannon entropy]] of the probability distribution ''p'', defined by
 
:<math>\; - \sum_j p_j \log p_j,</math>
 
is the real amount of uncertainty before observation. Therefore the difference between these two quantities
 
:<math>\; - \sum_j p_j \log q_j - \left(- \sum_j p_j \log p_j\right) = \sum_j p_j \log p_j - \sum_j p_j \log q_j</math>
 
is a measure of the distinguishability of the two probability distributions ''p'' and ''q''. This is precisely the classical relative entropy, or [[Kullback–Leibler divergence]]:
 
:<math>D_{\mathrm{KL}}(P\|Q) = \sum_j p_j \log \frac{p_j}{q_j} \!.</math>
 
'''Note'''
#In the definitions above, the convention that 0·log&nbsp;0&nbsp;=&nbsp;0 is assumed, since lim<sub>''x''&nbsp;→&nbsp;0</sub> ''x''&nbsp;log&nbsp;''x''&nbsp;=&nbsp;0. Intuitively, one would expect an event of zero probability to contribute nothing towards the entropy.
#The relative entropy is not a [[metric space|metric]]. For example, it is not symmetric. The uncertainty discrepancy in mistaking a fair coin to be unfair is not the same as the opposite situation.
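
The following is a minimal numerical sketch of the classical relative entropy and of the asymmetry noted above; the function name, the use of the natural logarithm, and the 70/30 coin are illustrative assumptions, not part of the original text.

<syntaxhighlight lang="python">
import numpy as np

def kl_divergence(p, q):
    """Classical relative entropy D(P||Q) = sum_j p_j log(p_j / q_j).

    Uses the convention 0 * log 0 = 0 and returns infinity when some
    p_j > 0 has q_j = 0. The natural logarithm is assumed here.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Mistaking an unfair coin (70/30) for a fair one, and vice versa:
# the two values differ, so relative entropy is not symmetric.
unfair, fair = [0.7, 0.3], [0.5, 0.5]
print(kl_divergence(unfair, fair))   # ~0.0823
print(kl_divergence(fair, unfair))   # ~0.0872
</syntaxhighlight>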
 
== Definition ==
 
As with many other objects in quantum information theory, quantum relative entropy is defined by extending the classical definition from probability distributions to [[density matrix|density matrices]]. Let ''ρ'' be a density matrix. The [[von Neumann entropy]] of ''ρ'', which is the quantum mechanical analog of the Shannon entropy, is given by
 
:<math>S(\rho) = - \operatorname{Tr} \rho \log \rho.</math>
 
For two density matrices ''ρ'' and ''σ'', the '''quantum relative entropy of ''ρ'' with respect to ''σ''''' is defined by
 
:<math>
S(\rho \| \sigma) = - \operatorname{Tr} \rho \log \sigma - S(\rho) = \operatorname{Tr} \rho \log \rho - \operatorname{Tr} \rho \log \sigma = \operatorname{Tr}\rho (\log \rho - \log \sigma).
</math>
 
We see that when ''ρ'' and ''σ'' commute, i.e. ''ρσ'' = ''σρ'', they are simultaneously diagonalizable and the situation is effectively classical; the definition then coincides with the classical relative entropy of their eigenvalue distributions.
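
As a rough illustration of the definition (not part of the original article), the quantity can be evaluated numerically with a matrix logarithm; the sketch below assumes both density matrices have full rank, with the rank-deficient case treated in the next subsection.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import logm

def quantum_relative_entropy(rho, sigma):
    """S(rho||sigma) = Tr[rho (log rho - log sigma)].

    A minimal sketch assuming both density matrices are full rank,
    so the matrix logarithms are well defined; natural log is used.
    """
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

# For commuting ("classical") states the value reduces to the classical
# relative entropy of the eigenvalue distributions, as noted above.
rho = np.diag([0.7, 0.3])
sigma = np.diag([0.5, 0.5])
print(quantum_relative_entropy(rho, sigma))   # ~0.0823, the coin example again
</syntaxhighlight>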
 
=== Non-finite relative entropy ===
 
In general, the ''support'' of a matrix ''M'' is the orthogonal complement of its [[kernel (matrix)|kernel]], i.e. ''supp''(''M'') := ''ker''(''M'')<sup>⊥</sup>. When considering the quantum relative entropy, we adopt the convention that &minus;''s''&nbsp;·&nbsp;log&nbsp;0&nbsp;=&nbsp;∞ for any ''s''&nbsp;>&nbsp;0. This leads to the definition that
 
:<math>S(\rho \| \sigma) = \infty</math>
 
when
 
:<math>\operatorname{supp}(\rho) \cap \ker(\sigma) \neq \{0\}.</math>
 
This makes physical sense. Informally, the quantum relative entropy is a measure of our ability to distinguish two quantum states. But orthogonal quantum states can always be distinguished via [[quantum measurement|projective measurement]]s. In the present context, this is reflected by non-finite quantum relative entropy.
 
In the interpretation given in the previous section, if the true state ''ρ'' has support intersecting ''ker''(''σ''), then the assumed state ''σ'' assigns probability zero to outcomes that actually occur; this is an error impossible to recover from.
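
The convention can be implemented numerically as sketched below; the function name and the eigenvalue cutoff ''tol'' are illustrative assumptions. Both states are diagonalized, and any term with ''p<sub>i</sub>'' > 0, ''q<sub>j</sub>'' = 0 and nonzero overlap contributes +∞.

<syntaxhighlight lang="python">
import numpy as np

def quantum_relative_entropy_general(rho, sigma, tol=1e-12):
    """S(rho||sigma) allowing rank-deficient states, following the
    convention -s * log 0 = +infinity for s > 0."""
    p, v = np.linalg.eigh(rho)               # eigenvalues/eigenvectors of rho
    q, w = np.linalg.eigh(sigma)             # eigenvalues/eigenvectors of sigma
    P = np.abs(v.conj().T @ w) ** 2          # overlaps |<v_i|w_j>|^2
    pos_p, pos_q = p > tol, q > tol
    # Any weight of rho on ker(sigma) makes the relative entropy infinite.
    if np.any(P[np.ix_(pos_p, ~pos_q)] > tol):
        return np.inf
    s_rho = -np.sum(p[pos_p] * np.log(p[pos_p]))          # von Neumann entropy S(rho)
    cross = -np.sum(p[pos_p][:, None] * P[np.ix_(pos_p, pos_q)]
                    * np.log(q[pos_q])[None, :])          # -Tr[rho log sigma]
    return float(cross - s_rho)

# Orthogonal pure states are perfectly distinguishable: the value is infinite.
rho = np.diag([1.0, 0.0])
sigma = np.diag([0.0, 1.0])
print(quantum_relative_entropy_general(rho, sigma))   # inf
</syntaxhighlight>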
 
== Klein's inequality ==
=== Corresponding classical statement ===
 
For the classical Kullback–Leibler divergence, it can be shown that
 
:<math>D_{\mathrm{KL}}(P\|Q) = \sum_j p_j \log \frac{p_j}{q_j} \geq 0,</math>
 
and the equality holds if and only if ''P'' = ''Q''. Colloquially, this means that the uncertainty calculated using erroneous assumptions is always greater than the real amount of uncertainty.
 
To show the inequality, we rewrite
 
:<math>D_{\mathrm{KL}}(P\|Q) = \sum_j p_j \log \frac{p_j}{q_j} = \sum_j p_j \left(- \log \frac{q_j}{p_j}\right).</math>
 
Notice that log is a [[concave function]]. Therefore -log is [[convex function|convex]]. Applying [[Jensen's inequality]] to -log gives
 
:<math>
D_{\mathrm{KL}}(P\|Q) = \sum_j p_j \left(- \log \frac{q_j}{p_j}\right) \geq - \log \left( \sum_j \frac{q_j}{p_j} p_j \right) = - \log \left( \sum_j q_j \right) = 0.
</math>
 
Jensen's inequality also states that, since -log is strictly convex, equality holds if and only if the ratio ''q<sub>i</sub>''/''p<sub>i</sub>'' is the same for all ''i''; because both distributions sum to 1, this forces ''q<sub>i</sub>'' = ''p<sub>i</sub>'' for all ''i'', i.e. ''P'' = ''Q''.
 
=== The result ===
 
Klein's inequality states that the quantum relative entropy
 
:<math>
S(\rho \| \sigma) = \operatorname{Tr}\rho (\log \rho - \log \sigma)
</math>
 
is non-negative in general. It is zero if and only if ''ρ'' = ''σ''.
 
'''Proof'''
 
Let ''ρ'' and ''σ'' have spectral decompositions
 
:<math>\rho = \sum_i p_i v_i v_i ^* \; , \; \sigma = \sum_i q_i w_i w_i ^*.</math>
 
So
 
:<math>\log \rho = \sum_i (\log p_i) v_i v_i ^* \; , \; \log \sigma = \sum_i (\log q_i)w_i w_i ^*.</math>
 
Direct calculation gives
 
:<math>S(\rho \| \sigma)</math>
:<math>= \sum_k p_k \log p_k - \sum_{i,j} (p_i \log q_j) | v_i ^* w_j |^2</math>
:<math>= \sum_i p_i ( \log p_i - \sum_j \log q_j | v_i ^* w_j |^2)</math>
:<math>\;= \sum_i p_i (\log p_i - \sum_j (\log q_j )P_{ij}),</math> where ''P<sub>i j</sub>'' = |''v<sub>i</sub>*w<sub>j</sub>''|<sup>2</sup>.
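
The cross term here comes from expanding both spectral decompositions, a short check not spelled out above:

:<math>\operatorname{Tr} \rho \log \sigma = \sum_{i,j} p_i (\log q_j) \operatorname{Tr}\left( v_i v_i^* w_j w_j^* \right) = \sum_{i,j} (p_i \log q_j) | v_i^* w_j |^2.</math>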
 
Since both {''v<sub>i</sub>''} and {''w<sub>j</sub>''} are orthonormal bases, each row and each column of the matrix (''P<sub>i j</sub>'')''<sub>i j</sub>'' sums to 1, so it is a [[doubly stochastic matrix]]. Because -log is a convex function, the above expression is
 
:<math>\geq \sum_i p_i \left(\log p_i - \log \Big(\sum_j q_j P_{ij}\Big)\right).</math>
 
Define ''r''<sub>i</sub> = ∑<sub>''j''</sub>''q<sub>j</sub> P<sub>i j</sub>''. Then {''r''<sub>i</sub>} is a probability distribution. From the non-negativity of classical relative entropy, we have
 
:<math>S(\rho \| \sigma) \geq \sum_i p_i \log \frac{p_i}{r_i} \geq 0.</math>
 
The second part of the claim follows from the fact that, since -log is strictly convex, equality is achieved in
 
:<math>
\sum_i p_i \left(\log p_i - \sum_j (\log q_j) P_{ij}\right) \geq \sum_i p_i \left(\log p_i - \log \Big(\sum_j q_j P_{ij}\Big)\right)
</math>
 
if and only if (''P<sub>i j</sub>'') is a [[permutation matrix]], which implies ''ρ'' = ''σ'', after a suitable labeling of the eigenvectors {''v<sub>i</sub>''} and {''w<sub>i</sub>''}.
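
Klein's inequality can also be checked numerically on randomly drawn full-rank states; the construction of the random states below is an illustrative choice, not anything prescribed by the proof.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import logm

def random_density_matrix(dim, rng):
    """An illustrative way to draw a random full-rank density matrix."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    m = a @ a.conj().T + 1e-6 * np.eye(dim)   # Hermitian and positive definite
    return m / np.real(np.trace(m))

rng = np.random.default_rng(0)
for _ in range(100):
    rho = random_density_matrix(3, rng)
    sigma = random_density_matrix(3, rng)
    s = np.real(np.trace(rho @ (logm(rho) - logm(sigma))))
    assert s >= -1e-9           # Klein's inequality: S(rho||sigma) >= 0
print("Klein's inequality held for all sampled pairs")
</syntaxhighlight>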
 
== An entanglement measure ==
 
Let a composite quantum system have state space
 
:<math>H = \otimes _k H_k</math>
 
and ''ρ'' be a density matrix acting on ''H''.
 
The '''relative entropy of entanglement''' of ''ρ'' is defined by
 
:<math>\; D_{\mathrm{REE}} (\rho) = \min_{\sigma} S(\rho \| \sigma)</math>
 
where the minimum is taken over the family of [[separable state]]s. A physical interpretation of the quantity is the optimal distinguishability of the state ''ρ'' from separable states.
 
Clearly, when ''ρ'' is not [[quantum entanglement|entangled]]
 
:<math>\; D_{\mathrm{REE}} (\rho) = 0</math>
 
by Klein's inequality.
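
As a standard illustration (not worked out in the original text), take the maximally entangled two-qubit state and the separable state

:<math>\rho = |\Phi^+\rangle\langle\Phi^+|, \quad |\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right), \qquad \sigma = \tfrac{1}{2}\left(|00\rangle\langle 00| + |11\rangle\langle 11|\right).</math>

Since ''ρ'' is pure, ''S''(''ρ'') = 0, and because the support of ''ρ'' lies in the span of |00⟩ and |11⟩, where log ''σ'' acts as &minus;(log&nbsp;2)''I'', a direct computation gives

:<math>S(\rho \| \sigma) = - \operatorname{Tr} \rho \log \sigma - S(\rho) = \log 2 - 0 = \log 2.</math>

This particular ''σ'' is in fact known to achieve the minimum, so ''D''<sub>REE</sub>(''ρ'') = log&nbsp;2, although verifying its minimality requires more work than this single computation.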
 
== Relation to other quantum information quantities ==
 
One reason the quantum relative entropy is useful is that several other important quantum information quantities are special cases of it. Often, theorems are stated in terms of the quantum relative entropy, and these lead to immediate corollaries concerning the other quantities. Below, we list some of these relations.
 
Let ''ρ''<sub>AB</sub> be the joint state of a bipartite system with subsystem ''A'' of dimension ''n''<sub>A</sub> and ''B'' of dimension ''n''<sub>B</sub>.  Let ''ρ''<sub>A</sub>, ''ρ''<sub>B</sub> be the respective reduced states, and ''I''<sub>A</sub>, ''I''<sub>B</sub> the respective identities. The [[maximally mixed state]]s are ''I''<sub>A</sub>/''n''<sub>A</sub> and ''I''<sub>B</sub>/''n''<sub>B</sub>.  Then it is possible to show with direct computation that
 
:<math>S(\rho_{A} \| I_{A}/n_A) = \log(n_A) - S(\rho_{A}), \;</math>
 
:<math>S(\rho_{AB} \| \rho_{A} \otimes \rho_{B}) = S(\rho_{A}) + S(\rho_{B}) - S(\rho_{AB}) = I(A:B), </math>
 
:<math>S(\rho_{AB} \| \rho_{A} \otimes I_{B}/n_B) = \log(n_B) + S(\rho_{A}) - S(\rho_{AB}) = \log(n_B) - S(B|A), </math>
 
where ''I''(''A'':''B'') is the [[quantum mutual information]] and ''S''(''B''|''A'') is the [[quantum conditional entropy]].
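
For instance, the first identity follows directly from the definition, using log(''I''<sub>A</sub>/''n''<sub>A</sub>) = &minus;log(''n''<sub>A</sub>)&nbsp;''I''<sub>A</sub> and Tr&nbsp;''ρ''<sub>A</sub> = 1:

:<math>S(\rho_{A} \| I_{A}/n_A) = \operatorname{Tr} \rho_A \log \rho_A - \operatorname{Tr} \rho_A \log (I_A/n_A) = -S(\rho_A) + \log(n_A) \operatorname{Tr} \rho_A = \log(n_A) - S(\rho_A).</math>

The other two identities follow in the same way, using log(''ρ''<sub>A</sub> ⊗ ''ρ''<sub>B</sub>) = log&nbsp;''ρ''<sub>A</sub> ⊗ ''I''<sub>B</sub> + ''I''<sub>A</sub> ⊗ log&nbsp;''ρ''<sub>B</sub>.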
 
 
[[Category:Quantum mechanical entropy]]
[[Category:Quantum information theory]]
