[[File:Dempster in Brest.JPG|thumb|Prof [[Arthur P. Dempster]] at the [http://bfas.iutlan.univ-rennes1.fr/belief2010/ Workshop on Theory of Belief Functions] ([[Brest, France|Brest]], 1 April 2010).]]
The '''Dempster–Shafer theory''' ('''DST''') is a mathematical theory of [[evidence]].<ref name="SH76">Shafer, Glenn; ''A Mathematical Theory of Evidence'', Princeton University Press, 1976, ISBN 0-608-02508-9</ref> It allows one to combine evidence from different sources and arrive at a degree of belief (represented by a belief function) that takes into account all the available evidence. The theory was first developed by [[Arthur P. Dempster]]<ref>{{cite journal|doi=10.1214/aoms/1177698950|title=Upper and lower probabilities induced by a multivalued mapping|journal=The Annals of Mathematical Statistics|year=1967|first=A. P.|last=Dempster|volume=38|issue=2|pages=325–339}}</ref> and Glenn Shafer.<ref name="SH76"/><ref>{{cite journal|author=Fine, Terrence L.|title=Review: Glenn Shafer, ''A mathematical theory of evidence''|journal=Bull. Amer. Math. Soc.|year=1977|volume=83|issue=4|pages=667–672|url=http://projecteuclid.org/euclid.bams/1183538896}}</ref>

In a narrow sense, the term '''Dempster–Shafer theory''' refers to the original conception of the theory by Dempster and Shafer. However, it is more common to use the term in the wider sense of the same general approach, as adapted to specific kinds of situations. In particular, many authors have proposed different rules for combining evidence, often with a view to handling conflicts in evidence better.<ref name=Sentz-Ferson>Kari Sentz and Scott Ferson (2002); [http://www.sandia.gov/epistemic/Reports/SAND2002-0835.pdf ''Combination of Evidence in Dempster–Shafer Theory''], Sandia National Laboratories SAND 2002-0835</ref>

==Overview==

Dempster–Shafer theory is a generalization of the [[Bayesian probability|Bayesian theory of subjective probability]]; whereas the latter requires probabilities for each question of interest, belief functions base degrees of belief (or confidence, or trust) for one question on the probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the two questions are related.<ref name="SH02">Shafer, Glenn; [http://www.glennshafer.com/assets/downloads/articles/article48.pdf ''Dempster–Shafer theory''], 2002</ref> Put another way, it is a way of representing [[epistemology|epistemic]] plausibilities, but it can yield answers that contradict those arrived at using [[probability theory]].

Often used as a method of [[sensor fusion]], Dempster–Shafer theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule<ref name="DE68">Dempster, Arthur P.; ''A generalization of Bayesian inference'', Journal of the Royal Statistical Society, Series B, Vol. 30, pp. 205–247, 1968</ref> for combining such degrees of belief when they are based on independent items of evidence. In essence, the degree of belief in a proposition depends primarily upon the number of answers (to the related questions) containing the proposition, and the subjective probability of each answer. Also contributing are the rules of combination that reflect general assumptions about the data.

In this formalism a '''degree of belief''' (also referred to as a '''mass''') is represented as a '''belief function''' rather than a [[Bayesianism|Bayesian]] [[probability distribution]]. Probability values are assigned to ''sets'' of possibilities rather than single events: their appeal rests on the fact that they naturally encode evidence in favor of propositions.

Dempster–Shafer theory assigns its masses to all of the non-empty subsets of the set of possible states of the system (the ''frame of discernment'').

===Belief and plausibility===

Shafer's framework allows for belief about propositions to be represented as intervals, bounded by two values, ''belief'' (or ''support'') and ''plausibility'':

:''belief'' ≤ ''plausibility''.

''Belief'' in a hypothesis is the sum of the masses of all subsets of the hypothesis (''i.e.'' the sum of the masses of all sets it encloses). It is the amount of belief that directly supports the hypothesis at least in part, and it forms a lower bound. Belief (usually denoted ''Bel'') measures the strength of the evidence in favor of a set of propositions. It ranges from 0 (indicating no evidence) to 1 (denoting certainty). ''Plausibility'' is 1 minus the sum of the masses of all sets whose intersection with the hypothesis is empty. It is an upper bound on the possibility that the hypothesis could be true, ''i.e.'' it “could possibly be the true state of the system” up to that value, because there is only so much evidence that contradicts the hypothesis. Plausibility (denoted by ''Pl'') is defined as ''Pl(s) = 1 − Bel(~s)''. It also ranges from 0 to 1 and measures the extent to which evidence in favor of ''~s'' leaves room for belief in ''s''.

For example, suppose we have a belief of 0.5 and a plausibility of 0.8 for the proposition “the cat in the box is dead.” This means that we have evidence that allows us to state strongly that the proposition is true with a confidence of 0.5. However, the evidence contrary to that hypothesis (“the cat is alive”) only has a confidence of 0.2. The remaining mass of 0.3 (the gap between the 0.5 supporting evidence on the one hand and the 0.2 contrary evidence on the other) is “indeterminate,” meaning that the cat could be either dead or alive. This interval represents the level of uncertainty based on the evidence in the system.

{| class="wikitable"
! Hypothesis !! Mass !! Belief !! Plausibility
|-
| Null (neither alive nor dead) || 0 || 0 || 0
|-
| Alive || 0.2 || 0.2 || 0.5
|-
| Dead || 0.5 || 0.5 || 0.8
|-
| Either (alive or dead) || 0.3 || 1.0 || 1.0
|}

The null hypothesis is set to zero by definition (it corresponds to “no solution”). The orthogonal hypotheses “Alive” and “Dead” have masses of 0.2 and 0.5, respectively. This could correspond to “Live/Dead Cat Detector” signals, which have respective reliabilities of 0.2 and 0.5. Finally, the all-encompassing “Either” hypothesis (which simply acknowledges there is a cat in the box) picks up the slack so that the sum of the masses is 1. The belief for the “Alive” and “Dead” hypotheses matches their corresponding masses because their only non-empty subsets are themselves; the belief for “Either” is the sum of all three masses (Either, Alive, and Dead) because “Alive” and “Dead” are each subsets of “Either”. The “Alive” plausibility is 1 − ''m''(Dead) and the “Dead” plausibility is 1 − ''m''(Alive). Finally, the “Either” plausibility sums ''m''(Alive) + ''m''(Dead) + ''m''(Either). The universal hypothesis (“Either”) always has 100% belief and plausibility; it acts as a [[checksum]] of sorts.

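The numbers in the table can be reproduced mechanically. The following minimal Python sketch is illustrative only; the set names and helper functions are hypothetical, with the mass assignment hard-coded from the table above:

<syntaxhighlight lang="python">
# Illustrative sketch: the cat example, masses taken from the table above.
ALIVE = frozenset({"alive"})
DEAD = frozenset({"dead"})
EITHER = ALIVE | DEAD

m = {ALIVE: 0.2, DEAD: 0.5, EITHER: 0.3}  # masses sum to 1; m(empty set) = 0

def bel(h):
    """Belief: sum of the masses of all (non-empty) subsets of hypothesis h."""
    return sum(v for s, v in m.items() if s <= h)

def pl(h):
    """Plausibility: sum of the masses of all sets that intersect hypothesis h."""
    return sum(v for s, v in m.items() if s & h)

for name, h in [("Alive", ALIVE), ("Dead", DEAD), ("Either", EITHER)]:
    print(name, round(bel(h), 10), round(pl(h), 10))
# Alive 0.2 0.5
# Dead 0.5 0.8
# Either 1.0 1.0
</syntaxhighlight>
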
Here is a somewhat more elaborate example where the behavior of belief and plausibility begins to emerge. We are looking through a variety of detector systems at a single faraway signal light, which can be only one of three colours (red, yellow, or green):

{| class="wikitable"
! Hypothesis !! Mass !! Belief !! Plausibility
|-
| Null || 0 || 0 || 0
|-
| Red || 0.35 || 0.35 || 0.56
|-
| Yellow || 0.25 || 0.25 || 0.45
|-
| Green || 0.15 || 0.15 || 0.34
|-
| Red or Yellow || 0.06 || 0.66 || 0.85
|-
| Red or Green || 0.05 || 0.55 || 0.75
|-
| Yellow or Green || 0.04 || 0.44 || 0.65
|-
| Any || 0.1 || 1.0 || 1.0
|}

Events of this kind would not be modeled as disjoint sets in probability space as they are here in mass assignment space. Rather, the event "Red or Yellow" would be considered as the union of the events "Red" and "Yellow", and (see [[probability axioms]]) ''P''(Red or Yellow) ≥ ''P''(Yellow), and ''P''(Any) = 1, where ''Any'' refers to ''Red'' or ''Yellow'' or ''Green''. In DST the mass assigned to ''Any'' refers to the proportion of evidence that cannot be assigned to any of the other states, which here means evidence that says there is a light but does not say anything about what color it is. In this example, the proportion of evidence that shows the light is either ''Red'' or ''Green'' is given a mass of 0.05. Such evidence might, for example, be obtained from a red/green color-blind person. DST lets us extract the value of this sensor's evidence. Also, in DST the null set is considered to have zero mass, meaning here that the signal light system exists and we are examining its possible states, not speculating as to whether it exists at all.

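As a check, the belief and plausibility columns follow from the masses and from the relation ''Pl''(''s'') = 1 − ''Bel''(''~s'') given above. For example:

:<math>\operatorname{Bel}(\text{Red or Yellow}) = m(\text{Red}) + m(\text{Yellow}) + m(\text{Red or Yellow}) = 0.35 + 0.25 + 0.06 = 0.66,</math>

:<math>\operatorname{Pl}(\text{Red}) = 1 - \operatorname{Bel}(\text{Yellow or Green}) = 1 - (0.25 + 0.15 + 0.04) = 0.56.</math>
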
===Combining beliefs===

Beliefs from different sources can be combined with various fusion operators to model specific situations of belief fusion, e.g. with ''[[#Dempster's rule of combination|Dempster's rule of combination]]'', which combines belief constraints<ref name="Jos12">{{cite journal|author=Jøsang, A., and Simon, P.|title=Dempster's Rule as Seen by Little Colored Balls|journal=Computational Intelligence|year=2012|volume=28|issue=4|pages=453–474|url=http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8640.2012.00421.x/pdf|doi=10.1111/j.1467-8640.2012.00421.x}}</ref> that are dictated by independent belief sources, such as in the case of combining hints<ref name="KM95">Kohlas, J., and Monney, P.A., 1995. ''A Mathematical Theory of Hints. An Approach to the Dempster–Shafer Theory of Evidence''. Vol. 425 in Lecture Notes in Economics and Mathematical Systems. Springer Verlag.</ref> or combining preferences.<ref name="JH12">Jøsang, A., and Hankin, R., 2012. ''Interpretation and Fusion of Hyper Opinions in Subjective Logic''. 15th International Conference on Information Fusion (FUSION) 2012. E-ISBN 978-0-9824438-4-2, IEEE. [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6289948]</ref> Note that the probability masses from propositions that contradict each other can be used to obtain a measure of conflict between the independent belief sources. Other situations can be modeled with different fusion operators, such as cumulative fusion of beliefs from independent sources, which can be modeled with the cumulative fusion operator.<ref name="JDR10">{{cite journal|author=Jøsang, A., Diaz, J., and Rifqi, M.|title=Cumulative and averaging fusion of beliefs|journal=Information Fusion|year=2010|volume=11|issue=2|pages=192–200|url=http://www.sciencedirect.com/science/article/pii/S156625350900044X|doi=10.1016/j.inffus.2009.05.005}}</ref>

[[#Dempster's rule of combination|Dempster's rule of combination]] is sometimes interpreted as an approximate generalisation of [[Bayes' rule]]. In this interpretation the priors and conditionals need not be specified, unlike traditional Bayesian methods, which often use a symmetry (minimax error) argument to assign prior probabilities to random variables (''e.g.'' assigning 0.5 to binary values for which no information is available about which is more likely). However, any information contained in the missing priors and conditionals is not used in [[Dempster's rule of combination]] unless it can be obtained indirectly—and arguably is then available for calculation using Bayes equations.

Dempster–Shafer theory allows one to specify a degree of ignorance in this situation instead of being forced to supply prior probabilities that add to unity. This sort of situation, and whether there is a real distinction between ''[[risk]]'' and ''[[ignorance]]'', has been extensively discussed by statisticians and economists. See, for example, the contrasting views of [[Ellsberg's paradox|Daniel Ellsberg]], [[Howard Raiffa]], [[Arrovian uncertainty|Kenneth Arrow]] and [[Knightian uncertainty|Frank Knight]].

==Formal definition==

Let ''X'' be the ''[[universal set]]'': the set representing all possible states of a system under consideration. The [[power set]]

:<math>2^X \,\!</math>

is the set of all subsets of ''X'', including the [[empty set]] <math>\varnothing</math>. For example, if:

:<math>X = \left \{ a, b \right \} \,\!</math>

then

:<math>2^X = \left \{ \varnothing, \left \{ a \right \}, \left \{ b \right \}, X \right \}. \,</math>

The elements of the power set can be taken to represent propositions concerning the actual state of the system, by containing all and only the states in which the proposition is true.

The theory of evidence assigns a belief mass to each element of the power set. Formally, a function

:<math>m: 2^X \rightarrow [0,1] \,\!</math>

is called a ''basic belief assignment'' (BBA), when it has two properties. First, the mass of the empty set is zero:

:<math>m(\varnothing) = 0. \,\!</math>

Second, the masses of the remaining members of the power set add up to a total of 1:

:<math>\sum_{A \in 2^X} m(A) = 1. \,\!</math>

The mass ''m''(''A'') of ''A'', a given member of the power set, expresses the proportion of all relevant and available evidence that supports the claim that the actual state belongs to ''A'' but to no particular subset of ''A''. The value of ''m''(''A'') pertains ''only'' to the set ''A'' and makes no additional claims about any subsets of ''A'', each of which has, by definition, its own mass.

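For a small finite frame, a basic belief assignment can be represented directly as a mapping from subsets to masses, and the two defining properties can be checked mechanically. A minimal sketch (the helper name <code>is_bba</code> is illustrative, not from the cited literature):

<syntaxhighlight lang="python">
import math

def is_bba(m, tol=1e-9):
    """Check the two BBA properties: m(empty set) = 0 and total mass = 1.

    `m` maps frozensets (subsets of the frame X) to mass values.
    """
    empty_mass_is_zero = m.get(frozenset(), 0.0) == 0.0
    masses_sum_to_one = math.isclose(sum(m.values()), 1.0, abs_tol=tol)
    return empty_mass_is_zero and masses_sum_to_one

# The mass assignment from the signal-light table above.
m = {
    frozenset({"R"}): 0.35, frozenset({"Y"}): 0.25, frozenset({"G"}): 0.15,
    frozenset({"R", "Y"}): 0.06, frozenset({"R", "G"}): 0.05,
    frozenset({"Y", "G"}): 0.04, frozenset({"R", "Y", "G"}): 0.10,
}
assert is_bba(m)
</syntaxhighlight>
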
From the mass assignments, the upper and lower bounds of a probability interval can be defined. This interval contains the precise probability of a set of interest (in the classical sense), and is bounded by two non-additive continuous measures called '''belief''' (or '''support''') and '''plausibility''':

:<math>\operatorname{bel}(A) \le P(A) \le \operatorname{pl}(A).</math>

The belief bel(''A'') for a set ''A'' is defined as the sum of all the masses of subsets of the set of interest:

:<math>\operatorname{bel}(A) = \sum_{B \mid B \subseteq A} m(B). \, </math>

The plausibility pl(''A'') is the sum of all the masses of the sets ''B'' that intersect the set of interest ''A'':

:<math>\operatorname{pl}(A) = \sum_{B \mid B \cap A \ne \varnothing} m(B). \, </math>

The two measures are related to each other as follows:

:<math>\operatorname{pl}(A) = 1 - \operatorname{bel}(\overline{A}).\,</math>

And conversely, for finite ''A'', given the belief measure bel(''B'') for all subsets ''B'' of ''A'', we can find the masses m(''A'') with the following inverse function:

:<math>m(A) = \sum_{B \mid B \subseteq A} (-1)^{|A-B|}\operatorname{bel}(B) \, </math>

where |''A'' − ''B''| is the cardinality of the set difference ''A'' − ''B'' (which, since ''B'' ⊆ ''A'', equals the difference of the two sets' cardinalities).<ref name=Sentz-Ferson/>

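On a small finite frame this inversion can be carried out mechanically by summing over all subsets. A sketch, assuming the belief values are supplied as a dictionary keyed by frozensets (values not listed are taken as 0; helper names are illustrative):

<syntaxhighlight lang="python">
from itertools import chain, combinations

def subsets(a):
    """All subsets of frozenset `a`, from the empty set up to `a` itself."""
    elems = list(a)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1)))

def mass(bel, a):
    """Moebius inversion: m(A) = sum over B subset of A of (-1)^|A - B| * bel(B)."""
    return sum((-1) ** len(a - b) * bel.get(b, 0.0) for b in subsets(a))

# Belief values from the two-state cat example above.
ALIVE, DEAD = frozenset({"alive"}), frozenset({"dead"})
bel = {frozenset(): 0.0, ALIVE: 0.2, DEAD: 0.5, ALIVE | DEAD: 1.0}
print(round(mass(bel, ALIVE | DEAD), 10))  # recovers m(Either) = 0.3
</syntaxhighlight>
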
It [[Logical consequence|follows from]] the last two equations that, for a finite set ''X'', one needs to know only one of the three (mass, belief, or plausibility) to deduce the other two, though one may need to know the values for many sets in order to calculate one of the other values for a particular set. In the case of an infinite ''X'', there can be well-defined belief and plausibility functions but no well-defined mass function.<ref>J.Y. Halpern (2003) ''Reasoning about Uncertainty'' MIT Press</ref>

==Dempster's rule of combination==

The problem we now face is how to combine two independent sets of probability mass assignments in specific situations. When different sources express their beliefs over the frame in terms of belief constraints, as in the case of giving hints or expressing preferences, Dempster's rule of combination is the appropriate fusion operator. This rule derives common shared belief between multiple sources and ignores ''all'' the conflicting (non-shared) belief through a normalization factor. Use of that rule in situations other than the combination of belief constraints has come under serious criticism, for example when fusing separate belief estimates from multiple sources that are to be integrated in a cumulative manner, and not as constraints. Cumulative fusion means that all probability masses from the different sources are reflected in the derived belief, so no probability mass is ignored.

Specifically, the combination (called the '''joint mass''') is calculated from the two sets of masses ''m''<sub>1</sub> and ''m''<sub>2</sub> in the following manner:

:<math>m_{1,2}(\varnothing) = 0 \, </math>

:<math>m_{1,2}(A) = (m_1 \oplus m_2) (A) = \frac {1}{1 - K} \sum_{B \cap C = A \ne \varnothing} m_1(B) m_2(C) \,\!</math>

where

:<math>K = \sum_{B \cap C = \varnothing} m_1(B) m_2(C). \, </math>

''K'' is a measure of the amount of conflict between the two mass sets.

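In code, the rule amounts to intersecting every pair of focal sets, accumulating the product masses, and discarding (then renormalising away) the mass that falls on the empty set. A minimal sketch, not a reference implementation, for mass functions stored as dictionaries keyed by frozensets:

<syntaxhighlight lang="python">
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same frame.

    Raises ValueError when the sources are in total conflict (K = 1),
    where the rule is undefined.
    """
    joint = {}
    k = 0.0  # K: total mass of pairs whose intersection is empty
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                joint[inter] = joint.get(inter, 0.0) + mb * mc
            else:
                k += mb * mc
    if k >= 1.0:
        raise ValueError("total conflict (K = 1): Dempster's rule is undefined")
    return {a: v / (1.0 - k) for a, v in joint.items()}

# Two highly conflicting sources (the film-preference numbers used below).
X, Y, Z = frozenset({"X"}), frozenset({"Y"}), frozenset({"Z"})
print(combine({X: 0.99, Y: 0.01}, {Z: 0.99, Y: 0.01}))
# {frozenset({'Y'}): 1.0} (up to floating-point rounding)
</syntaxhighlight>
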
===Effects of conflict===

The normalization factor above, 1 − ''K'', has the effect of completely ignoring conflict and attributing ''any'' mass associated with conflict to the null set. This combination rule for evidence can therefore produce counterintuitive results, as we show next.

====Example producing correct results in case of high conflict====

The following example shows how Dempster's rule produces intuitive results when applied in a preference fusion situation, even when there is high conflict.

:Suppose that two friends, Alice and Bob, want to see a film at the cinema one evening, and that there are only three films showing: X, Y and Z. Alice expresses her preference for film X with probability 0.99, and her preference for film Y with a probability of only 0.01. Bob expresses his preference for film Z with probability 0.99, and his preference for film Y with a probability of only 0.01. When the preferences are combined with Dempster's rule of combination, it turns out that their combined preference results in probability 1.0 for film Y, because it is the only film that they both agree to see.

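:The numbers behind this result: the only pair of focal sets with a non-empty intersection is (Y, Y), so

:<math>K = 0.99 \cdot 0.99 + 0.99 \cdot 0.01 + 0.01 \cdot 0.99 = 0.9999 \quad\text{and}\quad m_{1,2}(Y) = \frac{0.01 \cdot 0.01}{1 - K} = \frac{0.0001}{0.0001} = 1.</math>
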
:Dempster's rule of combination produces intuitive results even in the case of totally conflicting beliefs when interpreted in this way. Assume that Alice prefers film X with probability 1.0, and that Bob prefers film Z with probability 1.0. When trying to combine their preferences with Dempster's rule, it turns out that the combination is undefined in this case, which means that there is no solution. This would mean that they cannot agree on seeing any film together, so they do not go to the cinema together that evening. However, the semantics of interpreting preference as a probability is vague: if it refers to the probability of seeing film X tonight, then we face the [[False dilemma|fallacy of the excluded middle]]: the event that actually occurs, seeing none of the films tonight, has a probability mass of 0.

====Example producing counter-intuitive results in case of high conflict====

An example with exactly the same numerical values was introduced by Zadeh in 1979,<ref name="Zadeh79">L. Zadeh, On the validity of Dempster's rule of combination, Memo M79/24, Univ. of California, Berkeley, USA, 1979</ref><ref name="Zadeh84">L. Zadeh, Book review: A mathematical theory of evidence, The AI Magazine, Vol. 5, No. 3, pp. 81–83, 1984</ref><ref name="Zadeh86">L. Zadeh, A simple view of the Dempster–Shafer Theory of Evidence and its implication for the rule of combination, The AI Magazine, Vol. 7, No. 2, pp. 85–90, Summer 1986.</ref> to point out the counter-intuitive results generated by Dempster's rule when there is a high degree of conflict. The example goes as follows:

:Suppose that one has two equally reliable doctors, and one doctor believes the patient has either a brain tumor, with a probability (i.e. a basic belief assignment, or mass of belief) of 0.99, or meningitis, with a probability of only 0.01. A second doctor believes the patient has a concussion, with a probability of 0.99, and believes the patient suffers from meningitis with a probability of only 0.01. Applying Dempster's rule to combine these two sets of masses of belief, one finally gets ''m''(meningitis) = 1 (meningitis is diagnosed with 100 percent confidence).

Such a result goes against common sense, since both doctors agree that there is little chance that the patient has meningitis. This example has been the starting point of much research aimed at finding a solid justification for Dempster's rule and for the foundations of Dempster–Shafer theory,<ref name="Ruspini88">E. Ruspini, "The logical foundations of evidential reasoning", ''SRI Technical Note'' '''408''', December 20, 1986 (revised April 27, 1987)</ref><ref name="Wilson93">N. Wilson, "The assumptions behind Dempster's rule", in ''Proceedings of the 9th Conference on Uncertainty in Artificial Intelligence'', pages 527–534, Morgan Kaufmann Publishers, San Mateo, CA, USA, 1993</ref> or at showing the inconsistencies of this theory.<ref name="Voorbraak88">F. Voorbraak, "On the justification of Dempster's rule of combination", ''Artificial Intelligence'', Vol. '''48''', pp. 171–197, 1991</ref><ref name="Wang1994">Pei Wang, "A Defect in Dempster–Shafer Theory", in ''Proceedings of the 10th Conference on Uncertainty in Artificial Intelligence'', pages 560–566, Morgan Kaufmann Publishers, San Mateo, CA, USA, 1994</ref><ref name="Walley91">P. Walley, "Statistical Reasoning with Imprecise Probabilities", Chapman and Hall, London, pp. 278–281, 1991</ref>

====Example producing counter-intuitive results in case of low conflict====

The following example shows where Dempster's rule produces a counter-intuitive result, even when there is low conflict.

:Suppose that one doctor believes a patient has either a brain tumor, with a probability of 0.99, or meningitis, with a probability of only 0.01. A second doctor also believes the patient has a brain tumor, with a probability of 0.99, and believes the patient suffers from concussion, with a probability of only 0.01. If we calculate ''m''(brain tumor) with Dempster's rule, we obtain

::<math>m(\text{brain tumor}) = \operatorname{Bel}(\text{brain tumor}) = 1. \, </math>

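Spelling the computation out: the conflicting products are ''m''<sub>1</sub>(tumor)·''m''<sub>2</sub>(concussion), ''m''<sub>1</sub>(meningitis)·''m''<sub>2</sub>(tumor) and ''m''<sub>1</sub>(meningitis)·''m''<sub>2</sub>(concussion), so

:<math>K = 0.99 \cdot 0.01 + 0.01 \cdot 0.99 + 0.01 \cdot 0.01 = 0.0199 \quad\text{and}\quad m(\text{brain tumor}) = \frac{0.99 \cdot 0.99}{1 - K} = \frac{0.9801}{0.9801} = 1.</math>
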
This result implies ''complete support'' for the diagnosis of a brain tumour, which both doctors believed ''very likely''. The agreement arises from the low degree of conflict between the two sets of evidence formed by the two doctors' opinions.

In either case, it would be reasonable to expect that:

:<math>m(\text{brain tumor}) < 1\text{ and } \operatorname{Bel}(\text{brain tumor}) < 1,\,</math>

since the existence of non-zero belief masses for other diagnoses implies ''less than complete support'' for the brain tumour diagnosis.

==Criticism==

[[Judea Pearl]] (1988a, chapter 9;<ref name="Pearl-88">Pearl, J. (1988a), ''Probabilistic Reasoning in Intelligent Systems'', (Revised Second Printing) San Mateo, CA: Morgan Kaufmann.</ref> 1988b<ref name="Pearl-1988b">{{Cite journal | doi = 10.1016/0888-613X(88)90117-X | last1 = Pearl | first1 = J. | year = 1988b | title = On Probability Intervals | journal = International Journal of Approximate Reasoning | volume = 2 | issue = 3| pages = 211–216 }}</ref> and 1990)<ref name="Pearl-1990">{{Cite journal | doi = 10.1016/0888-613X(90)90013-R | last1 = Pearl | first1 = J. | year = 1990 | title = Reasoning with Belief Functions: An Analysis of Compatibility | journal = The International Journal of Approximate Reasoning | volume = 4 | issue = 5/6| pages = 363–389 }}</ref> has argued that it is misleading to interpret belief functions as representing either “probabilities of an event,” or “the confidence one has in the probabilities assigned to various outcomes,” or “degrees of belief (or confidence, or trust) in a proposition,” or “degree of ignorance in a situation.” Instead, belief functions represent the probability that a given proposition is ''provable'' from a set of other propositions, to which probabilities are assigned. Confusing probabilities of ''truth'' with probabilities of ''provability'' may lead to counterintuitive results in reasoning tasks such as (1) representing incomplete knowledge, (2) belief-updating and (3) evidence pooling. He further demonstrated that, if partial knowledge is encoded and updated by belief function methods, the resulting beliefs cannot serve as a basis for rational decisions.

Kłopotek and Wierzchoń<ref name="KW-98">M.A. Kłopotek, S.T. Wierzchoń: "A New Qualitative Rough-Set Approach to Modeling Belief Functions." [in:] L. Polkowski, A. Skowron eds: ''Rough Sets And Current Trends In Computing. Proc. 1st International Conference RSCTC'98'', Warsaw, June 22–26, 1998, ''Lecture Notes in Artificial Intelligence 1424'', Springer-Verlag, pp. 346–353.</ref> proposed to interpret the Dempster–Shafer theory in terms of statistics of decision tables (of the [[rough set theory]]), whereby the operator of combining evidence should be seen as relational joining of decision tables. In another interpretation M.A. Kłopotek and S.T. Wierzchoń<ref name="KW-02">M.A. Kłopotek and S.T. Wierzchoń, "Empirical Models for the Dempster–Shafer Theory". in: Srivastava, R.P., Mock, T.J., (Eds.). ''Belief Functions in Business Decisions''. Series: ''Studies in Fuzziness and Soft Computing''. Vol. '''88''' Springer-Verlag. March 2002. ISBN 3-7908-1451-2, pp. 62–112</ref> propose to view this theory as describing destructive material processing (with loss of properties), ''e.g.'' as in some semiconductor production processes. Under both interpretations reasoning in DST gives correct results, contrary to the earlier probabilistic interpretations, criticized by Pearl in the cited papers and by other researchers.

Jøsang proved that Dempster's rule of combination actually is a method for fusing belief constraints.<ref name="Jos12"/> In other situations, such as cumulative fusion of beliefs, it represents only an approximate fusion operator and generally produces incorrect results. The confusion around the validity of Dempster's rule therefore originates in a failure to correctly interpret the nature of the situations to be modeled. Dempster's rule of combination always produces correct and intuitive results when fusing belief constraints from different sources.

==See also==
*[[Imprecise probability]]
*[[Upper and lower probabilities]]
*[[Possibility theory]]
*[[Probabilistic logic]]
*[[Bayes' theorem]]
*[[Bayesian network]]
*[[G. L. S. Shackle]]
*[[Transferable belief model]]
*[[Info-gap decision theory]]
*[[Subjective logic]]
*[[Doxastic logic]]
*[[Linear belief function]]

==References==
{{Reflist|30em}}

==Further reading==
* Yang, J. B. and Xu, D. L. ''Evidential Reasoning Rule for Evidence Combination'', Artificial Intelligence, Vol. 205, pp. 1–29, 2013.
* Yager, R. R., & Liu, L. (2008). ''Classic works of the Dempster–Shafer theory of belief functions''. Studies in Fuzziness and Soft Computing, v. 219. Berlin: [[Springer Science+Business Media|Springer]]. ISBN 978-3-540-25381-5.
* [http://bfasociety.org/wiki/extensions/Wikindx/wikindx3/index.php More references]
* Joseph C. Giarratano and Gary D. Riley (2005); ''Expert Systems: principles and programming'', ed. Thomson Course Tech., ISBN 0-534-38447-1

==External links==
*[http://www.bfasociety.org/ BFAS: Belief Functions and Applications Society]

{{DEFAULTSORT:Dempster-Shafer theory}}
[[Category:Dempster–Shafer theory]]
[[Category:Probability theory]]
[[Category:Belief]]