{{multiple issues|
{{more footnotes|date=February 2011}}
{{Technical|date=September 2010}}
}}
 
[[Image:SimpleBayesNetNodes.svg|thumb|right|A simple Bayesian network. Rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet.]]
<!-- Note: to keep the citation format consistent, please use the "cite" family of templates. -->
 
A '''Bayesian network''', '''Bayes network''', '''belief network''', '''Bayes(ian) model''' or '''probabilistic directed acyclic graphical model''' is a [[Graphical model|probabilistic graphical model]] (a type of [[statistical model]]) that represents a set of [[random variables]] and their [[conditional independence|conditional dependencies]] via a [[directed acyclic graph]] (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
 
Formally, Bayesian networks are [[directed acyclic graph]]s whose nodes represent [[random variables]] in the [[Bayesian probability|Bayesian]] sense: they may be observable quantities, [[latent variable]]s, unknown parameters or hypotheses. Edges represent conditional dependencies; nodes that are not connected represent variables that are [[conditional independence|conditionally independent]] of each other. Each node is associated with a [[probability function]] that takes as input a particular set of values for the node's [[Glossary of graph theory#Directed acyclic graphs|parent]] variables and gives the probability of the variable represented by the node. For example, if the parents are <math>m</math> [[Boolean data type|Boolean variables]] then the probability function could be represented by a table of <small><math>2^m</math></small> entries, one entry for each of the <small><math>2^m</math></small> possible combinations of its parents being true or false. Similar ideas may be applied to undirected, and possibly cyclic, graphs; such are called [[Markov network]]s.
 
Efficient algorithms exist that perform [[inference]] and [[machine learning|learning]] in Bayesian networks. Bayesian networks that model sequences of variables (''e.g.'' [[speech recognition|speech signals]] or [[peptide sequence|protein sequences]]) are called [[dynamic Bayesian network]]s. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called [[influence diagrams]].
 
==Example==
[[Image:SimpleBayesNet.svg|400px|thumb|right|A simple Bayesian network.]]
 
Suppose that there are two events which could cause grass to be wet: either the sprinkler is on or it's raining. Also, suppose that the rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler is usually not turned on). Then the situation can be modeled with a Bayesian network (shown).  All three variables have two possible values, T (for true) and F (for false).
 
The [[Joint probability distribution|joint probability function]] is:
 
: <math>\mathrm P(G,S,R)=\mathrm P(G|S,R)\mathrm P(S|R)\mathrm P(R)</math>
 
where the names of the variables have been abbreviated to ''G = Grass wet (yes/no)'', ''S = Sprinkler turned on (yes/no)'', and ''R = Raining (yes/no)''.
 
The model can answer questions like "What is the probability that it is raining, given the grass is wet?" by using the [[conditional probability]] formula and summing over all [[nuisance variable]]s:
:<math>
\mathrm P(\mathit{R}=T \mid \mathit{G}=T)
 
=\frac{
  \mathrm P(\mathit{G}=T,\mathit{R}=T)
}
{
  \mathrm P(\mathit{G}=T)
}
 
=\frac{
  \sum_{\mathit{S} \in \{T, F\}}\mathrm P(\mathit{G}=T,\mathit{S},\mathit{R}=T)
}
{
  \sum_{\mathit{S}, \mathit{R} \in \{T, F\}} \mathrm P(\mathit{G}=T,\mathit{S},\mathit{R})
}
</math>
 
::<math>
=\frac{
  \mathrm P(\mathit{G}=T,\mathit{S}=T,\mathit{R}=T)_{TTT} + \mathrm P(\mathit{G}=T,\mathit{S}=F,\mathit{R}=T)_{TFT}
}
{
  \mathrm P(\mathit{G}=T,\mathit{S}=T,\mathit{R}=T)_{TTT} + \mathrm P(\mathit{G}=T,\mathit{S}=T,\mathit{R}=F)_{TTF} + \mathrm P(\mathit{G}=T,\mathit{S}=F,\mathit{R}=T)_{TFT} + \mathrm P(\mathit{G}=T,\mathit{S}=F,\mathit{R}=F)_{TFF}
}
</math>
 
::<math>
=\frac{
  (\mathrm P(G=T|S=T,R=T)\mathrm P(S=T|R=T)\mathrm P(R=T))_{TTT} + (\mathrm P(G=T|S=F,R=T)\mathrm P(S=F|R=T)\mathrm P(R=T))_{TFT}
}
{
  (\mathrm P(G=T|S=T,R=T)\mathrm P(S=T|R=T)\mathrm P(R=T))_{TTT} + (\mathrm P(G=T|S=T,R=F)\mathrm P(S=T|R=F)\mathrm P(R=F))_{TTF} + (\mathrm P(G=T|S=F,R=T)\mathrm P(S=F|R=T)\mathrm P(R=T))_{TFT} + (\mathrm P(G=T|S=F,R=F)\mathrm P(S=F|R=F)\mathrm P(R=F))_{TFF}
}
</math>
 
::<math>
=\frac{
  (0.99 \times 0.01 \times 0.2)_{TTT} + (0.8 \times 0.99 \times 0.2)_{TFT}
}
{
  (0.99 \times 0.01 \times 0.2)_{TTT} + (0.9 \times 0.4 \times 0.8)_{TTF} + (0.8 \times 0.99 \times 0.2)_{TFT} + (0.0 \times 0.6 \times 0.8)_{TFF}
}
</math>
 
::<math>
= \frac{
  0.00198_{TTT} + 0.1584_{TFT}
}
{
  0.00198_{TTT} + 0.288_{TTF} + 0.1584_{TFT} + 0.0_{TFF}
} =\frac{891}{2491} \approx 35.77\%.</math>
 
As the subscripts in the expansion above make explicit, the joint probability function is used to evaluate each term of the summations, marginalizing over <math>\mathit{S}</math> in the [[numerator]] and over <math>\mathit{S}</math> and <math>\mathit{R}</math> in the [[denominator]].
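
The conditional probability above can also be reproduced programmatically from the three conditional probability tables by enumerating the joint distribution. The following is a minimal sketch in Python using the probability values from the example; the variable and function names are illustrative only.

<syntaxhighlight lang="python">
# Conditional probability tables for the sprinkler network (values from the example above).
P_R = {True: 0.2, False: 0.8}                            # P(R)
P_S_given_R = {True: {True: 0.01, False: 0.99},          # P(S | R): outer key R, inner key S
               False: {True: 0.4, False: 0.6}}
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,  # P(G=T | S, R): key (S, R)
                (False, True): 0.8, (False, False): 0.0}

def joint(g, s, r):
    """P(G=g, S=s, R=r) = P(G=g | S, R) P(S=s | R) P(R=r)."""
    p_g = P_G_given_SR[(s, r)] if g else 1.0 - P_G_given_SR[(s, r)]
    return p_g * P_S_given_R[r][s] * P_R[r]

# P(R=T | G=T): marginalize over S in the numerator, over S and R in the denominator.
numerator = sum(joint(True, s, True) for s in (True, False))
denominator = sum(joint(True, s, r) for s in (True, False) for r in (True, False))
print(numerator / denominator)   # approximately 0.3577
</syntaxhighlight>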
 
If, on the other hand, we wish to answer an interventional question: "What is the likelihood that it would rain, given that we wet the grass?" the answer would be governed by the post-intervention joint distribution function <math>\mathrm P(S,R|do(G=T)) = P(S|R) P(R)</math> obtained by removing the factor <math>\mathrm P(G|S,R)</math> from the pre-intervention distribution. As expected, the likelihood of rain is unaffected by the action: <math>\mathrm P(R|do(G=T)) = P(R)</math>.
 
If, moreover, we wish to predict the impact of turning the sprinkler on, we have
: <math>P(R,G|do(S=T)) = P(R)P(G|R,S=T)</math>
with the term <math>P(S=T|R)</math> removed, showing that the action has an effect on the grass but not on the rain.
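
Both interventional quantities can be evaluated numerically from the same conditional probability tables as the observational query: forcing the grass wet deletes the factor <math>\mathrm P(G|S,R)</math> from the joint distribution, while forcing the sprinkler on deletes the factor <math>\mathrm P(S|R)</math>. A minimal illustrative sketch (probability values are those of the example; the names are illustrative):

<syntaxhighlight lang="python">
# Conditional probability tables for the sprinkler network (values from the example above).
P_R = {True: 0.2, False: 0.8}
P_S_given_R = {True: {True: 0.01, False: 0.99}, False: {True: 0.4, False: 0.6}}
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9, (False, True): 0.8, (False, False): 0.0}

# P(R=T | do(G=T)): drop P(G | S, R); what remains is P(S | R) P(R), so R is unaffected.
p_rain_do_wet = sum(P_S_given_R[True][s] * P_R[True] for s in (True, False))

# P(G=T | do(S=T)): drop P(S | R); sum the remaining factors over R.
p_wet_do_sprinkler = sum(P_R[r] * P_G_given_SR[(True, r)] for r in (True, False))

print(p_rain_do_wet)        # 0.2   -- equal to P(R=T): wetting the grass does not cause rain
print(p_wet_do_sprinkler)   # 0.918 -- turning the sprinkler on does affect the grass
</syntaxhighlight>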
 
These predictions may not be feasible when some of the variables are unobserved, as in most policy evaluation problems. The effect of the action <math>do(x)</math> can still be predicted, however, whenever a criterion called "back-door" is satisfied.<ref name=pearl2000/>  It states that, if a set ''Z'' of nodes can be observed that ''d''-separates (or blocks) all back-door paths from ''X'' to ''Y'', then <math>P(Y,Z|do(x)) = P(Y,Z,X=x)/P(X=x|Z)</math>. A back-door path is one that ends with an arrow into ''X''.  Sets that satisfy the back-door criterion are called "sufficient" or "admissible". For example, the set ''Z''&nbsp;=&nbsp;''R'' is admissible for predicting the effect of ''S''&nbsp;=&nbsp;''T'' on ''G'', because ''R'' ''d''-separates the (only) back-door path
''S''&nbsp;←&nbsp;''R''&nbsp;→&nbsp;''G''. However, if ''S'' is not observed, no other set ''d''-separates this path, and the effect of turning the sprinkler on (''S''&nbsp;=&nbsp;''T'') on the grass (''G'') cannot be predicted from passive observations. In that case ''P''(''G''|''do''(''S''&nbsp;=&nbsp;''T'')) is not "identified".  This reflects the fact that, lacking interventional data, we cannot determine whether the observed dependence between ''S'' and ''G'' is due to a causal connection or is spurious
(apparent dependence arising from a common cause, ''R''; see [[Simpson's paradox]]).
 
To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "''do''-calculus"<ref name="pearl2000"/><ref name="pearl-r212">{{cite conference |url=http://dl.acm.org/ft_gateway.cfm?id=2074452&ftid=1062250&dwn=1&CFID=161588115&CFTOKEN=10243006 |title=A Probabilistic Calculus of Actions |first=J. |last=Pearl |year=1994 |editor1-first=R. |editor1-last=Lopez de Mantaras |editor2-first=D. |editor2-last=Poole |booktitle=UAI'94 Proceedings of the Tenth international conference on Uncertainty in artificial intelligence |publisher=Morgan Kaufmann |location=San Mateo CA |pages=454–462 |isbn=1-55860-332-8 }}</ref>
and test whether all ''do'' terms can be removed from the
expression of that relation, thus confirming that the desired quantity is estimable from frequency data.<ref>I. Shpitser, J. Pearl, "Identification of Conditional Interventional Distributions"  In R. Dechter and T.S. Richardson (Eds.), ''Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence'', 437–444, Corvallis, OR: AUAI Press, 2006.</ref>
 
Using a Bayesian network can save considerable amounts of memory when the dependencies in the joint distribution are sparse. For example, a naive way of storing the joint distribution of 10 two-valued variables as a table requires storage space for <math>2^{10} = 1024</math> values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation needs to store at most <math>10\cdot2^3 = 80</math> values.
 
One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than a complete joint distribution.
 
==Inference and learning==
There are three main tasks associated with Bayesian networks: inferring unobserved variables, learning parameters, and learning structure.
 
===Inferring unobserved variables===
Because a Bayesian network is a complete model for the variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to find out updated knowledge of the state of a subset of variables when other variables (the ''evidence'' variables) are observed. This process of computing the ''posterior'' distribution of variables given evidence is called probabilistic inference. The posterior gives a universal [[sufficient statistic]] for detection applications, when one wants to choose values for the variable subset which minimize some expected loss function, for instance the probability of decision error.  A Bayesian network can thus be considered a mechanism for automatically applying [[Bayes' theorem]] to complex problems.
 
The most common exact inference methods are: [[variable elimination]], which eliminates (by integration or summation) the non-observed non-query variables one by one by distributing the sum over the product; [[clique tree propagation]], which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and [[recursive conditioning]] and [[AND/OR search]], which allow for a [[space-time tradeoff]] and match the efficiency of variable elimination when enough space is used.  All of these methods have complexity that is exponential in the network's [[treewidth]]. The most common [[approximate inference]] algorithms are [[importance sampling]], stochastic [[Markov chain Monte Carlo|MCMC]] simulation, [[mini-bucket elimination]], [[loopy belief propagation]], [[generalized belief propagation]], and [[variational Bayes|variational methods]].
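
As a concrete illustration, variable elimination on the sprinkler example sums out the non-query variable <math>S</math> first, producing an intermediate factor over <math>R</math> that is then combined with <math>\mathrm P(R)</math> and normalized. A minimal sketch using the example's probability values (names are illustrative):

<syntaxhighlight lang="python">
# Conditional probability tables from the sprinkler example.
P_R = {True: 0.2, False: 0.8}
P_S_given_R = {True: {True: 0.01, False: 0.99}, False: {True: 0.4, False: 0.6}}
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9, (False, True): 0.8, (False, False): 0.0}

# Step 1 -- eliminate S: tau(r) = sum_s P(G=T | s, r) P(S=s | r)
tau = {r: sum(P_G_given_SR[(s, r)] * P_S_given_R[r][s] for s in (True, False))
       for r in (True, False)}

# Step 2 -- multiply in P(R) and normalize over R to obtain P(R | G=T).
unnormalized = {r: tau[r] * P_R[r] for r in (True, False)}
z = sum(unnormalized.values())
print({r: p / z for r, p in unnormalized.items()})   # {True: ~0.3577, False: ~0.6423}
</syntaxhighlight>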
 
===Parameter learning===
In order to fully specify the Bayesian network and thus fully represent the [[joint probability distribution]], it is necessary to specify for each node ''X'' the probability distribution for ''X'' conditional upon ''X'''s parents. The distribution of ''X'' conditional upon its parents may have any form. It is common to work with discrete or [[normal distribution|Gaussian distributions]] since that simplifies calculations. Sometimes only constraints on a distribution are known; one can then use the [[principle of maximum entropy]] to determine a single distribution, the one with the greatest [[information entropy|entropy]] given the constraints.  (Analogously, in the specific context of a [[dynamic Bayesian network]], one commonly specifies the conditional distribution for the hidden state's temporal evolution to maximize the [[entropy rate]] of the implied stochastic process.)
 
Often these conditional distributions include parameters which are unknown and must be estimated from data, sometimes using the [[maximum likelihood]] approach.  Direct maximization of the likelihood (or of the [[posterior probability]]) is often complex when there are unobserved variables.  A classical approach to this problem is the [[expectation-maximization algorithm]] which alternates computing expected values of the unobserved variables conditional on observed data, with maximizing the complete likelihood (or posterior) assuming that previously computed expected values are correct.  Under mild regularity conditions this process converges on maximum likelihood (or maximum posterior) values for parameters.
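
When every variable is observed in the training data, maximizing the likelihood decomposes node by node, and the estimate of each conditional probability table reduces to relative frequency counts. The following minimal sketch estimates <math>P(S|R)</math> for one binary node with one binary parent; the simulated dataset and all names are purely illustrative:

<syntaxhighlight lang="python">
import numpy as np

# Toy complete dataset: R (parent) and S (child) observed jointly in every row.
rng = np.random.default_rng(0)
R = rng.random(10_000) < 0.2
S = np.where(R, rng.random(10_000) < 0.01, rng.random(10_000) < 0.4)

# Maximum likelihood estimate of P(S=T | R=r) is the relative frequency within each parent value.
for r in (True, False):
    print(r, S[R == r].mean())   # estimates of P(S=T | R=T) and P(S=T | R=F)
</syntaxhighlight>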
 
A more fully Bayesian approach to parameters is to treat parameters as additional unobserved variables and to compute a full posterior distribution over all nodes conditional upon observed data, then to integrate out the parameters. This approach can be expensive and lead to large dimension models, so in practice classical parameter-setting approaches are more common.
 
===Structure learning===
 
In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications the task of defining the network is too complex for humans. In this case the network structure and the parameters of the local distributions must be learned from data.
 
Automatically learning the graph structure of a Bayesian network is a challenge pursued within [[machine learning]]. The basic idea goes back to a recovery algorithm
developed by Rebane and Pearl (1987)<ref>Rebane, G. and Pearl, J., "The Recovery of Causal Poly-trees from Statistical Data," ''Proceedings, 3rd Workshop on Uncertainty in AI,'' (Seattle, WA) pages 222–228, 1987</ref> and rests
on the distinction between the three possible types of
adjacent triplets allowed in a directed acyclic graph (DAG):
# <math>X \rightarrow Y \rightarrow Z</math>
# <math>X \leftarrow Y \rightarrow Z</math>
# <math>X \rightarrow Y \leftarrow Z</math>
Type 1 and type 2 represent the same dependencies (<math>X</math> and <math>Z</math> are independent given <math>Y</math>) and are, therefore, indistinguishable. Type 3, however, can be uniquely identified, since <math>X</math> and <math>Z</math> are marginally independent and all other pairs are dependent. Thus, while the ''skeletons'' (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when <math>X</math> and <math>Z</math> have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independencies observed.<ref name=pearl2000>{{Cite book
| first = Judea
| last = Pearl
| authorlink = Judea Pearl
| title = Causality:  Models, Reasoning, and Inference
| publisher = [[Cambridge University Press]]
| year = 2000
| isbn = 0-521-77362-8
}}</ref><ref>{{cite journal |first1=P. |last1=Spirtes |first2=C. |last2=Glymour |title=An algorithm for fast recovery of sparse causal graphs |journal=Social Science Computer Review |volume=9 |issue=1 |pages=62–72 |year=1991 |doi=10.1177/089443939100900106 |url=http://repository.cmu.edu/cgi/viewcontent.cgi?article=1316&context=philosophy |format=PDF}}</ref><ref>{{cite book |first1=Peter |last1=Spirtes |first2=Clark N. |last2=Glymour |first3=Richard |last3=Scheines |title=Causation, Prediction, and Search |url=http://books.google.com/books?id=VkawQgAACAAJ |year=1993 |publisher=Springer-Verlag |isbn=978-0-387-97979-3 |edition=1st}}</ref><ref>{{cite conference |url= |title=Equivalence and synthesis of causal models |first1=Thomas |last1=Verma |first2=Judea |last2=Pearl |year=1991 |editor1-first=P. |editor1-last=Bonissone |editor2-first=M. |editor2-last=Henrion |editor3-first=L.N. |editor3-last=Kanal |editor4-first=J.F. |editor4-last=Lemmer |booktitle=UAI '90 Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence |publisher=Elsevier |pages=255–270 |isbn=0-444-89264-8 }}</ref>
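
The distinguishing property of the collider (type 3) can be checked directly on data: in <math>X \rightarrow Y \leftarrow Z</math>, the variables <math>X</math> and <math>Z</math> are marginally independent but become dependent once <math>Y</math> is conditioned on, while the chain and the fork show the opposite pattern. The following illustrative sketch simulates jointly Gaussian data and uses correlation and partial correlation as simple proxies for (in)dependence; the data-generating process is hypothetical:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = x + z + 0.5 * rng.standard_normal(n)     # collider: X -> Y <- Z

def partial_corr(a, b, c):
    """Correlation of a and b after linearly controlling for c."""
    r_ab = np.corrcoef(a, b)[0, 1]
    r_ac = np.corrcoef(a, c)[0, 1]
    r_bc = np.corrcoef(b, c)[0, 1]
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

print(np.corrcoef(x, z)[0, 1])   # ~0: X and Z are marginally independent
print(partial_corr(x, z, y))     # clearly non-zero: X and Z become dependent given the collider Y
</syntaxhighlight>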
 
An alternative method of structural learning uses optimization-based search. It requires a [[scoring function]] and a [[search strategy]]. A common scoring function is the [[posterior probability]] of the structure given the training data. The time requirement of an [[exhaustive search]] returning a structure that maximizes the score is [[Tetration|superexponential]] in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like [[Markov chain Monte Carlo]] can avoid getting trapped in [[maxima and minima|local optima]]. Friedman et al.<ref>{{cite doi|10.1023/A:1007465528199}}</ref><ref>{{cite doi|10.1089/106652700750050961}}</ref> discuss using [[mutual information]] between variables to find a structure that maximizes it; they do so by restricting the parent candidate set to ''k'' nodes and exhaustively searching therein.
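
The posterior-probability (or, as in the following sketch, [[Bayesian information criterion|BIC]]) score of a candidate structure decomposes over nodes, so each node's contribution can be computed from frequency counts of the node and its parents, and a local search then compares the scores of neighbouring structures. The sketch below scores two candidate structures over simulated binary data; the dataset, structures, and names are illustrative assumptions rather than the algorithm of any particular author:

<syntaxhighlight lang="python">
import numpy as np
from itertools import product

def bic_node(data, child, parents):
    """BIC contribution of one binary node given its parent set (data: dict of 0/1 arrays)."""
    n = len(data[child])
    loglik, n_params = 0.0, 0
    for vals in product((0, 1), repeat=len(parents)):
        mask = np.ones(n, dtype=bool)
        for p, v in zip(parents, vals):
            mask &= (data[p] == v)
        m = mask.sum()
        if m == 0:
            continue
        k = (data[child][mask] == 1).sum()
        for c in (k, m - k):                 # counts of child = 1 and child = 0
            if c > 0:
                loglik += c * np.log(c / m)
        n_params += 1                        # one free parameter per parent configuration
    return loglik - 0.5 * n_params * np.log(n)

def bic(data, dag):
    """Score of a whole structure; dag maps each node to its list of parents."""
    return sum(bic_node(data, child, parents) for child, parents in dag.items())

# Compare two candidate structures on simulated data in which R influences S.
rng = np.random.default_rng(2)
R = (rng.random(5000) < 0.2).astype(int)
S = np.where(R == 1, rng.random(5000) < 0.01, rng.random(5000) < 0.4).astype(int)
data = {"R": R, "S": S}
print(bic(data, {"R": [], "S": ["R"]}))   # structure with the edge R -> S
print(bic(data, {"R": [], "S": []}))      # structure with no edge; scores lower on this data
</syntaxhighlight>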
 
Another method consists of focusing on the sub-class of decomposable models, for which the [[Maximum likelihood estimate|MLE]] has a closed form. It is then possible to discover a consistent structure for hundreds of variables.<ref name="Petitjean">{{cite conference |url=http://www.tiny-clues.eu/Research/Petitjean2013-ICDM.pdf |title= Scaling log-linear analysis to high-dimensional data |last1=Petitjean |first1=F. |last2=Webb |first2=G.I. |last3=Nicholson |first3=A.E. |year=2013 |publisher=IEEE |conference=International Conference on Data Mining |location=Dallas, TX, USA }}</ref>
 
==Statistical introduction==
Given data <math>x\,\!</math> and parameter <math>\theta</math>, a simple [[Bayesian statistics|Bayesian analysis]] starts with a [[prior probability]] (''prior'') <math>p(\theta)</math> and [[likelihood function|likelihood]] <math>p(x|\theta)</math> to compute a [[posterior probability]] <math>p(\theta|x) \propto p(x|\theta)p(\theta)</math>.
 
Often the prior on <math>\theta</math> depends in turn on other parameters <math>\varphi</math> that are not mentioned in the likelihood. So, the prior <math>p(\theta)</math> must be replaced by a likelihood <math>p(\theta|\varphi)</math>, and a prior <math>p(\varphi)</math> on the newly introduced parameters <math>\varphi</math> is required, resulting in a posterior probability
 
:<math>p(\theta,\varphi|x) \propto p(x|\theta)p(\theta|\varphi)p(\varphi).</math>
 
This is the simplest example of a ''hierarchical Bayes model''.{{clarify|date=October 2009|reason=what makes it hierarchical? Are we talking [[hierarchy (mathematics)]] or [[hierarchical structure]]? Link to whichever one it is.}}
 
The process may be repeated; for example, the parameters <math>\varphi</math> may depend in turn on additional parameters <math>\psi\,\!</math>, which will require their own prior. Eventually the process must terminate, with priors that do not depend on any other unmentioned parameters.
 
===Introductory examples===
{{Expand section|date=March 2009|reason=More examples needed}}
 
Suppose we have measured the quantities <math>x_1,\dots,x_n\,\!</math>, each with [[Normal distribution|normally distributed]] errors of known [[standard deviation]] <math>\sigma\,\!</math>,
 
:<math>
x_i \sim N(\theta_i, \sigma^2)
</math>
 
Suppose we are interested in estimating the <math>\theta_i</math>. One approach is to estimate the <math>\theta_i</math> by [[maximum likelihood]]; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply
 
:<math>
\theta_i = x_i
</math>
 
However, if the quantities are related, so that for example we may think that the individual <math>\theta_i</math> have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g.,
 
:<math>
x_i \sim N(\theta_i,\sigma^2),
</math>
:<math>
\theta_i\sim N(\varphi, \tau^2)
</math>
 
with [[improper prior]]s <math>\varphi \sim \text{flat}</math>, <math>\tau \sim \text{flat} \in (0,\infty)</math>. When <math>n\ge 3</math>, this is an [[identified model]] (i.e. there exists a unique solution for the model's parameters), and the posterior distributions of the individual <math>\theta_i</math> will tend to move, or ''[[Shrinkage estimator|shrink]]'', away from the maximum likelihood estimates towards their common mean. This ''shrinkage'' is a typical behavior in hierarchical Bayes models.
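
With <math>\sigma</math> and <math>\tau</math> treated as known and the group-level mean replaced by a plug-in (empirical-Bayes-style) estimate, the posterior mean of each <math>\theta_i</math> is a precision-weighted average of the observation <math>x_i</math> and the common mean, which makes the shrinkage visible numerically. A minimal sketch under those simplifying assumptions (the data values are hypothetical):

<syntaxhighlight lang="python">
import numpy as np

x = np.array([2.0, 0.5, -1.0, 3.5, 1.0])   # hypothetical measurements x_i
sigma, tau = 1.0, 0.5                       # measurement and group-level standard deviations, assumed known

phi_hat = x.mean()                          # plug-in estimate of the group mean
w = (1 / sigma**2) / (1 / sigma**2 + 1 / tau**2)
theta_post = w * x + (1 - w) * phi_hat      # posterior means of theta_i, conditional on phi = phi_hat

print(x)            # maximum likelihood estimates: theta_i = x_i
print(theta_post)   # shrunk toward the common mean phi_hat
</syntaxhighlight>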
 
===Restrictions on priors===
Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable <math>\tau\,\!</math> in the example. The usual priors such as the [[Jeffreys prior]] often do not work, because the posterior distribution will be improper (not normalizable), and estimates made by minimizing the [[Loss function#Expected loss|expected loss]] will be [[admissible decision rule|inadmissible]].
 
==Definitions and concepts==
{{See also|Glossary of graph theory#Directed acyclic graphs}}
There are several equivalent definitions of a Bayesian network. For all the following, let ''G'' = (''V'',''E'') be a [[directed acyclic graph]] (or DAG), and let ''X'' = (''X''<sub>''v''</sub>)<sub>''v'' ∈ ''V''</sub> be a set of [[random variable]]s indexed by ''V''.
 
===Factorization definition===
''X'' is a Bayesian network with respect to ''G'' if its joint [[probability density function]] (with respect to a [[product measure]]) can be written as a product of the individual density functions, conditional on their parent variables:{{sfn|Russell|Norvig|2003|p=496}}
 
<math> p (x) = \prod_{v \in V} p \left(x_v \,\big|\,  x_{\operatorname{pa}(v)} \right) </math>
 
where pa(''v'') is the set of parents of ''v'' (i.e. those vertices pointing directly to ''v'' via a single edge).
 
For any set of random variables, the probability of any member of a [[joint distribution]] can be calculated from conditional probabilities using the [[chain rule (probability)|chain rule]] (given a [[topological ordering]] of ''X'') as follows:{{sfn|Russell|Norvig|2003|p=496}}
 
<math>\mathrm  P(X_1=x_1, \ldots, X_n=x_n) = \prod_{v=1}^n  \mathrm P \left(X_v=x_v \mid X_{v+1}=x_{v+1}, \ldots, X_n=x_n \right)</math>
 
Compare this with the definition above, which can be written as:
 
<math>\mathrm  P(X_1=x_1, \ldots, X_n=x_n) = \prod_{v=1}^n  \mathrm P (X_v=x_v \mid X_j=x_j </math> for each <math>X_j\,</math> which is a parent of <math> X_v\, )</math>
 
The difference between the two expressions is the [[conditional independence]] of the variables from any of their non-descendants, given the values of their parent variables.
 
===Local Markov property===
''X'' is a Bayesian network with respect to ''G'' if it satisfies the ''local Markov property'': each variable is [[Conditional independence|conditionally independent]] of its non-descendants given its parent variables:{{sfn|Russell|Norvig|2003|p=499}}
 
:<math> X_v \perp\!\!\!\perp X_{V \setminus \operatorname{de}(v)} \,|\, X_{\operatorname{pa}(v)} \quad\text{for all }v \in V</math>
 
where de(''v'') is the set of descendants of ''v''.
 
This can also be expressed in terms similar to the first definition, as
 
:<math>\mathrm  P(X_v=x_v \mid  X_i=x_i </math> for each <math>X_i\,</math> which is not a descendant of <math> X_v\, ) = P(X_v=x_v \mid X_j=x_j </math> for each <math>X_j\,</math> which is a parent of <math> X_v\, )</math>
 
Note that the set of parents is a subset of the set of non-descendants because the graph is [[Cycle (graph theory)|acyclic]].
 
===Developing Bayesian networks===
To develop a Bayesian network, we often first develop a DAG ''G'' such that we believe ''X'' satisfies the local Markov property with respect to ''G''. Sometimes this is done by creating a causal DAG. We then ascertain the conditional probability distributions of each variable given its parents in ''G''. In many cases, in particular in the case where the variables are discrete, if we define the joint distribution of ''X'' to be the product of these conditional distributions, then ''X'' is a Bayesian network with respect to ''G''.<ref>{{cite book |first=Richard E. |last=Neapolitan |title=Learning Bayesian networks |url=http://books.google.com/books?id=OlMZAQAAIAAJ |year=2004 |publisher=Prentice Hall |isbn=978-0-13-012534-7 }}</ref>
 
===Markov blanket===
The [[Markov blanket]] of a node is the set of nodes consisting of its parents, its children, and any other parents of its children. This set renders it independent of the rest of the network; the joint distribution of the variables in the Markov blanket of a node is sufficient knowledge for calculating the distribution of the node.  ''X'' is a Bayesian network with respect to ''G'' if every node is conditionally independent of all other nodes in the network, given its [[Markov blanket]].{{sfn|Russell|Norvig|2003|p=499}}
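
Computing a Markov blanket from the graph alone is straightforward: collect the node's parents, its children, and the other parents of those children. A minimal sketch, representing the DAG as a mapping from each node to the list of its parents (the example graph is the sprinkler network above):

<syntaxhighlight lang="python">
def markov_blanket(dag, node):
    """dag maps each node to the list of its parents."""
    parents = set(dag[node])
    children = {v for v, ps in dag.items() if node in ps}
    spouses = {p for c in children for p in dag[c]} - {node}
    return parents | children | spouses

# Sprinkler network: Rain -> Sprinkler, Rain -> GrassWet, Sprinkler -> GrassWet.
dag = {"Rain": [], "Sprinkler": ["Rain"], "GrassWet": ["Sprinkler", "Rain"]}
print(markov_blanket(dag, "Sprinkler"))   # {'Rain', 'GrassWet'}
</syntaxhighlight>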
 
====''d''-separation====
This definition can be made more general by defining the "''d''"-separation of two nodes, where ''d'' stands for directional.<ref>{{cite journal|last=Geiger |first=Dan |last2=Verma |first2=Thomas |last3=Pearl |first3=Judea |title=Identifying independence in Bayesian Networks |journal=Networks |year=1990 |volume=20 |pages=507–534 |url=http://ftp.cs.ucla.edu/pub/stat_ser/r116.pdf |format=PDF}}</ref><ref>{{citation |author=Richard Scheines|title=D-separation|url=http://www.andrew.cmu.edu/user/scheines/tutor/d-sep.html}}</ref> Let ''P'' be a trail (that is, a collection of edges which is like a path, but each of whose edges may have any direction) from node ''u'' to ''v''. Then ''P'' is said to be ''d''-separated by a set of nodes ''Z'' if and only if (at least) one of the following holds:
# ''P'' contains a ''chain'', ''x'' ← ''m'' ← ''y'', such that the middle node ''m'' is in ''Z'',
# ''P'' contains a ''fork'', ''x'' ← ''m'' → ''y'', such that the middle node ''m'' is in ''Z'', or
# ''P'' contains an ''inverted fork'' (or ''collider''), ''x'' → ''m'' ← ''y'', such that the middle node ''m'' is '''not''' in ''Z'' and no descendant of ''m'' is in ''Z''.
Thus ''u'' and ''v'' are said to be ''d''-separated by ''Z'' if all trails between them are ''d''-separated. If ''u'' and ''v'' are not ''d''-separated, they are called ''d''-connected.
 
''X'' is a Bayesian network with respect to ''G'' if, for any two nodes ''u'', ''v'':
:<math>X_u \perp\!\!\!\perp X_v \, | \, X_Z</math>
where ''Z'' is a set which ''d''-separates ''u'' and ''v''.  (The [[Markov blanket]] is the minimal set of nodes which ''d''-separates node ''v'' from all other nodes.)
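
''d''-separation can also be tested algorithmically by a classical reduction: restrict the graph to the ancestors of the nodes in question, moralize it (connect the parents of every common child and drop edge directions), delete the conditioning set ''Z'', and check whether ''u'' and ''v'' are still connected. A minimal sketch of this reduction, with the DAG again given as a parent map (node names are illustrative):

<syntaxhighlight lang="python">
def d_separated(dag, u, v, z):
    """True if u and v are d-separated given the set z; dag maps each node to its list of parents."""
    # 1. Keep only the ancestral set of {u, v} union z.
    needed, stack = set(), [u, v, *z]
    while stack:
        n = stack.pop()
        if n not in needed:
            needed.add(n)
            stack.extend(dag[n])
    # 2. Moralize: undirected edges between each child and its parents, and between co-parents.
    edges = set()
    for child in needed:
        parents = [p for p in dag[child] if p in needed]
        for p in parents:
            edges.add(frozenset((child, p)))
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                edges.add(frozenset((p, q)))
    # 3. Remove the conditioning set and test whether u can still reach v.
    nodes = needed - set(z)
    adjacency = {n: {m for e in edges if n in e for m in e if m != n and m in nodes} for n in nodes}
    seen, stack = {u}, [u]
    while stack:
        for m in adjacency[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return v not in seen

# Sprinkler network (S <- R -> G, S -> G) and a simple chain A -> B -> C.
sprinkler = {"R": [], "S": ["R"], "G": ["S", "R"]}
chain = {"A": [], "B": ["A"], "C": ["B"]}
print(d_separated(sprinkler, "R", "G", {"S"}))   # False: the direct edge R -> G remains
print(d_separated(chain, "A", "C", {"B"}))       # True: conditioning on the middle of a chain blocks it
</syntaxhighlight>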
 
===Hierarchical models===
 
The term ''hierarchical model'' is sometimes used to denote a particular type of Bayesian network, but it has no formal definition.  Sometimes the term is reserved for models with three or more levels of random variables; other times, it is reserved for models with [[latent variable]]s.  In general, however, any moderately complex Bayesian network is usually termed "hierarchical".
 
===Causal networks===
Although Bayesian networks are often used to represent [[causality|causal]] relationships, this need not be the case: a directed edge from ''u'' to ''v'' does not require that ''X<sub>v</sub>'' is causally dependent on ''X<sub>u</sub>''. This is demonstrated by the fact that Bayesian networks on the graphs:
 
:<math> a \longrightarrow b \longrightarrow c \qquad \text{and} \qquad a \longleftarrow b \longleftarrow c </math>
 
are equivalent: that is, they impose exactly the same conditional independence requirements.
 
A [[causal network]] is a Bayesian network with an explicit requirement that the relationships be causal. The additional semantics of the causal networks specify that if a node ''X'' is actively caused to be in a given state ''x'' (an action written as ''do''(''X''=''x'')), then the probability density function changes to the one of the network obtained by cutting the links from ''X'''s parents to ''X'', and setting ''X'' to the caused value ''x''.<ref name=pearl2000/> Using these semantics, one can predict the impact of external interventions from data obtained prior to intervention.
 
==Applications==
Bayesian networks are used for [[mathematical model|modelling]] knowledge in [[computational biology]] and [[bioinformatics]]  ([[gene regulatory network]]s, [[protein structure]], [[gene expression]] analysis,<ref name="geneexpr">{{cite journal | author= N. Friedman, M. Linial, I. Nachman, D. Pe'er |title= Using Bayesian Networks to Analyze Expression Data | journal= [[Journal of Computational Biology]]|publisher= [[Mary Ann Liebert, Inc.]]| location = [[Larchmont, New York]] |issn= 1066-5277| volume= 7 | issue = 3/4 | pages= 601–620|date=August 2000|doi= 10.1089/106652700750050961 | pmid= 11108481}}</ref> learning epistasis from GWAS data sets <ref>{{cite journal |author=Jiang, X.; Neapolitan, R.E.; Barmada, M.M.; Visweswaran, S. |title=Learning Genetic Epistasis using Bayesian Network Scoring Criteria |journal=BMC Bioinformatics |volume=12 |issue= |pages=89 |year=2011 |pmid=21453508 |pmc=3080825 |url=http://www.biomedcentral.com/1471-2105/12/89 |doi=10.1186/1471-2105-12-89}}</ref>) [[medicine]],<ref name="Uebersax2004">{{Cite book|author = J. Uebersax|year = 2004|title = Genetic Counseling and Cancer Risk Modeling:  An Application of Bayes Nets|publisher = Ravenpack International|location = Marbella, Spain|url = http://www.john-uebersax.com/stat/bayes_net_breast_cancer.doc}}</ref> [[biomonitoring]],<ref>{{cite journal |author=Jiang X, Cooper GF. |title=A Bayesian spatio-temporal method for disease outbreak detection |journal=J Am Med Inform Assoc |volume=17 |issue=4 |pages=462–71 |date=July–August 2010 |pmid=20595315 |pmc=2995651 |url=http://jamia.bmj.com/cgi/pmidlookup?view=long&pmid=20595315 |doi=10.1136/jamia.2009.000356}}</ref> [[document classification]], [[information retrieval]],<ref name="infpro">{{ cite journal | author=Luis M. de Campos, Juan M. Fernández-Luna and Juan F. Huete|title=Bayesian networks and information retrieval: an introduction to the special issue | journal=Information Processing & Management|publisher=Elsevier| isbn=0-471-14182-8| volume=40 | pages=727–733| year=2004| doi=10.1016/j.ipm.2004.03.001 | issue=5}}</ref> [[semantic search]],<ref>Christos L. Koumenides and Nigel R. Shadbolt. 2012. [http://eprints.soton.ac.uk/342220 Combining link and content-based information in a Bayesian inference model for entity search.] In Proceedings of the 1st Joint International Workshop on Entity-Oriented and Semantic Search (JIWES '12). ACM, New York, NY, USA, , Article 3 , 6 pages. [http://doi.acm.org/10.1145/2379307.2379310  DOI=10.1145/2379307.2379310]</ref> [[image processing]], [[data fusion]], [[decision support system]]s,<ref name="Diez1997">{{Cite journal | author = F.J. Díez, J. Mira, E. Iturralde and S. Zubillaga | title = DIAVAL, a Bayesian expert system for echocardiography | journal = Artificial Intelligence in Medicine | volume = 10 | publisher=Elsevier | pages=59–73| year=1997 | pmid = 9177816 | url=http://www.cisiad.uned.es/papers/diaval.php | issue=1}}</ref> [[engineering]], gaming  and [[law]].<ref name="davis">{{cite journal | author=G. A. Davis | title=Bayesian reconstruction of traffic accidents | journal=Law, Probability and Risk | year=2003 | volume=2 | pages=69–89  | doi=10.1093/lpr/2.2.69 | issue=2}}</ref><ref name=kadane>{{ cite book | author=J. B. Kadane and D. A. Schum | title=A Probabilistic Analysis of the Sacco and Vanzetti Evidence|location=New York|publisher=Wiley|isbn=0-471-14182-8 | year=1996 }}</ref><ref>{{Cite book|author = O. Pourret, P. Naim and B. 
Marcot|year = 2008|title = Bayesian Networks: A Practical Guide to Applications|publisher = Wiley|location = Chichester, UK|isbn = 978-0-470-06030-8|url = http://www.wiley.com/go/pourret}}</ref> There are texts applying Bayesian networks to bioinformatics <ref>{{cite book|last=Neapolitan|first=Richard|title=Probabilistic Methods for Bioinformatics|year=2009|publisher=Morgan Kaufmann|location=Burlington, MA|isbn=9780123704764|pages=406|url=https://www.elsevier.com/books/probabilistic-methods-for-bioinformatics/neapolitan/978-0-12-370476-4}}</ref> and financial and marketing informatics.<ref>{{cite book|author = Neapolitan, Richard, and Xia Jiang|title=Probabilistic Methods for Financial and Marketing Informatics|year=2007|publisher=Morgan Kaufmann|location=Burlingon, MA|isbn=0123704774|pages=432|url=http://store.elsevier.com/Probabilistic-Methods-for-Financial-and-Marketing-Informatics/Richard-E_-Neapolitan/isbn-9780123704771/}}</ref>
 
===Software===
* [[WinBUGS]]
* [[OpenBUGS]] ([http://www.openbugs.info/w/FrontPage website]), further (open source) development of WinBUGS.
* [http://www.openmarkov.org/ OpenMarkov], open source software and API implemented in Java
* [http://melodi.ee.washington.edu/gmtk Graphical Models Toolkit] (GMTK) — GMTK is an open source, publicly available toolkit for rapidly prototyping statistical models using dynamic graphical models (DGMs) and dynamic Bayesian networks (DBNs). GMTK can be used for applications and research in speech and language processing, bioinformatics, activity recognition, and any time series application.
* [[Just another Gibbs sampler]] (JAGS) ([http://www-fis.iarc.fr/~martyn/software/jags/ website])
* Stan ([http://mc-stan.org/ website]) — Stan is an open-source package for obtaining Bayesian inference using the No-U-Turn sampler, a variant of Hamiltonian Monte Carlo. It’s somewhat like BUGS, but with a different language for expressing models and a different sampler for sampling from their posteriors. RStan is the R interface to Stan.
* [http://pymc-devs.github.io/pymc/ PyMC] — PyMC is a python module that implements Bayesian statistical models and fitting algorithms, including Markov chain Monte Carlo. Its flexibility and extensibility make it applicable to a large suite of problems. Along with core sampling functionality, PyMC includes methods for summarizing output, plotting, goodness-of-fit and convergence diagnostics.
* GeNIe & SMILE ([http://genie.sis.pitt.edu/ website]) — SMILE is a C++ library for Bayesian networks and influence diagrams, and GeNIe is a graphical user interface for it
* SamIam ([http://reasoning.cs.ucla.edu/samiam/ website]), a Java-based system with GUI and Java API
* [http://www.BayesServer.com/ Bayes Server] - User Interface and API for Bayesian networks, includes support for time series and sequences
* Belief and Decision Networks on [http://www.aispace.org/bayes/index.shtml AIspace]
* [http://library.bayesia.com/display/HOME/The+BayesiaLab+Library/ BayesiaLab] by Bayesia
* [http://www.hugin.com/ Hugin]
* [http://www.norsys.com/netica.html Netica] by Norsys
* [http://www.aparasw.com/index.php/en dVelox] by Apara Software
* [http://www.inatas.com System Modeler] by Inatas AB
 
==History==
The term "Bayesian networks" was coined by [[Judea Pearl]] in 1985 to emphasize three aspects:<ref>{{cite conference |last=Pearl |first=J. |authorlink=Judea Pearl |year=1985 |title=Bayesian Networks: A Model of Self-Activated Memory for Evidential Reasoning |conference=Proceedings of the 7th Conference of the Cognitive Science Society, University of California, Irvine, CA
|pages=329&ndash;334 |url=http://ftp.cs.ucla.edu/tech-report/198_-reports/850017.pdf|accessdate=2009-05-01 |format=UCLA Technical Report CSD-850017}}</ref>
#The often subjective nature of the input information.
#The reliance on Bayes' conditioning as the basis for updating information.
#The distinction between causal and evidential modes of reasoning, which underscores [[Thomas Bayes]]' posthumously published paper of 1763.<ref>{{Cite journal |last=Bayes |first=T. |authorlink=Thomas Bayes |year=1763 |title = [[An Essay towards solving a Problem in the Doctrine of Chances]] |journal = [[Philosophical Transactions of the Royal Society]] |volume = 53
|pages = 370–418 |doi = 10.1098/rstl.1763.0053 |last2=Price |first2=Mr.}}</ref>
 
In the late 1980s Judea Pearl's text ''Probabilistic Reasoning in Intelligent Systems''<ref>{{cite book |last=Pearl |first=J. |title=Probabilistic Reasoning in Intelligent Systems |publisher=Morgan Kaufmann |location=San Francisco CA |year=1988 |isbn=1558604790 |url=http://books.google.com/books?id=AvNID7LyMusC}}</ref> and Richard E. Neapolitan's text ''Probabilistic Reasoning in Expert Systems''<ref>{{cite book |first=Richard E. |last=Neapolitan |title=Probabilistic reasoning in expert systems: theory and algorithms |url=http://www.amazon.com/Probabilistic-Reasoning-Expert-Systems-Algorithms/dp/1477452540/ref=sr_1_3?s=books&ie=UTF8&qid=1389578837&sr=1-3&keywords=probabilistic+reasoning+in+expert+systems |year=1989 |publisher=Wiley |isbn=978-0-471-61840-9}}</ref> summarized the properties of Bayesian networks and established them as a field of study.
 
Informal variants of such networks were first used by [[legal scholar]] [[John Henry Wigmore]], in the form of [[Wigmore chart]]s, to analyse [[trial (law)|trial]] [[evidence (law)|evidence]] in 1913.<ref name=kadane/>{{Rp|66–76|date=May 2009}} Another variant, called [[path analysis (statistics)|path diagrams]], was developed by the geneticist [[Sewall Wright]]<ref>{{cite journal
|last=Wright |first=S. |authorlink=Sewall Wright |year=1921 |title=Correlation and Causation
|journal=Journal of Agricultural Research |volume=20 |issue=7 |pages=557–585 |url=http://www.ssc.wisc.edu/soc/class/soc952/Wright/Wright_Correlation%20and%20Causation.pdf |format=PDF
}}</ref> and used in [[Social sciences|social]] and [[behavioral science]]s (mostly with linear parametric models).
 
==See also==
{{Portal|Artificial intelligence|Statistics}}
{{columns-list|2|width=95%|
*[[Artificial intelligence]]
*[[Bayes' theorem]]
*[[Dempster–Shafer theory]] – a generalization of Bayes' theorem
*[[Bayesian inference]]
*[[Bayesian probability]]
*[[Bayesian programming]]
*[[Belief propagation]]
*[[Chow–Liu tree]]
*[[Computational intelligence]]
*[[Computational phylogenetics]]
*[[Deep belief network]]
*[[Dynamic Bayesian network]]
*[[Expectation-maximization algorithm]]
*[[Factor graph]]
*[[Graphical model]]
*[[Hierarchical temporal memory]]
*[[Influence diagram]]
*[[Judea Pearl]]
*[[Kalman filter]]
*[[Machine learning]]
*[[Memory prediction framework]]
*[[Mixture density]]
*[[Mixture model]]
*[[Naive Bayes classifier]]
*[[Path analysis (statistics)|Path analysis]]
*[[Polytree]]
*[[Sensor fusion]]
*[[Sequence alignment]]
*[[Speech recognition]]
*[[Structural equation modeling]]
*[[Subjective logic]]
*[[Variable-order Bayesian network]]
*[[Wigmore chart]]
*[[World view]]
}}
 
==Notes==
 
{{Reflist|2}}
 
==General references==
{{Refbegin|2}}
* {{cite encyclopedia |last= Ben-Gal |first= Irad |editor= Ruggeri, Fabrizio; Kennett, Ron S.; Faltin, Frederick W |encyclopedia= Encyclopedia of Statistics in Quality and Reliability |title= Encyclopedia of Statistics in Quality and Reliability|url=http://www.eng.tau.ac.il/~bengal/BN.pdf |format=PDF |year= 2007 |publisher= [[John Wiley & Sons]] |isbn= 978-0-470-01861-3 |doi= 10.1002/9780470061572.eqr089 |chapter= Bayesian Networks}}
*{{cite book |last1= Bertsch McGrayne |first1= Sharon |title= The Theory That Would Not Die |publisher= [[Yale University Press]] }}
*{{cite book |last1= Borgelt|first1= Christian|last2= Kruse|first2= Rudolf |title= Graphical Models: Methods for Data Analysis and Mining |url= http://fuzzy.cs.uni-magdeburg.de/books/gm/ |date=March 2002|publisher= [[John Wiley & Sons|Wiley]] |location= [[Chichester|Chichester, UK]] |isbn= 0-470-84337-3}}
*{{cite encyclopedia |last= Borsuk |first= Mark Edward |editor= [[Sven Erik Jørgensen|Jørgensen, Sven Erik]], Fath, Brian |encyclopedia= Encyclopedia of Ecology |title= Ecological informatics: Bayesian networks |year= 2008| publisher= Elsevier|isbn= 978-0-444-52033-3}}
*{{cite book |last1=Castillo|first1=Enrique|last2=Gutiérrez |first2=José Manuel |last3=Hadi  |first3=Ali S. |title= Expert Systems and Probabilistic Network Models |series= Monographs in computer science |volume= |year= 1997 |publisher= [[Springer Science+Business Media|Springer-Verlag]]|location=New York |isbn= 0-387-94858-9|pages= 481–528 |chapter= Learning Bayesian Networks}}
*{{Cite book  | last=Comley  | first =Joshua W.  |author2=[http://www.csse.monash.edu.au/~dld Dowe, David L.]  | year = October 2003  | chapter = Minimum Message Length and Generalized Bayesian Nets with Asymmetric Languages  | chapter-url = http://www.csse.monash.edu.au/~dld/David.Dowe.publications.html#ComleyDowe2005 | editor-last = Grünwald | editor-first = Peter D.| editor2-last = Myung | editor2-first = In Jae | editor3-last = Pitt| editor3-first = Mark A.
| title = Advances in Minimum Description Length: Theory and Applications | series=Neural information processing series | place = [[Victoria (Australia)|Victoria, Australia]]| publication-place = [[Cambridge, Massachusetts]] | publisher = Bradford Books ([[MIT Press]])| publication-date = April 2005 | pages = 265–294 |isbn =  0-262-07262-9}} (This paper puts [[Decision tree learning|decision tree]]s in internal nodes of Bayes networks using [http://www.csse.monash.edu.au/~dld/MML.html Minimum Message Length] ([[Minimum message length|MML]]). An earlier version is [http://www.csse.monash.edu.au/~dld/David.Dowe.publications.html#ComleyDowe2003 Comley and Dowe (2003)], [http://www.csse.monash.edu.au/~dld/Publications/2003/Comley+Dowe03_HICS2003_GeneralBayesianNetworksAsymmetricLanguages.pdf .pdf].)
<!-- cite templates don't work for this one! -->
* {{Cite book | last = Darwiche|first=Adnan |title = [http://www.cambridge.org/9780521884389 Modeling and Reasoning with Bayesian Networks] |publisher = [[Cambridge University Press]] |year = 2009 |isbn = 978-0521884389}}
* Dowe, David L. (2010). [http://www.csse.monash.edu.au/~dld/Publications/2010/Dowe2010_MML_HandbookPhilSci_Vol7_HandbookPhilStat_MML+hybridBayesianNetworkGraphicalModels+StatisticalConsistency+InvarianceAndUniqueness_pp901-982.pdf MML, hybrid Bayesian network graphical models, statistical consistency, invariance and uniqueness], in Handbook of Philosophy of Science (Volume 7: Handbook of Philosophy of Statistics), Elsevier, [http://japan.elsevier.com/products/books/HPS.pdf ISBN 978-0-444-51862-0], pp [http://www.csse.monash.edu.au/~dld/Publications/2010/Dowe2010_MML_HandbookPhilSci_Vol7_HandbookPhilStat_MML+hybridBayesianNetworkGraphicalModels+StatisticalConsistency+InvarianceAndUniqueness_pp901-982.pdf 901–982].
* Fenton, Norman; Neil, Martin E. (November 2007). ''[http://www.agenarisk.com/resources/apps_bayesian_networks.pdf Managing Risk in the Modern World: Applications of Bayesian Networks]'' – A Knowledge Transfer Report from the London Mathematical Society and the Knowledge Transfer Network for Industrial Mathematics. [[London|London (England)]]: [[London Mathematical Society]].
*{{cite news |first= Norman|last= Fenton | first2= Martin E. |last2= Neil | title= Combining evidence in risk analysis using Bayesian Networks |url= https://www.dcs.qmul.ac.uk/~norman/papers/Combining%20evidence%20in%20risk%20analysis%20using%20BNs.pdf |format= PDF |work= Safety Critical Systems Club Newsletter |volume=13 |issue=4 |location= [[Newcastle upon Tyne]], England | pages= 8–13| date= July 23, 2004}}
*{{cite book |author1=Andrew Gelman |author2=John B Carlin |author3=Hal S Stern |coauthors=Donald B Rubin |title=Bayesian Data Analysis |chapter=Part II: Fundamentals of Bayesian Data Analysis: Ch.5 Hierarchical models |chapterurl=http://books.google.com/books?id=TNYhnkXQSjAC&pg=PA120 |year=2003 |publisher=CRC Press |isbn=978-1-58488-388-3 |pages=120– |url=http://books.google.com.au/books?id=TNYhnkXQSjAC}}
* {{Cite book| last = Heckerman | first =David | date = March 1, 1995
| contribution = Tutorial on Learning with Bayesian Networks | contribution-url = http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-95-06
| editor-last = Jordan | editor-first = Michael Irwin
| title = Learning in Graphical Models | series = Adaptive Computation and Machine Learning
| publication-place = [[Cambridge, Massachusetts]] | publication-date = 1998 | publisher = [[MIT Press]] | pages = 301–354 | isbn = 0-262-60032-3}}.
:Also appears as {{cite journal |date=March 1997|title= Bayesian Networks for Data Mining |journal= [[Data Mining and Knowledge Discovery]]|volume= 1| issue= 1 |pages= 79–119 |publisher= [[Springer Science+Business Media|Springer Netherlands]] |location= [[Netherlands]] |issn= 1384-5810|doi= 10.1023/A:1009730122752 |last1= Heckerman |first1= David}}
:An earlier version appears as [http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-95-06 Technical Report MSR-TR-95-06], Microsoft Research, March 1, 1995.  The paper is about both parameter and structure learning in Bayesian networks.
* {{Cite book| last=Jensen |first=Finn V| last2=Nielsen | first2 = Thomas D. | title = Bayesian Networks and Decision Graphs|edition=2nd |series=Information Science and Statistics series | publisher = [[Springer Science+Business Media|Springer-Verlag]] |location=[[New York]] | date = June 6, 2007| isbn=978-0-387-68281-5}}
* {{Cite book| last=Korb| first=Kevin B.|last2 = Nicholson|first2 = Ann E. | title = Bayesian Artificial Intelligence | edition = 2nd | publisher = [[Chapman & Hall]] ([[CRC Press]]) | date = December 2010 | isbn = 1-58488-387-1 | series=CRC Computer Science & Data Analysis | doi=10.1007/s10044-004-0214-5}}
* {{Cite journal |last=Lunn |first=D. |last2=Spiegelhalter |first2=D. |year=2009 |last3=Thomas |first3=A. |last4=Best |first4=N. |title=The BUGS project: Evolution, critique and future directions |journal=Statistics in Medicine |volume=28 |pmid=19630097 |issue=25 |pages=3049–3067 |doi=10.1002/sim.3680 }}
* {{cite journal |last= Neil |first= Martin |last2 = Fenton|first2= Norman E.|last3= Tailor|first3= Manesh |date=August 2005|title=Using Bayesian Networks to Model Expected and Unexpected Operational Losses |editor = Greenberg, Michael R. |journal= [[Society for Risk Analysis|Risk Analysis: an International Journal]] |volume= 25 |issue= 4 |pages= 963–972 |publisher= [[John Wiley & Sons]] |doi= 10.1111/j.1539-6924.2005.00641.x |url= http://www.dcs.qmul.ac.uk/~norman/papers/oprisk.pdf|format= pdf |pmid= 16268944}}
* {{cite journal |last= Pearl|first= Judea |authorlink= Judea Pearl |date=September 1986|title= Fusion, propagation, and structuring in belief networks |journal= [[Artificial Intelligence (journal)|Artificial Intelligence]]|volume= 29 |issue= 3 |pages= 241–288|publisher= [[Elsevier]] |issn= 0004-3702 |doi= 10.1016/0004-3702(86)90072-X}}
* {{Cite book | last = Pearl|first=Judea|authorlink = Judea Pearl |title = Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference |edition = 2nd printing|publisher = [[Morgan Kaufmann]] |location= [[San Francisco, California]] |year = 1988 |isbn = 0-934613-73-7 | series=Representation and Reasoning Series}}
* {{Cite book|last=Pearl|first=Judea|authorlink=Judea Pearl |last2=Russell |first2=Stuart |authorlink2=Stuart J. Russell |contribution=Bayesian Networks |editor-last=Arbib |editor-first=Michael A.|editor-link=Michael A. Arbib |title=Handbook of Brain Theory and Neural Networks |pages=157–160 |publication-place=[[Cambridge, Massachusetts]]|date=November 2002| publisher = Bradford Books ([[MIT Press]]) | isbn=0-262-01197-2}}
* {{Russell Norvig 2003}}.
* {{Cite journal|author=[http://www.cs.ust.hk/faculty/lzhang/bio.html Zhang, Nevin Lianwen]|author2 = [http://www.cs.ubc.ca/spider/poole/ Poole, David]| title = A simple approach to Bayesian network computations |journal = Proceedings of the Tenth Biennial Canadian Artificial Intelligence Conference (AI-94).| location = [[Banff, Alberta]] |date=May 1994| pages = 171–178}} This paper presents variable elimination for belief networks.
{{Refend}}
* ''Computational Intelligence: A Methodological Introduction'' by Kruse, Borgelt, Klawonn, Moewes, Steinbrecher, Held, 2013, Springer, ISBN 9781447150121
* ''Graphical Models - Representations for Learning, Reasoning and Data Mining'', 2nd Edition, by Borgelt, Steinbrecher, Kruse, 2009, J. Wiley & Sons, ISBN 9780470749562
 
==External links==
*[http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-95-06 A tutorial on learning with Bayesian Networks]
*[http://www.niedermayer.ca/papers/bayesian/bayes.html An Introduction to Bayesian Networks and their Contemporary Applications]
*[http://www.dcs.qmw.ac.uk/%7Enorman/BBNs/BBNs.htm On-line Tutorial on Bayesian nets and probability]
*[http://princesofserendib.com/ Web-App to create Bayesian nets and run them with a Monte Carlo method]
*[http://robotics.stanford.edu/~nodelman/papers/ctbn.pdf Continuous Time Bayesian Networks]
*[http://wiki.syncleus.com/index.php/DANN:Bayesian_Network Bayesian Networks: Explanation and Analogy]
*[http://videolectures.net/kdd07_neapolitan_lbn/ A live tutorial on learning Bayesian networks]
*[http://www.biomedcentral.com/1471-2105/7/514/abstract A hierarchical Bayes Model for handling sample heterogeneity in classification problems], provides a classification model taking into consideration the uncertainty associated with measuring replicate samples.
*[http://www.labmedinfo.org/download/lmi339.pdf Hierarchical Naive Bayes Model for handling sample uncertainty], shows how to perform classification and learning with continuous and discrete variables with replicated measurements.
 
{{DEFAULTSORT:Bayesian Network}}
[[Category:Bayesian networks| ]]
[[Category:Networks]]
[[Category:Statistical models]]
[[Category:Graphical models]]
