A Bayesian network (or Bayesian belief network), which is used to classify XML documents in this project,
is a directed acyclic graph consisting of the following elements \cite{Jensen2001}:
\begin{itemize}
 \item A set of variables and a set of directed edges between variables.
 \item Each variable has a finite set of mutually exclusive states.
 \item The variables together with the directed edges form a directed acyclic graph (DAG).
 \item To each variable $A$ with parents $B_1, B_2, \ldots, B_n$, a conditional probability table
$P(A \mid B_1, \ldots, B_n)$ is attached. If $A$ has no parents, the table reduces to the unconditional
probability table $P(A)$.
\end{itemize}
In a Bayesian network, an edge from node (variable) $B$ to node $A$ represents that $B$ has a direct
probabilistic influence on $A$: $B$ may cause $A$. The strength of this influence under each
combination of parent states is given by the conditional probability table.
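As a concrete illustration, the definition above can be sketched as a small data structure: each node stores its parents together with a conditional probability table, and a depth-first search verifies that the directed edges form a DAG. The class and method names below are illustrative, not taken from any particular library.

```python
# Minimal sketch of a Bayesian-network structure: variables, directed edges
# (given as parent lists), and one CPT per node, as in the definition above.
class BayesianNetwork:
    def __init__(self):
        self.parents = {}   # node name -> tuple of parent node names
        self.cpt = {}       # node name -> conditional probability table

    def add_node(self, name, parents, cpt):
        """cpt maps (own value, *parent values) or similar keys to probabilities."""
        self.parents[name] = tuple(parents)
        self.cpt[name] = cpt

    def is_acyclic(self):
        """Check that the parent relation forms a DAG via depth-first search."""
        visiting, done = set(), set()

        def visit(n):
            if n in done:
                return True
            if n in visiting:
                return False          # back edge found -> cycle
            visiting.add(n)
            ok = all(visit(p) for p in self.parents.get(n, ()))
            visiting.discard(n)
            done.add(n)
            return ok

        return all(visit(n) for n in self.parents)
```

A network is built by adding one node at a time with its parents and table; `is_acyclic` then confirms the third requirement of the definition.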

Such networks are widely used for reasoning under uncertainty, for example in medical
diagnosis, information retrieval, image processing and document classification.

\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{background/sprinkler}
\caption{A simple Bayesian network}
\label{simple:bayesian}
\end{center}
\end{figure}
Figure \ref{simple:bayesian} shows a simple example of a Bayesian network that appears in much of the introductory material on the subject. From this figure, we can see that the event ``grass wet'' may be caused by two possible events: the water sprinkler being on, or rain. The conditional probabilities are shown in the tables. For example, $\Pr(\mathit{WetGrass} = \mathrm{True} \mid \mathit{Sprinkler} = \mathrm{True}, \mathit{Rain} = \mathrm{False}) = 0.9$, which means that given the sprinkler is on and it is not raining, the probability that the grass is wet is $0.9$.
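To make the tables of this example concrete, they can be written out as plain Python dictionaries. Only the $0.9$ entry is quoted in the text; the remaining numbers are the values conventionally used with this textbook example and should be read as assumptions here.

```python
# The sprinkler network of the figure as explicit probability tables.
# Only the 0.9 entry comes from the text; the rest are the usual textbook values.
T, F = True, False

p_cloudy    = {T: 0.5, F: 0.5}                                   # P(C)
p_sprinkler = {T: {T: 0.1, F: 0.9}, F: {T: 0.5, F: 0.5}}         # P(S | C)
p_rain      = {T: {T: 0.8, F: 0.2}, F: {T: 0.2, F: 0.8}}         # P(R | C)
p_wet = {(T, T): {T: 0.99, F: 0.01}, (T, F): {T: 0.9, F: 0.1},   # P(W | S, R)
         (F, T): {T: 0.9,  F: 0.1},  (F, F): {T: 0.0, F: 1.0}}

def joint(c, s, r, w):
    """P(C,S,R,W) = P(C) P(S|C) P(R|C) P(W|S,R): the factorisation the DAG encodes."""
    return p_cloudy[c] * p_sprinkler[c][s] * p_rain[c][r] * p_wet[(s, r)][w]

# The table entry quoted in the text:
assert p_wet[(T, F)][T] == 0.9
```

Because each node's table is conditioned only on its parents, the product above over all four nodes already defines the full joint distribution.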

The independence relationships encoded in a Bayesian network can be stated as follows: a node is
conditionally independent of its non-descendants given its parents. Taking the network in Figure
\ref{simple:bayesian} as an example, given the state of the sprinkler (on or off) and whether it is
raining, the event ``grass wet'' is independent of the event ``cloudy''. In other words, knowing
whether it is cloudy provides no additional information for reasoning about whether the grass is wet,
once the states of the sprinkler and the rain are known.
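This independence statement can be checked numerically. The sketch below enumerates the joint distribution of the sprinkler network (using the conventional CPT values for this example, which are assumptions apart from the $0.9$ entry quoted earlier) and confirms that, once the sprinkler and rain states are fixed, the probability of wet grass is the same whether or not it is cloudy.

```python
# Numerical check: given Sprinkler and Rain, the probability of wet grass
# does not depend on Cloudy.
T, F = True, False
pC = {T: 0.5, F: 0.5}
pS = {T: {T: 0.1, F: 0.9}, F: {T: 0.5, F: 0.5}}               # P(S | C)
pR = {T: {T: 0.8, F: 0.2}, F: {T: 0.2, F: 0.8}}               # P(R | C)
pW = {(T, T): {T: 0.99, F: 0.01}, (T, F): {T: 0.9, F: 0.1},
      (F, T): {T: 0.9,  F: 0.1},  (F, F): {T: 0.0, F: 1.0}}   # P(W | S, R)

def joint(c, s, r, w):
    return pC[c] * pS[c][s] * pR[c][r] * pW[(s, r)][w]

def p_wet_given(c, s, r):
    """P(W=T | C=c, S=s, R=r) by direct enumeration of the joint."""
    num = joint(c, s, r, T)
    return num / (num + joint(c, s, r, F))

# Same answer (0.9) whether or not it is cloudy:
assert abs(p_wet_given(T, T, F) - p_wet_given(F, T, F)) < 1e-12
```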

Several properties of Bayesian networks have been studied, such as ``d-separation'' and ``explaining away''.

``Explaining away'' is known in statistics as Berkson's paradox, or ``selection bias''. Using Figure
\ref{simple:bayesian} again, suppose we observe that the grass is wet and that it is raining. The posterior probability that the sprinkler is on then goes down: $\Pr(S = \mathrm{True} \mid W = \mathrm{True}, R = \mathrm{True}) = 0.1945$.
This is because the event ``raining'' is already sufficient to explain the evidence that the grass is wet.
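The posterior quoted above can be reproduced by brute-force enumeration of the joint distribution. The CPT values below are the conventional ones for this example (an assumption, apart from the $0.9$ entry quoted earlier); with them, observing rain lowers the sprinkler posterior from roughly $0.43$ to $0.1945$.

```python
# Brute-force demonstration of "explaining away" on the sprinkler network.
from itertools import product

T, F = True, False
pC = {T: 0.5, F: 0.5}
pS = {T: {T: 0.1, F: 0.9}, F: {T: 0.5, F: 0.5}}               # P(S | C)
pR = {T: {T: 0.8, F: 0.2}, F: {T: 0.2, F: 0.8}}               # P(R | C)
pW = {(T, T): {T: 0.99, F: 0.01}, (T, F): {T: 0.9, F: 0.1},
      (F, T): {T: 0.9,  F: 0.1},  (F, F): {T: 0.0, F: 1.0}}   # P(W | S, R)

def joint(c, s, r, w):
    return pC[c] * pS[c][s] * pR[c][r] * pW[(s, r)][w]

def p_sprinkler_on(w, r=None):
    """P(S=T | W=w) or, if r is given, P(S=T | W=w, R=r), by enumeration."""
    num = den = 0.0
    for c, s, rr in product((T, F), repeat=3):
        if r is not None and rr != r:
            continue
        p = joint(c, s, rr, w)
        den += p
        if s:
            num += p
    return num / den

# Observing rain "explains away" the wet grass and lowers the sprinkler posterior.
assert round(p_sprinkler_on(T, T), 4) == 0.1945   # the value quoted in the text
assert p_sprinkler_on(T, T) < p_sprinkler_on(T)
```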

``D-separation'' stands for dependency separation. When performing inference on a Bayesian network with given
evidence, some nodes cannot contribute any information to the inference. For example, given the states
of the sprinkler and the rain, the event ``cloudy'' provides no further information when inferring
whether the grass is wet. We therefore say that ``grass wet'' and ``cloudy'' are d-separated given the evidence
``sprinkler'' and ``rain'' (otherwise, they would be d-connected). Mathematically speaking, if $G$ is a directed graph in which $X$, $Y$ and $Z$ are disjoint sets of vertices, then $X$ and $Y$ are d-connected by $Z$ in $G$ if and only if there exists an undirected path $U$ between
some vertex in $X$ and some vertex in $Y$ such that for every collider $C$ on $U$, either $C$ or
a descendant of $C$ is in $Z$, and no non-collider on $U$ is in $Z$. $X$ and $Y$ are d-separated by $Z$ in $G$ if and only if they are not d-connected by $Z$ in $G$.
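The path-based definition above translates almost directly into code. The sketch below (illustrative names, no external libraries) takes a small DAG as a map from each node to its parents, enumerates the simple undirected paths between two nodes, and applies the collider rule to decide d-connection.

```python
# Direct transcription of the path-based definition of d-connection for
# single nodes x and y given an observed set z, on a DAG small enough to
# enumerate all simple undirected paths.

def descendants(dag, node):
    """All nodes reachable from `node` along directed edges."""
    children = {n: [c for c, ps in dag.items() if n in ps] for n in dag}
    out, stack = set(), [node]
    while stack:
        for c in children[stack.pop()]:
            if c not in out:
                out.add(c)
                stack.append(c)
    return out

def d_connected(dag, x, y, z):
    """True iff x and y are d-connected given the set of observed nodes z."""
    nbrs = {n: set(ps) for n, ps in dag.items()}   # undirected skeleton
    for n, ps in dag.items():
        for p in ps:
            nbrs[p].add(n)

    def paths(cur, seen):                          # all simple paths cur -> y
        if cur == y:
            yield [cur]
            return
        for n in nbrs[cur] - seen:
            for rest in paths(n, seen | {n}):
                yield [cur] + rest

    for path in paths(x, {x}):
        open_path = True
        for a, c, b in zip(path, path[1:], path[2:]):   # interior triples
            collider = a in dag[c] and b in dag[c]      # both edges point into c
            if collider and c not in z and not (descendants(dag, c) & z):
                open_path = False                       # blocked collider
                break
            if not collider and c in z:
                open_path = False                       # observed chain/fork node
                break
        if open_path:
            return True
    return False

def d_separated(dag, x, y, z):
    return not d_connected(dag, x, y, z)
```

Encoding Figure \ref{simple:bayesian} as `{'Cloudy': (), 'Sprinkler': ('Cloudy',), 'Rain': ('Cloudy',), 'WetGrass': ('Sprinkler', 'Rain')}`, this reports ``grass wet'' and ``cloudy'' as d-separated given sprinkler and rain, and sprinkler and rain as d-connected given wet grass, which is exactly the explaining-away situation discussed earlier.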
