\documentclass[11pt]{article}
\usepackage{fullpage}
\newcommand{\eat}[1]{}

\title{Tutorial for Libra 0.4.0}
\author{Daniel Lowd $<$lowd@cs.uoregon.edu$>$}
\begin{document}
\maketitle

\section*{Introduction}

This document describes how to use the Libra toolkit for various
learning and inference problems.  All necessary files are included in
the {\tt doc/tutorial/} directory of the standard distribution.

\section*{Quick Installation}

Libra was designed to run from the command line under Linux, Mac OS X,
or Windows (with the help of Cygwin).  The following assumes that you
have the OCaml programming language installed; it is available from
most package managers (such as MacPorts) and online from
{\tt http://caml.inria.fr/}.

First, unpack the source distribution:
\begin{verbatim}
  tar -xzvf libra-tk-0.4.0.tar.gz
  cd libra-tk-0.4.0
\end{verbatim}
This creates the {\tt libra-tk-0.4.0/} directory and makes it your
working directory.  For the remainder of this document, all paths will
be relative to this directory.

Next, build the executables:
\begin{verbatim}
  cd src
  make clean; make
  cd ..
\end{verbatim}
All programs should now be present in the directory {\tt bin/}.  For
convenience, you may wish to add this directory to your path when
working with Libra.  Here are the commands to do so under the 
{\tt bash} shell:
\begin{verbatim}
  cd bin
  export PATH=$PATH:`pwd`
  cd ..
\end{verbatim}
Note that {\tt pwd} is surrounded by backticks ({\tt `}),
not apostrophes ({\tt '}).

Finally, to continue with this tutorial, change to the tutorial
directory:
\begin{verbatim}
  cd doc/tutorial
\end{verbatim}

\section*{Task Description}

In this tutorial, you will train a model from data in two different
ways, evaluate model accuracy on held-out data, and answer queries
exactly and approximately.

As our dataset, we will use the Microsoft Anonymous Web
Data\footnote{Available from the UCI repository at: {\tt
http://kdd.ics.uci.edu/databases/msweb/msweb.html}}, which records the
areas (Vroots) of microsoft.com that each user visited during one week
in February 1998.  A small subset of this data is present in the
tutorial directory, already converted into Libra's data format: each
line is one example, represented as a list of comma-separated values
(one discrete value per variable).
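For illustration, a Libra data file for a hypothetical five-variable
domain might begin with lines like the following (each example in the
msweb files actually has 294 values per line):
\begin{verbatim}
0,0,1,0,0
0,1,1,0,0
1,0,0,0,1
\end{verbatim}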

\section*{Step 1: Train a Chow-Liu Tree}

To train a Chow-Liu tree, use the {\bf cl} program:
\begin{verbatim}
  cl -i msweb.data -s msweb.schema -o msweb-cl.xmod -prior 1
\end{verbatim}
The most important options are {\tt -i}, for indicating the training
data, and {\tt -o}, for indicating the output file.  Libra supports
two formats for Bayesian networks, BIF and XMOD.  XMOD is preferred,
since it supports both tree and table CPDs.  Programs in Libra will 
automatically guess the type of Bayesian network you want by looking
at the suffix of the filename -- {\tt .xmod} for XMOD and {\tt .bif}
for BIF.

The {\tt -s} option allows you to specify a variable schema, i.e., the
number of values each variable can take.  If unspecified, Libra will
guess it from the training data.  {\tt -prior} specifies the prior
counts to use when estimating the parameters of the network, for
smoothing.

The {\tt -log} flag allows you to redirect log output to another file
instead of standard output.  You can use the flag {\tt -v} to
enable more verbose output, or {\tt -debug} to turn on detailed
debugging output.  These three flags are available in every program,
although for some programs they have minimal effect.  Try them with
{\bf cl} to see what happens.
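For example, the following invocation enables verbose output and
redirects it to a file named {\tt cl.log} (the log filename here is
arbitrary):
\begin{verbatim}
  cl -i msweb.data -s msweb.schema -o msweb-cl.xmod -prior 1 -v -log cl.log
\end{verbatim}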

To see a list of all options, run {\bf cl} without any arguments.
This also works with any other program in the toolkit.

\section*{Step 2: Circuit Compilation}

We can convert our newly trained BN into an equivalent arithmetic
circuit using the {\bf acve} program:
\begin{verbatim}
  acve -m msweb-cl.xmod -o msweb-cl.ac
\end{verbatim}
{\bf acve} is a modified implementation of the AC variable elimination
(ACVE) algorithm proposed by Chavira and Darwiche (2007)\footnote{The
key difference is that our ADDs handle multivalued variables using
$k$-way splits rather than introducing additional variables.  We've
found that this sometimes leads to more compact circuits and sometimes
to less compact ones.}.  In addition to compiling simple
models (such as our tree-structured BN), ACVE can handle many complex
models with internal structure by representing CPDs as algebraic
decision diagrams (ADDs).

If you run {\bf acve} with no arguments, you will notice a number of
options related to thresholds and pruning.  These are experimental
features for approximately compiling BNs or MNs when the exact compilation
would be intractable.  The use of these options is not currently
recommended.

\section*{Step 3: File Information}

To obtain basic information about most file types supported by Libra,
use the {\tt fstats} command:
\begin{verbatim}
  fstats -i msweb.data
  fstats -i msweb-cl.xmod
  fstats -i msweb-cl.ac
\end{verbatim}

The output of {\tt fstats} will depend on the file type as well as its
contents.  Below is the result of running {\tt fstats -i msweb.data}:
\begin{verbatim}
Filename: msweb.data
Filetype: Data
Points: 1000
Variables: 294
Schema: 1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,
2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,
2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,
2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,
2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2

Fraction missing: 0.000000
\end{verbatim}

The schema here is inferred from the variable ranges observed in the
data file.  Note that the range is listed as ``1'' for many of the
variables.  This is because these variables only appear with the value
``0'' in this short data file.

\section*{Step 4: Circuit Learning}

To learn models with higher treewidth, use the {\bf aclearnstruct}
program:
\begin{verbatim}
  aclearnstruct -i msweb.data -s msweb.schema -o msweb-ac.ac -ob msweb-ac.xmod
\end{verbatim}

As in {\bf cl}, {\tt -i} designates the data and {\tt -s} the
(optional) schema.  {\bf aclearnstruct} has two outputs: {\tt -o}
designates the output AC, and {\tt -ob} designates the output BN.  The
BN and AC are guaranteed to represent the same distribution, but the
AC is more convenient for inference.

{\bf aclearnstruct} is sensitive to a number of parameters that
control how it trades off the accuracy of the model with the size of
the circuit.  In experiments on a small set of domains, the following 
parameter settings worked fairly well:
\begin{verbatim}
  -pe 100 -ps 0 -maxe 100000 -shrink
\end{verbatim}
{\tt -pe} and {\tt -ps} specify per-edge and per-split penalties,
respectively, in the score function.  When the per-edge cost is
higher, {\bf aclearnstruct} will more strongly prefer structures with
few edges.  The per-split penalty is for early stopping, in order to
avoid overfitting.  With the {\bf -psthresh} option, {\bf
aclearnstruct} will output models for different values of {\bf -ps} as
learning progresses, without needing to rerun training.

The last two options specify the maximum number of edges (100,000, in
this case) and tell {\bf aclearnstruct} to keep running until this
edge limit is met, reducing the per-edge cost as necessary.  This way,
we can start with a conservative edge cost (such as 100), select only
the most promising simple structures, and make compromises later as
our edge budget allows.

Another useful option is {\tt -quick}, which relaxes the default
greedy heuristic to be only approximately greedy.  In practice, it is
often an order of magnitude faster with only slightly worse accuracy.
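Putting these options together, a full training command might look
like this (the output filenames here are arbitrary):
\begin{verbatim}
  aclearnstruct -i msweb.data -s msweb.schema -o msweb-ac2.ac \
      -ob msweb-ac2.xmod -pe 100 -ps 0 -maxe 100000 -shrink -quick
\end{verbatim}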

The {\tt -parents} option allows you to control which variables are
allowed to be parents of which other variables.  See the manual for
more details.


{\bf aclearnstruct} can also be used to learn BNs that cannot be
compactly represented as circuits, by using the {\tt -noac} flag.
When this flag is used, no AC is generated and the per-edge penalty is
zero.

\section*{Step 5: Model Scoring}

We can compute the average log likelihood per example using the {\bf
mscore} program:
\begin{verbatim}
  mscore -m msweb-cl.xmod -i msweb.test
  mscore -m msweb-cl.ac -i msweb.test
\end{verbatim}
{\tt -m} specifies the MN, BN, or AC to score, and {\tt -i} specifies
the test data.  The filetype is inferred from the extension: {\tt .mn}
for MNs; {\tt .xmod} and {\tt .bif} for BNs; and {\tt .ac} for ACs.
Since we generated this AC from the BN, both scores should be
identical, apart from minor rounding errors (which do show up in this
case).  To see the likelihood of every test case individually, enable
verbose output with {\tt -v}.

If we also score {\tt msweb-ac.xmod}, we see that the extra
expressiveness allowed by {\bf aclearnstruct} has indeed translated to
higher log likelihood on the test data ($-9.911$ vs.\ $-10.004$).

\section*{Step 6: AC Exact Inference}

Now that we have an AC, we wish to use it to answer queries using {\bf
acquery}.  The simplest query is computing all single-variable marginals:
\begin{verbatim}
  acquery -c msweb-cl.ac -marg
\end{verbatim}
{\tt -marg} specifies that we want to compute the marginals.  The output
list of 294 marginal distributions is a bit overwhelming.  For more
convenient reading, you can redirect just the marginals to a file
using the {\tt -mo} flag.  (Once {\tt -mo} has been specified, {\tt
-marg} is implied and may be omitted.)
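For example, the following writes the marginals to a file named
{\tt msweb-cl.marg} (the filename is arbitrary):
\begin{verbatim}
  acquery -c msweb-cl.ac -mo msweb-cl.marg
\end{verbatim}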

We can also use {\bf acquery} to compute the probabilities of
configurations of multiple variables, such as the probability that
variables two through five all equal 0.  To do this, we need to create
a query file that defines every configuration we're interested in.
The format is identical to Libra data files, except that we use ``*''
in place of an actual value for variables whose values are
unspecified.  {\tt msweb.q} contains three example queries.
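As an illustration, in a five-variable domain, a query for the
probability that variables two through five all equal 0 (leaving
variable one unspecified) would be written as the single line:
\begin{verbatim}
*,0,0,0,0
\end{verbatim}
To run the three queries in {\tt msweb.q}: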
\begin{verbatim}
  acquery -c msweb-cl.ac -q msweb.q
\end{verbatim}
{\bf acquery} outputs the log probability of each query as
well as the average over all queries and its standard deviation.
With the {\tt -v} flag, {\bf acquery} will print out query times in
a second column.

An evidence file, in the same format as the query file, can be
specified using {\tt -ev}:
\begin{verbatim}
  acquery -c msweb-cl.ac -q msweb.q -ev msweb.ev -v
\end{verbatim}

Evidence can also be used when computing marginals.  If you specify
both {\tt -q} and {\tt -marg}, then the probability of each query will
be computed as the product of the marginals of its variables.  This is
typically less accurate, since it ignores correlations among the query
variables.
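In other words, for a query over variables $q_1, \ldots, q_k$ with
evidence $e$, the reported value is the approximation
\[ P(q_1, \ldots, q_k \mid e) \approx \prod_{i=1}^k P(q_i \mid e), \]
which is exact only when the query variables are conditionally
independent given the evidence.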

To obtain the most likely variable state (possibly conditioned on
evidence), use the {\tt -mpe} flag:
\begin{verbatim}
  acquery -c msweb-cl.ac -ev msweb.ev -mpe
\end{verbatim}
This computes the most probable explanation (MPE) state for each
evidence configuration.  If the MPE state is not unique, {\bf acquery}
will select one arbitrarily.  If you specify a query with {\tt -q},
then {\bf acquery} will print out the fraction of query variables that
differ from the predicted MPE state:  
\begin{verbatim}
  acquery -c msweb-cl.ac -q msweb.q -ev msweb.ev -mpe
\end{verbatim}

\section*{Step 7: Approximate Inference}

Sometimes, we have a BN or MN (perhaps learned with some other
toolkit) for which there is no efficient AC.  We can still obtain
answers from this network by running mean field, belief propagation,
or Gibbs sampling.

For example, to run the same queries as above using mean field
inference:%
\footnote{{\tt mf} is also the name of the executable for the Metafont 
program distributed with \TeX.
Depending on your system setup, you may need to use the path
when referring to {\bf mf}, e.g.\ {\tt ../../bin/mf}.}
\begin{verbatim}
  mf -m msweb-cl.xmod
  mf -m msweb-cl.xmod -q msweb.q
  mf -m msweb-cl.xmod -q msweb.q -ev msweb.ev -v
\end{verbatim}
Note that there is no {\tt -marg} option, since mean field always
approximates the distribution as a product of marginals.  To trade off
accuracy and speed in {\tt mf}, use the {\tt -thresh} and {\tt
-maxiter} arguments, which specify the convergence threshold and
maximum number of iterations to run, respectively.
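For example, to cap {\tt mf} at 50 iterations with a convergence
threshold of $10^{-4}$ (both values chosen arbitrarily here):
\begin{verbatim}
  mf -m msweb-cl.xmod -q msweb.q -ev msweb.ev -maxiter 50 -thresh 0.0001
\end{verbatim}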

Belief propagation ({\bf bp}) supports the same command line
parameters as {\bf mf}, e.g.:
\begin{verbatim}
  bp -m msweb-cl.xmod -q msweb.q -ev msweb.ev -v
  bp -m msweb-cl.xmod -q msweb.q -ev msweb.ev -sameev -v
\end{verbatim}
The {\tt -sameev} option in the second command runs BP only once with
the first evidence configuration in {\tt msweb.ev}, and then reuses the
marginals for answering all queries in {\tt msweb.q}.  (This is also
supported by {\tt mf}.)

Gibbs sampling is done by {\bf gibbs}, and follows similar
conventions.  Gibbs sampling is an anytime algorithm, which means you
can specify how long it should run.  The parameters controlling this
are {\tt -burnin} (the number of burn-in iterations), {\tt -sampling}
(the number of sampling iterations), and {\tt -chains} (the number of
simultaneous chains to run).  For convenience, you can also set these
parameters using the {\tt -speed} option.  Valid arguments are {\tt
fast}, {\tt medium}, {\tt slow}, {\tt slower}, and {\tt slowest},
corresponding to a total of 1k/10k/100k/1M/10M sampling iterations,
respectively.
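For example, to run Gibbs sampling with 1,000 burn-in iterations,
10,000 sampling iterations, and two chains (values chosen here purely
for illustration):
\begin{verbatim}
  gibbs -m msweb.xmod -q msweb.q -ev msweb.ev -burnin 1000 \
      -sampling 10000 -chains 2
\end{verbatim}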

Like {\bf acquery}, {\bf gibbs} can answer queries using either the
full joint distribution (the default) or just the marginals (with {\tt
-marg}).  The latter option is helpful for estimating the probabilities
of rare events that may never show up in the samples.

You can also print the raw samples to a file using {\tt -so}.  To
obtain repeatable experiments, use {\tt -seed} to specify a random
seed.

Examples:
\begin{verbatim}
  gibbs -m msweb.xmod -mo msweb-gibbs.marg
  gibbs -m msweb.xmod -q msweb.q -ev msweb.ev -v
  gibbs -m msweb.xmod -q msweb.q -ev msweb.ev -speed medium -v
  gibbs -m msweb.xmod -q msweb.q -ev msweb.ev -marg -sameev
\end{verbatim}

Approximate MPE (most probable explanation) inference is done by {\bf
maxprod}, an implementation of the max-product algorithm.  Max-product
is identical to belief propagation, except that it replaces summation
with maximization to find the (approximate) most likely variable
configuration for the non-evidence variables.  Like {\bf bp}, it is
exact on tree-structured networks.  Without a query file, {\bf
maxprod} prints the MPE states given the evidence.  Given a query
file, {\bf maxprod} prints the fraction of non-evidence variables that
are different from the estimated MPE state.

Examples:
\begin{verbatim}
  maxprod -m msweb-cl.xmod -ev msweb.ev -mo msweb-cl.mpe
  maxprod -m msweb.xmod -q msweb.q -ev msweb.ev -v
\end{verbatim}

\section*{Step 8: Markov Networks}

As of version 0.3.0, Libra supports inference and
scoring of Markov networks.  One way to create a Markov network is to
convert a Bayesian network using {\tt mconvert}:
\begin{verbatim}
  mconvert -m msweb.xmod -o msweb.mn
\end{verbatim}

The resulting MN can be used for approximate inference with {\tt bp}, {\tt
gibbs}, or {\tt mf}:
\begin{verbatim}
  bp -m msweb.mn
  mf -m msweb.mn -ev msweb.ev
  gibbs -m msweb.mn -q msweb.q -ev msweb.ev -v
\end{verbatim}

You can also compute the pseudo-log-likelihood (PLL) of test data using {\tt
mscore}:
\begin{verbatim}
  mscore -m msweb.mn -i msweb.test
\end{verbatim}
In general, log-likelihood is intractable to compute in a Markov network.
PLL is the sum of the conditional log-probabilities of
each variable given its Markov blanket.  This is much faster to
compute, but not directly comparable to likelihood.  To force {\tt
mscore} to use PLL when scoring a BN, use the {\tt -pll} flag:
\begin{verbatim}
  mscore -m msweb.xmod -i msweb.test -pll
\end{verbatim}
If you do not use {\tt -pll}, {\tt mscore} will print the unnormalized
log-likelihood, which differs from the true log-likelihood by a
constant called the log partition function.
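To make this concrete, suppose the Markov network defines the
distribution
\[ P(x) = \frac{1}{Z} \prod_j \phi_j(x), \]
where the $\phi_j$ are potential functions and $Z$ is the partition
function.  Then the unnormalized log-likelihood reported by {\tt
mscore} is $\sum_j \log \phi_j(x)$, which differs from the true
log-likelihood $\log P(x)$ by exactly $\log Z$; the PLL, by contrast,
is $\sum_i \log P(x_i \mid x_{-i})$, where $x_{-i}$ denotes the
remaining variables (of which only the Markov blanket of $X_i$
matters).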

{\tt mconvert} can also be used to convert between XMOD and BIF BN
formats, e.g.:
\begin{verbatim}
  mconvert -m msweb-cl.xmod -o msweb-cl.bif
\end{verbatim}
If an XMOD file is the source, it must not contain tree CPDs, since
these are not supported by BIF.  ({\tt mconvert} can also be used to
simplify ACs and MNs by conditioning them on evidence, as described in
the manual.)

\section*{Dependency Networks}

A dependency network (DN) specifies a conditional probability
distribution for each variable given its parents.  However, unlike a
Bayesian network, the graph of parent-child relationships may contain
cycles.  With Libra, you can learn a dependency network with
tree-structured conditional probability distributions using {\tt
dnlearn}:
\begin{verbatim}
  dnlearn -i msweb.data -s msweb.schema -o msweb-dn.xmod -prior 1
\end{verbatim}
The resulting dependency network will be saved as an XMOD file; the
XMOD format itself does not distinguish between BNs and DNs.

DNs can be scored using {\tt mscore}:
\begin{verbatim}
  mscore -m msweb-dn.xmod -i msweb.test -depnet
\end{verbatim}
The score of a DN is a pseudo-log-likelihood, which is not directly
comparable to log-likelihood.  Note that using the flag {\tt -pll} and
omitting the flag {\tt -depnet} will cause {\tt mscore} to interpret
the DN as a BN and generate incorrect results.

Two approximate inference algorithms are available for DNs: Gibbs
sampling and mean field.  To use them with a DN instead of a BN, just
add the {\tt -depnet} flag to the command line:
\begin{verbatim}
  gibbs -m msweb-dn.xmod -mo msweb-dn-gibbs.marg -depnet
  mf -m msweb-dn.xmod -ev msweb.ev -q msweb.q -depnet
\end{verbatim}

\section*{Miscellaneous}

This tutorial covers most of Libra, but not all.  The {\tt bnsample}
program can be used for generating i.i.d.\ samples from a Bayesian
network; its use is fairly straightforward.  The {\tt acopt} program
is for adjusting the weights of an arithmetic circuit, either to
maximize the log likelihood of training data (weight learning), or to
minimize the reverse Kullback-Leibler divergence to a Bayesian network
(variational inference).  Details are beyond the scope of the present
tutorial.
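For instance, drawing samples from the Chow-Liu model might look like
the following (the {\tt -n} flag for the number of samples is an
assumption based on the conventions used elsewhere in the toolkit; run
{\bf bnsample} with no arguments to see its actual options):
\begin{verbatim}
  bnsample -m msweb-cl.xmod -o msweb-sampled.data -n 1000
\end{verbatim}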

\subsubsection*{Acknowledgments} 
Thanks to Andrey Kolobov and Marc Sumner for beta testing this
tutorial.

\end{document}
