<html>

<head>
<title>URLearning Software</title>
</head>

<body>

This is software we have developed for learning a Bayesian network structure that optimizes a scoring function for a given dataset. &nbsp;The&nbsp;<a href="http://www.cs.helsinki.fi/u/bmmalone/urlearning-files/urlearning-cpp.tar.gz" target="_blank">C++ source code</a> is available as a NetBeans project. &nbsp;It requires <a href="http://www.boost.org/" target="_blank">Boost</a>. &nbsp;<a href="http://www.cs.helsinki.fi/u/bmmalone/urlearning-files/ConfiguringNetBeansforURLearning.pdf" target="_blank">This file</a> describes the steps necessary to configure NetBeans to compile the code. &nbsp;<a href="http://www.cs.helsinki.fi/u/bmmalone/urlearning-files/AddinganewconfigurationtoURLearning.pdf" target="_blank">This file</a> describes the steps necessary to add a new configuration (algorithm) to the project.
<div><br>
</div>
<div>You can also clone the developer version of the code with this command:&nbsp;<font face="courier new, monospace">hg clone https://bitbucket.org/bmmalone/urlearning-cpp</font><br>
<div><br>
</div>
<div>Furthermore, a set of scripts is available which implements most of the developments we published in <a href="http://www.cs.helsinki.fi/u/bmmalone/papers/Fan2014a.pdf" target="_blank">UAI 2014</a> and <a href="http://cs.helsinki.fi/u/bmmalone/papers/Fan2014.pdf" target="_blank">AAAI 2014</a>. The scripts can be cloned from the following repository:&nbsp;<font face="courier new, monospace">hg clone ssh://hg@bitbucket.org/bmmalone/bnscripts</font>. The readme.uai2014.txt file describes the prerequisites and the steps necessary to run the code.</div>
<div><br>
</div>
<div>There is also an older <a href="http://www.cs.helsinki.fi/u/bmmalone/urlearning-files/URLearning.jar" target="_blank">Java version</a> available. &nbsp;It is not as efficient as the C++ version, but it includes more features and does not require any external dependencies.<br>
<div><br>
</div>
<div><font size="3"><b>Datasets</b></font></div>
<div><font size="2">These packages contain all of the files necessary to reproduce the experiments in many of our papers.</font></div>
<div><font size="2"><br>
</font></div>
<div><font size="2">Papers prior to UAI 2013</font></div>
<div>
<ul><li><font size="2"><a href="http://cs.helsinki.fi/u/bmmalone/urlearning-files/data.published.csv.tar.gz">Published datasets (csv only)</a>&nbsp;- The input files necessary to reproduce the experiments in most of our papers prior to UAI 2013, including the input datasets (csv folder). &nbsp;Most of these datasets are processed versions of datasets from the <a href="http://archive.ics.uci.edu/ml/" target="_blank">UCI machine learning repository</a>. Continuous variables were binarized around their mean. &nbsp;Each value of a categorical variable was mapped arbitrarily to an integer (e.g., a categorical variable with four categories would be mapped arbitrarily to {0, 1, 2, 3}); using these integer codes, the categorical variables were then also binarized around the mean.&nbsp;&nbsp;For most datasets, records with missing values were removed. &nbsp;This process sometimes results in variables with only a single observed value; these variables sometimes affect the scores in unexpected ways, especially fNML. &nbsp;I am working to remove these datasets.<br>
<br>
Scores were calculated for these datasets with a parent limit of 8; furthermore, a time limit of 10 minutes of actual score calculation time was imposed on each variable (post-processing pruning and writing to disk are not included in this limit). &nbsp;The scores were calculated on a node which has XXX. &nbsp;<a href="https://docs.google.com/spreadsheet/ccc?key=0AgSIIMVS1puadFNKTmFQT2REUGo1SDUzWV9lV2ltWkE&amp;usp=sharing" target="_blank">This spreadsheet (Published Datasets sheet)</a> gives runtime information for using the <font face="courier new, monospace">score</font> program in the C++ version of URLearning to calculate, prune, and write the scores to disk. &nbsp;Due to technical problems with the automation, some of the running times may not be exactly correct; I am working to correct these. &nbsp;The following scores are available:</font></li>
<ul><li><font size="2"><a href="http://cs.helsinki.fi/u/bmmalone/urlearning-files/data.published.bic.pss.tar.gz">BIC</a> (used in most of our papers)</font></li>
<li><font size="2">BDeu, equivalent sample size: <a href="http://cs.helsinki.fi/u/bmmalone/urlearning-files/data.published.bdeu.01.pss.tar.gz">0.1</a>, <a href="http://cs.helsinki.fi/u/bmmalone/urlearning-files/data.published.bdeu.1.pss.tar.gz">1</a>, <a href="http://cs.helsinki.fi/u/bmmalone/urlearning-files/data.published.bdeu.10.pss.tar.gz">10</a>, <a href="http://cs.helsinki.fi/u/bmmalone/urlearning-files/data.published.bdeu.100.pss.tar.gz">100</a></font></li>
<li><a href="http://cs.helsinki.fi/u/bmmalone/urlearning-files/data.published.fnml.pss.tar.gz">fNML</a><br>
<br>
</li></ul>
<li><font size="2"><a href="http://cs.helsinki.fi/u/bmmalone/urlearning-files/data.tar.gz">Unpublished datasets</a> - The preprocessing scheme described for the published datasets may not reflect real-world usage scenarios, so I recalculated the scores using a more realistic preprocessing scheme. &nbsp;First, records with missing values were removed. &nbsp;Next, continuous variables were discretized according to the <a href="http://cosco.hiit.fi/Articles/aistat07.pdf" target="_blank">NML-optimal histogram</a> of Kontkanen and Myllymaki. &nbsp;Then, categorical variables with more than 10 values were typically removed (a note is included if some other step was taken to handle categorical variables with large cardinality). &nbsp;Finally, variables with a single value were removed from the dataset.<br>
<br>
This preprocessing scheme results in more large-cardinality variables than the simple scheme used for the published results. &nbsp;Consequently, it can result in more parent set pruning for scoring functions with a heavy complexity penalty, such as BIC. &nbsp;On the other hand, larger parent sets can be more informative, which can reduce the amount of parent set pruning. &nbsp;An interesting avenue for future work is to consider how this preprocessing affects learning.</font></li></ul>
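<div><font size="2">As a concrete illustration, the mean-binarization scheme used for the published datasets can be sketched in Python. This is a reconstruction for illustration only, not the exact script we used, and the sample data are made up.</font></div>

```python
# Sketch of the mean-binarization preprocessing used for the published
# datasets (an illustrative reconstruction, not the exact script used):
#  1. drop records with missing values,
#  2. threshold continuous columns at their mean,
#  3. map categorical values arbitrarily to integers, then threshold
#     those integer codes at their mean.

def binarize_dataset(rows):
    """rows: list of records, each a list of string values ('' = missing)."""
    # Drop records with missing values.
    rows = [r for r in rows if all(v != '' for v in r)]
    n_cols = len(rows[0])
    out = [[0] * n_cols for _ in rows]
    for j in range(n_cols):
        col = [r[j] for r in rows]
        try:
            vals = [float(v) for v in col]          # continuous column
        except ValueError:
            # Categorical column: arbitrary integer codes, e.g. {0, 1, 2, 3}.
            codes = {v: k for k, v in enumerate(sorted(set(col)))}
            vals = [float(codes[v]) for v in col]
        mean = sum(vals) / len(vals)
        for i, v in enumerate(vals):
            out[i][j] = 1 if v > mean else 0
    return out

data = [["1.5", "red"], ["2.5", "blue"], ["3.5", "red"]]
print(binarize_dataset(data))  # [[0, 1], [0, 0], [1, 1]]
```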
</div>
<div><font size="2"><br>
</font></div>
<div><font size="2"><b><a href="http://www.cs.helsinki.fi/u/bmmalone/papers/uai2013_AnytimeEvaluation.pdf" target="_blank">UAI 2013</a></b></font></div>
<div>
<ul><li><a href="http://www.cs.helsinki.fi/u/bmmalone/urlearning-files/uai2013.experimentSet2.tar.gz" style="font-size:small" target="_blank">Experiment set 2</a><font size="2"> - All of the files necessary to reproduce the experiments in Section 5.4 of the paper, including the synthetic networks (Net folder), sampled datasets (Csv folder) and parent set scores (pss folder). &nbsp;</font><a href="https://docs.google.com/spreadsheet/ccc?key=0AgSIIMVS1puadFNKTmFQT2REUGo1SDUzWV9lV2ltWkE&amp;usp=sharing" style="font-size:small" target="_blank">This spreadsheet (UAI 2013 sheet)</a><font size="2">&nbsp;gives runtime information for using the <font face="courier new, monospace">score</font> program in the C++ version of URLearning to calculate the necessary parent set scores from the sampled datasets. &nbsp;The calculations were run on nodes with 32GB of RAM and two Intel Xeon E5540 2.53GHz CPUs. &nbsp;Each CPU has 4 cores, so one node has a total of 8 cores; with hyperthreading, this gives 16 hardware threads. &nbsp;10 threads were used during the score calculations.</font></li></ul>
<div><font size="2"><a href="http://www.cs.helsinki.fi/u/bmmalone/papers/Fan2014.pdf" target="_blank"><b>AAAI 2014</b></a>, <a href="http://www.cs.helsinki.fi/u/bmmalone/papers/Fan2014a.pdf" target="_blank"><b>UAI 2014</b></a></font></div>
</div>
<div>
<ul><li><font size="2"><a href="http://www.cs.helsinki.fi/u/bmmalone/urlearning-files/uai2014.tar.gz" target="_blank">Published datasets (csv, pss, binary scores)</a> - All of the files to reproduce the experiments in our AAAI 2014 and UAI 2014 papers.</font></li></ul>
</div>
<div><font size="2"><br>
</font></div>
<div><font size="3"><b>File Formats</b></font></div>
<div><font size="2">Unless otherwise noted, the files use the following formats.</font></div>
<div>
<ul><li><font size="2"><i>Hugin net (*.net)</i> - for representing Bayesian networks</font></li>
<li><font size="2"><i>Comma separated value (*.csv)</i> - for datasets. &nbsp;Many of the datasets include a header row that gives the names of the variables, but some do not.</font></li>
<li><font size="2"><i>Parent set scores (*.pss)</i> - for the parent set scores that most exact structure learning programs take as input. Mark Bartlett and I are currently formalizing this format; some notes are available <a href="https://docs.google.com/spreadsheet/ccc?key=0ApQer1cn7xiOdDFlbU5HSnNUMmVZS0xtTkpvOEZoRHc&amp;usp=sharing" target="_blank">in this spreadsheet</a>. &nbsp;Mark has kindly extended the <a href="http://www.cs.york.ac.uk/aig/sw/gobnilp/" target="_blank">GOBNILP</a> program to read this format; to use it, recompile GOBNILP with this version of the <a href="http://www.cs.helsinki.fi/u/bmmalone/urlearning-files/probdata_bn.c" target="_blank">probdata_bn.c</a> file. &nbsp;GOBNILP must then be run with the&nbsp;</font><span style="color:rgb(34,34,34)"><font face="courier new, monospace">-f=pss</font></span><span style="color:rgb(34,34,34);font-family:arial,sans-serif"> command line argument.</span></li></ul>
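<div><font size="2">Since the pss format is still being formalized, the reader below is only a hedged sketch: it assumes the structure commonly used by parent-set score files (a variable count, then per variable a "name numParentSets" header followed by one "score numParents parent..." line per candidate parent set). Check the spreadsheet notes above for the authoritative layout.</font></div>

```python
# Hedged sketch of a parent-set-score (*.pss) reader. The exact pss layout
# is still being formalized (see the spreadsheet notes); this ASSUMES the
# common score-file structure: a variable count on the first line, then for
# each variable a "name numParentSets" header followed by one
# "score numParents parent1 ... parentN" line per candidate parent set.

def read_pss(lines):
    """Return {variable: [(score, parents), ...]} from score-file lines."""
    it = iter(line.strip() for line in lines if line.strip())
    n_vars = int(next(it))
    scores = {}
    for _ in range(n_vars):
        name, count = next(it).split()
        entries = []
        for _ in range(int(count)):
            parts = next(it).split()
            n_parents = int(parts[1])
            entries.append((float(parts[0]), tuple(parts[2:2 + n_parents])))
        scores[name] = entries
    return scores

# Hypothetical two-variable example: A has scores for the empty parent
# set and for {B}; B has a score for the empty parent set only.
example = """2
A 2
-10.5 0
-9.8 1 B
B 1
-12.0 0
"""
print(read_pss(example.splitlines()))
```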
</div>
</div>
</div>

</body>

</html>
