\documentclass[11pt]{article}
\usepackage{hyperref}
\setlength{\oddsidemargin}{0pt}
\setlength{\textwidth}{460pt}
\setlength{\topmargin}{-0.5in}
\setlength{\textheight}{8in}

\begin{document}

\title{COMS 6111: Project 3}
\author{Ashish Tomar ({\tt ast2124@columbia.edu}) \\ and Ben Warfield ({\tt bbw2108@columbia.edu})}
\date{April 30, 2009}
\maketitle

\section{List of files submitted}

\begin{itemize}\addtolength{\itemsep}{-0.75\baselineskip}
\item[-] src/coms6111/astbbw/dataminer/
\begin{itemize}\addtolength{\itemsep}{-0.5\baselineskip}
\item[-----]DataMine.java
\item[-----]Miner.java
\item[-----]GlimpseBitIndexed.java
\item[-----]NaiveGlimpseMine.java
\item[-----]MineDumper.java
\end{itemize}
\item[-]lib/
\begin{itemize}\addtolength{\itemsep}{-0.5\baselineskip}
\item[--]log4j-1.2.15.jar
\end{itemize}

\item[-]build.xml
\item[-]Makefile
\item[-]build\_index.sh
\item[-]run\_miner.sh
\item[-]run\_miner\_noserver.sh
\item[-]run\_miner\_4args.sh
\item[-]Project3\_Writeup.pdf
\end{itemize}

\section{How to run the program}
To compile the project from the command prompt, simply type {\tt make} or {\tt make build}. 
This will use the default target and compile the project (using the {\tt ant} program).

\subsection{Normal execution}

To run the program there are two shell scripts, {\tt run\_miner.sh} and {\tt run\_miner\_4args.sh}.

Both scripts in turn require four inputs from the user:
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item the dataset name
\item a minimum support value {\it min\_sup} (a number between 0 and 1)
\item a word {\it w} (for getting the matching association rules)
\item a minimum confidence value {\it min\_conf} (a value between 0 and 1) 
\end{enumerate}

The {\tt run\_miner.sh} script takes a single argument (the dataset name) on the command line, then prompts the user for the other required inputs.  The {\tt run\_miner\_4args.sh} script takes anywhere between one and four inputs as command-line arguments, and prompts for any that are left out. For example, to run a simple search on the yahoo dataset without user intervention, one might use the command
{\tt ./run\_miner\_4args.sh yahoo 0.08 touchdowns 0.5}.  This facility is very useful for benchmarking, since it removes user typing and reaction speed from the execution time.

We also include a third run script, {\tt run\_miner\_noserver.sh}, which is almost but not quite identical to {\tt run\_miner.sh}: please see \autoref{sec:VMsettings} for details.

\subsection{Testing and benchmarking}
Since the indexing process is somewhat time-consuming, the program can be run using a pre-computed index stored in a text file.  
If the first argument to the main program ends with ``.txt'', the program will read a pre-computed index from the named file instead of building it from scratch.  
The pre-computed index file can be generated using the MineDumper class, using the included {\tt build\_index.sh} shell script:


{\tt  ./build\_index.sh yahoo yahoo\_dump.txt }

(Note that since this program is intended for use during testing, it emits somewhat more logging output to standard output than is strictly necessary.)
Once this index is built, the standard invocation above can be changed to

 {\tt ./run\_miner\_4args.sh yahoo\_dump.txt 0.08 touchdowns 0.5}.

\section{Basic Strategy}
\subsection{Algorithm}
During the indexing phase, we generate support information for every term that appears in at least one document in the data set, before discarding the top 397, as specified by the assignment.

For the first round of the {\it a priori} algorithm, the remaining 1-itemsets are simply filtered based on the user-specified cutoff, then sorted alphabetically.  For rounds two and three, we use the nested loop approach outlined in the {\it a priori} paper, with one modification: because our data (once the COMMON terms are removed) is fairly sparse, 
we store the supporting transaction-set information in each itemset, and calculate the support of each candidate directly
from its parent itemsets, 
rather than assembling all the candidates, then consulting each transaction to see which candidate itemsets it supports.
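As a concrete (if simplified) illustration of this difference, the support of a candidate itemset can be computed directly from the supporting-transaction sets of its two parents.  The class and method names here are hypothetical, and the actual implementation uses more compact set representations, described below:

```java
import java.util.*;

// Simplified sketch (hypothetical names): the candidate's support set is the
// intersection of its parents' supporting-transaction sets, computed directly
// rather than by scanning every transaction against every candidate.
class CandidateSupport {
    static Set<Integer> candidateSupport(Set<Integer> parentA, Set<Integer> parentB) {
        Set<Integer> result = new TreeSet<>(parentA);
        result.retainAll(parentB); // keep only transactions supporting both parents
        return result;
    }
}
```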
\subsection{Data Structures}

The internal data structures for this assignment were 
designed to allow easy substitution of multiple implementations of the internal logic, for comparison purposes.  
The main logic is implemented in the {\tt DataMine} abstract class, which is extended by the {\tt NaiveGlimpseMine } and {\tt GlimpseBitIndexed} classes.  Information about itemsets and their support is encapsulated in the {\tt ItemSet} class, which is an inner class of {\tt DataMine}, and is extended by {\tt ItemBitSet } and {\tt ItemSetInt} (both of which are inner classes contained within {\tt GlimpseBitIndexed}).  The principal method of the {\tt ItemSet} family of classes is {\tt intersect}, which finds the intersection of the support of two large itemsets, and returns a new ItemSet with the appropriate support information (or, if the support is too low, returns {\tt null}).  The original {\tt ItemSet } implementation simply calls back to the enclosing {\tt DataMine} instance to calculate the support of the new itemset, but the two subclasses do the calculation internally using the itemsets' stored support information, as described above.

\subsection{Indexing}
We begin building our index by calling {\tt glimpseindex} on the dataset, storing the resulting index files in a temporary directory. From the files generated by {\tt glimpseindex} we obtain the total size of the dataset, and we parse the
.glimpse\_index file to extract every single term that appears in the dataset, storing the terms in a TreeSet.
We then call {\tt glimpse} once per term in the TreeSet to retrieve
the list of documents containing that term.
From these results we build a new TreeSet of {\tt ItemSetInt} objects, each of which holds one term,
its support value,
and the numbers of the documents that contain it.
From this TreeSet (sorted by document frequency), we find the 397 COMMON terms, which are printed and then removed from the set of known terms.
The details of the {\tt ItemSetInt} class are described in slightly more depth in the following section.
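The COMMON-term step can be sketched as follows (names hypothetical; the real code works with {\tt ItemSetInt} objects rather than a bare frequency map):

```java
import java.util.*;

// Hypothetical sketch of the COMMON-term step: given the document frequency
// of every term, drop the most frequent terms (397 in the assignment) from
// the working vocabulary.
class CommonTerms {
    static SortedSet<String> removeCommon(Map<String, Integer> docFreq, int howMany) {
        List<String> byFreq = new ArrayList<>(docFreq.keySet());
        byFreq.sort((a, b) -> Integer.compare(docFreq.get(b), docFreq.get(a))); // most frequent first
        SortedSet<String> kept = new TreeSet<>(docFreq.keySet());
        byFreq.stream().limit(howMany).forEach(kept::remove); // discard the COMMON terms
        return kept;
    }
}
```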


\section{Details and Refinements}
\subsection{Set implementation}\label{sec:ItemSetImpl}
The initial implementation of the transaction-membership set ({\tt ItemBitSet}) used the Java BitSet class to store which document IDs were included in the set of support for each itemset.  This is a conceptually simple approach, and initial performance testing using the Yahoo dataset showed very good results.  On the much larger and sparser 20newsgroups set, however, performance was much worse than expected.

This is easily explained by comparing the numbers involved: while the Yahoo dataset contains only 200 documents (and the maximum support in the WORDS set is over 11\%), the 20newsgroups set contains over 18,000 documents, and the highest support remaining after the COMMON terms are removed is under 5\%.  In practical terms, this means that a bit vector containing information about the support of one itemset will use up
to 2252 bytes, while containing no more than 900 set bits.  Taking the intersection of two such BitSet objects requires a number of operations linear in the number of bytes in the smaller set, regardless of the number of bits that are set.

Representing the set as an ordered list of integer document numbers allows for several improvements.  First and foremost, it makes the number of comparisons required to take an intersection a function of the size of the sets, rather than of the size of the overall document collection.  Conversely, it makes the size of the set representation proportional to the membership of the set, which should improve execution time simply by reducing the amount of data to be moved to and from the processor cache. Finally, since the intersection code is hand-coded, it can be tailored to the demands of this specific application in ways that the BitSet implementation could not (discussed further in \autoref{sec:earlyexit} below).

Changing the support-set implementation from BitSet to this implementation reduced the test execution time from 137-142 seconds of calculation time to 75-78 seconds.

As a further refinement on this method, since the maximum document ID in the 20newsgroups set is less than 32,000, the document IDs can be stored as short integers instead of standard integers, which has no algorithmic effect but should further improve cache locality.  Experimentally, making this change further reduced the times above, from 75-78 seconds to 69-70 seconds.
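A minimal sketch of this ordered-list intersection (method names are ours, and the early-exit refinement of \autoref{sec:earlyexit} is omitted here):

```java
// Sketch of the ordered-list set intersection described above: document IDs
// are stored as sorted short[] arrays and intersected with a linear merge,
// so the cost depends on the sizes of the two sets, not on the size of the
// overall document collection.
class SortedIntersect {
    static short[] intersect(short[] a, short[] b) {
        short[] out = new short[Math.min(a.length, b.length)];
        int i = 0, j = 0, n = 0;
        while (i < a.length && j < b.length) {
            if (a[i] < b[j]) i++;            // a's ID missing from b: advance a
            else if (a[i] > b[j]) j++;       // b's ID missing from a: advance b
            else { out[n++] = a[i]; i++; j++; } // common document ID
        }
        return java.util.Arrays.copyOf(out, n); // trim to actual size
    }
}
```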

\subsection{Candidate Generation}
%FIXME need to make nested loop approach explicit

The {\it apriori-gen } function for finding candidate ($k+1$)-itemsets requires the generation of ordered pairs of {\it k}-itemsets with the properties that (1) the first $k-1$ items in each set are identical, and (2) the {\it k}th item of the second set be strictly greater (alphabetically speaking) than that of the first set.  If the itemsets are stored in alphabetical order (which they are), these pairs are easy to generate using a simple nested loop: for each $k$-itemset in our list, its eligible partners
begin with the one following it in the sorted list, and end with the first itemset in the list which violates property (1).  
% make more explicit?  Probably not worth it
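The nested loop can be sketched as follows (a simplified, string-comparison version; names hypothetical, with itemsets represented as sorted term arrays):

```java
import java.util.*;

// Hedged sketch of the nested-loop apriori-gen step: k-itemsets (sorted
// String[] term lists, kept in lexicographic order) are joined with every
// later itemset sharing the same (k-1)-item prefix, which by the ordering
// also guarantees property (2).
class AprioriGen {
    static List<String[]> candidates(List<String[]> sets) {
        List<String[]> out = new ArrayList<>();
        for (int i = 0; i < sets.size(); i++) {
            String[] first = sets.get(i);
            for (int j = i + 1; j < sets.size(); j++) {
                String[] second = sets.get(j);
                if (!samePrefix(first, second)) break; // property (1) violated: end of partners
                String[] cand = Arrays.copyOf(first, first.length + 1);
                cand[first.length] = second[second.length - 1]; // append the larger last term
                out.add(cand);
            }
        }
        return out;
    }
    private static boolean samePrefix(String[] a, String[] b) {
        for (int k = 0; k < a.length - 1; k++)
            if (!a[k].equals(b[k])) return false;
        return true;
    }
}
```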

The simplest implementation of the process described above uses a large number of string comparisons.  Each comparison may be very fast (since many of the strings involved should be identical, which \emph{can} allow Java to make them all pointers to the same underlying object), but in aggregate they still carry potentially high overhead.

As an alternative, we can pre-cache, for each $(k-1)$-itemset, the array index at which an itemset violating property (1) will first be found, and simply iterate from the current index to that one, with no string comparisons required.
At minimal cost, this reduces the cost of redundancy elimination to
one hash lookup per \emph{outer} loop iteration, plus the normal overhead of iterating over an array with a for-loop.


\subsection{Candidate Elimination}

The {\it apriori-gen } function also requires removal of all candidate itemsets that contain subsets that are not large.
When we are limited to 3-itemsets as our largest candidate sets, this becomes very simple to arrange: if a candidate ``A B C'' is generated from the large itemsets ``A B'' and ``A C'', then ``A'', ``B'', and ``C'' must themselves be large, so only ``B C'' needs to be checked.
This is made simple by storing each known large itemset in a hash keyed on a unique representation of the terms it contains: if there is no ItemSet object stored under the key ``B C'', then we can eliminate this candidate.
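A minimal sketch of this check, assuming large itemsets are keyed on a space-separated string of their terms (the exact key format in our code may differ):

```java
import java.util.*;

// Sketch of the 3-itemset pruning step: large itemsets are stored in a hash
// keyed on a canonical string of their terms, so checking the one remaining
// subset ("B C" for a candidate built from "A B" and "A C") is a single lookup.
class SubsetCheck {
    static boolean survives(Set<String> largeKeys, String b, String c) {
        // "A B" and "A C" are known large by construction; only "B C" can fail
        return largeKeys.contains(b + " " + c);
    }
}
```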

\subsection{Early evaluation exit}\label{sec:earlyexit}

We include one further shortcut, alluded to in \autoref{sec:ItemSetImpl} above.

When we begin evaluation of a pair of document sets, we know the size of both sets, and the size that the final intersection must be in order for the combined itemset to meet our support cutoff.  As we progress through the merge process, we may reach a point where, even if all remaining unexamined document IDs were common between the two sets, the intersection could not possibly be large enough.  In this case, we can abort the evaluation procedure early.
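A sketch of the early-exit test added to the ordered-list merge (names hypothetical; {\tt minHits} is the smallest intersection size that still meets the support cutoff):

```java
// Early-exit sketch: abort and return null as soon as even a perfect overlap
// of the remaining unexamined IDs could not reach the required size minHits.
class EarlyExitIntersect {
    static short[] intersect(short[] a, short[] b, int minHits) {
        short[] out = new short[Math.min(a.length, b.length)];
        int i = 0, j = 0, n = 0;
        while (i < a.length && j < b.length) {
            // best case: every remaining element of the shorter tail matches
            if (n + Math.min(a.length - i, b.length - j) < minHits) return null;
            if (a[i] < b[j]) i++;
            else if (a[i] > b[j]) j++;
            else { out[n++] = a[i]; i++; j++; }
        }
        return n >= minHits ? java.util.Arrays.copyOf(out, n) : null;
    }
}
```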

When first implemented, this optimization produced a roughly 40-50\% speedup, but 
this is somewhat misleading: this change was made before Candidate Elimination was fully implemented,
which left many more poor candidates to be evaluated (and the poorer the candidate, the more time can be saved by early exit).  The actual speedup provided is probably closer to 20\%-30\%, which remains significant.

\subsection{VM parameter choices}\label{sec:VMsettings}
During early development, we found it necessary to substantially increase the amount of memory available to the Java VM, but the final version runs acceptably with the default VM settings (given reasonable input parameters).

However, there is one other noticeable trade-off that can be made by changing the VM options; using the ``server'' settings of the HotSpot JVM slows down the indexing process noticeably, but improves performance of the {\it a priori} algorithm by roughly 25\%.  It is not immediately obvious why this should be the case, but both differences are noticeable and repeatable.  Accordingly, both of our standard run scripts use the {\tt -server} flag.  If this is considered unfair (or the tradeoff unacceptable), the {\tt run\_miner\_noserver.sh} script can be used instead (or, of course, the original script can be edited).

\section{Overall Performance}



\subsection{Pre-indexed performance}
All of the timing numbers cited above use the same standardized parameters: the program was run on a pre-calculated index of the ``20newsgroups'' dataset, and tasked with finding rules involving the term ``armenian'' with 0.5\% support and 50\% confidence (otherwise put, the command-line was {\tt run\_miner\_4args.sh 20newsgroups\_dump.txt 0.005 armenian 0.5}).  This produces 8896 large itemsets, 3023 of which are 3-itemsets and 2589 of which are 2-itemsets.  The final (and shortest observed) time for executing this command was 18 seconds, of which 14.75 were spent on the {\it a priori} loop.

Testing on the Yahoo set generally used a minimum support of 8\% (though we tried various levels at various points), and the term ``quarterback'': the final {\it a priori} loop time was under a second.

To evaluate the performance of this implementation under more difficult circumstances, we lowered the minimum support cutoff to 0.1\%, using the same {\it w} and {\it minconf} parameters.  
At this level of support, there are 11,958  large 1-itemsets and 629,307 large 2-itemsets.  
For round three, the generation function finds 187,097,223 candidates, of which roughly half are eliminated before the intersection is calculated (93,311,801 intersections are actually calculated).
From this total, we arrive at 2,177,701 large itemsets of exactly three terms, or roughly 2.8 million total, having used (at peak) approximately 710MB of memory and just under 10 minutes of total execution time (8 minutes spent on the {\it a priori} function itself).
\subsection{Performance with indexing}

The final indexing time for the Yahoo data set is approximately 90 seconds; indexing the 20newsgroups dataset requires something between 33 and 45 minutes, depending on whether or not the {\tt -server} flag is used, and with some random variation depending on system load.  In both cases, the bulk of the time is system time, spent on a large number of calls to the {\tt glimpse} program.

\end{document}
