\documentclass[11pt,a4paper]{article}
\usepackage[latin1]{inputenc}
\usepackage{amsmath,amsfonts,amssymb,subfigure,url,listings,fullpage,graphicx}
\include{biblio}
\def\code#1{\lstinline[basicstyle=\bfseries]!#1!}

\begin{document}
\lstset{language=Java}

\begin{center}
\large{\bf{COMS30205: Advanced Software Development - Stage 3}}
\\\Large{\bf{Java Pattern Matching Library for UTF-8 Encoded Data}}
\vspace{6 mm}
\\\normalsize{Leon Atkins (la5520) and James Hanlon (jh5330)}
\end{center}


\section*{Summary of Achievements in Stage 2}

Our work in stage 2 focused on implementing the specification we had
outlined from our research in stage 1. For the exact matching we wrote
implementations of Boyer-Moore, Horspool and Knuth-Morris-Pratt. Each had
several variations so that they could efficiently compute results to return to
the methods in \code{ExactSearch}. All data created in the preprocessing
phases for these algorithms could be serialised and saved. For matching over
large files we designed and wrote a system whereby single ``chunks'' of the
file could be read into memory and searched one at a time.
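As an illustration, a chunked scan of this kind might look like the following sketch. The class and method names are ours, not the library's API, and a naive inner search stands in for the real algorithms; the key point is that the last $m-1$ bytes of each chunk are carried over so a match spanning a chunk boundary is not missed.

```java
import java.io.*;

public class ChunkedSearch {
    // Hypothetical sketch: scan a large file one chunk at a time, carrying
    // the last m-1 bytes over so matches spanning a boundary are not missed.
    public static int countOccurrences(File file, byte[] pattern, int chunkSize)
            throws IOException {
        int m = pattern.length, total = 0, carried = 0;
        byte[] buf = new byte[chunkSize + m - 1];
        try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
            int read;
            while ((read = in.read(buf, carried, buf.length - carried)) > 0) {
                int len = carried + read;
                total += naiveCount(buf, len, pattern);   // search this chunk
                carried = Math.min(m - 1, len);           // overlap for next chunk
                System.arraycopy(buf, len - carried, buf, 0, carried);
            }
        }
        return total;
    }

    // A match lying wholly inside the m-1 byte overlap is impossible (it
    // would need m bytes), so no occurrence is counted twice.
    private static int naiveCount(byte[] buf, int len, byte[] p) {
        int count = 0;
        outer:
        for (int i = 0; i + p.length <= len; i++) {
            for (int j = 0; j < p.length; j++)
                if (buf[i + j] != p[j]) continue outer;
            count++;
        }
        return count;
    }
}
```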

In order for us to adopt an agile approach to software development, our
specification from stage 1 gave a broad picture, and did not focus on small
details and specifics. Around halfway through development we found that,
because we had made changes to what we had originally intended (for instance,
including the Boyer-Moore algorithm), the class structure specified by the plan
no longer fit the modifications made to the library. We decided to
refactor our design so that our code would be object oriented and remain
flexible. The final part of stage 2 was spent writing documentation in the
source files for Javadoc.

\section*{Discussion of Progress Since}

The final part of the library left to implement was the fuzzy matching
functionality. We planned originally to do this using suffix trees and the
Shift-Or algorithm. After further research into suffix trees (and a few Advanced
Algorithms lectures), we found that they were unsuitable for our
requirements because all of the text has to be preprocessed, which incurs a
costly overhead when working with large files.

The problem of matching under the Hamming norm is a difficult one with several
different approaches. One of these uses the Fast Fourier Transform,
achieving a complexity of $O(|\Sigma| n \log n)$. As we are comparing bytes, the
alphabet size $|\Sigma|$ in this case is 256, so for most patterns, which will
be shorter than 256 characters, this performs worse than a naive $O(nm)$
implementation. The best known solution has a complexity of
$O(\sqrt{k}\, n \log m)$ but is dependent on the frequency of symbols in the
pattern. We decided that an adaptation of the counting algorithm for matching
under the Hamming norm presented in \cite{paper} would best suit our needs.

\subsection*{Counting Algorithm}

The algorithm works by preprocessing the pattern to record the indices of each
letter in a hash table. It then steps through each character in the text and,
if it exists in the pattern, looks up the positions at which it occurs. For each
occurrence it increments a value in a match position array corresponding to the
alignment at which those characters line up. The values in the match position
array then give the number of matching characters between the text and pattern
at each alignment, from which the Hamming distance follows directly as the
pattern length minus the match count. Normally, the algorithm creates a match
position array of the length of the text, so implemented this way it would hold
one counter for every byte of the text.

The main modification that we made was to constrain the array to the length of
the pattern. This works by introducing a second array to record the scope of
each of the matches and performing update operations on the match position array
modulo the size of the pattern. We also added code to skip the correct number of
bytes for characters in the text that are encoded over more than a single byte.
Figure \ref{countingcode} shows the pseudo-code for the preprocessing and search
algorithms.
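For reference, the byte-skip computation for multi-byte UTF-8 characters (the role played by \code{calcSkip()}) can be read off the high bits of the lead byte. This is a minimal sketch of our own, not the library's exact code:

```java
public class Utf8 {
    // Minimal sketch of the skip calculation for multi-byte UTF-8
    // characters: the high bits of the lead byte give the character's
    // total length in bytes.
    public static int charLength(byte lead) {
        int b = lead & 0xFF;
        if (b < 0x80)           return 1; // 0xxxxxxx: single byte (ASCII)
        if ((b & 0xE0) == 0xC0) return 2; // 110xxxxx: two bytes
        if ((b & 0xF0) == 0xE0) return 3; // 1110xxxx: three bytes
        if ((b & 0xF8) == 0xF0) return 4; // 11110xxx: four bytes
        return 1; // continuation or invalid byte: resynchronise by one
    }
}
```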

In the worst case, the text and pattern would consist of just one unique
character causing the algorithm to run in $O(nm)$ time, where $m$ is the length
of the pattern and $n$ is the length of the text. In practice, though, the
pattern and text are likely to contain many different characters, which improves
considerably on the worst-case performance. In terms of space, the algorithm
requires $m$ bytes for each array. The preprocessing stage runs in
$O(m)$ time. Assuming a load factor of 0.75 on the hash table and a byte per
pointer in the linked lists, the table will require, for a worst-case pattern
with no recurring characters, about $1.25m + 2m$ bytes of space.
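In Java, the pattern-length constraint might be realised along these lines. This is a sketch under our own naming, not the library's exact code; it assumes $0 \le k < m$ and omits the multi-byte skipping step for brevity.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CountingMatcher {
    // Sketch of the modified counting algorithm: the count and scope arrays
    // are only m entries long, indexed modulo the pattern length.
    public static List<Integer> search(byte[] text, byte[] pattern, int k) {
        int m = pattern.length;
        // Preprocess: map each pattern byte to the indices where it occurs.
        Map<Byte, List<Integer>> positions = new HashMap<>();
        for (int j = 0; j < m; j++)
            positions.computeIfAbsent(pattern[j], b -> new ArrayList<>()).add(j);

        int[] count = new int[m];  // matching characters per active alignment
        int[] scope = new int[m];  // which alignment start each slot tracks
        Arrays.fill(scope, -1);
        List<Integer> matches = new ArrayList<>();

        for (int i = 0; i < text.length; i++) {
            List<Integer> occ = positions.get(text[i]);
            if (occ == null) continue;
            for (int j : occ) {
                int start = i - j;  // alignment placing pattern[j] under text[i]
                if (start < 0 || start + m > text.length) continue;
                int slot = start % m;
                if (scope[slot] != start) {  // slot recycled for a new alignment
                    scope[slot] = start;
                    count[slot] = 0;
                }
                if (++count[slot] == m - k)  // threshold reached: report once
                    matches.add(start);
            }
        }
        return matches;
    }
}
```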

\begin{figure}

\begin{lstlisting}
PREPROCESS(P)
hashtable H
for i=0 to P.length-1
    H.add(P[i], i)
\end{lstlisting}

\begin{lstlisting}
k-MISMATCH-COUNTING-SEARCH(byte[] T, hashtable H, int distance)
for i=0 to T.length-1
    L = H.get(T[i])
    if(L==NULL)
        i += calcSkip(T[i])-1;
        continue;
    for j=0 to L.size()-1
        p = (i - L.get(j)) % m;
        if(matchPosition[p]==0)
             matchPosition[p] = 1;
             scope[p] = i;
        else if(i >= scope[p]+m)
             matchPosition[p] = 1;
             scope[p] = i;
        else
             matchPosition[p]++;
        if(matchPosition[p] >= m-distance)
             report(p+1);
\end{lstlisting}

\caption{Pseudo-code for the modified $k$-mismatch counting algorithm. The
functions \code{add()} and \code{get()} insert and retrieve elements from the
hash table. The function \code{calcSkip()} calculates the number of bytes a
character spans and \code{report()} reports a match.}
\label{countingcode}
\end{figure}

\subsection*{Bitap Algorithm}

The Bitap (Shift-Or, Shift-And) algorithm is an efficient algorithm for both
approximate (edit distance) and exact matching, and is the algorithm implemented
in the \code{agrep} utility. It performs well only over small patterns, as it
takes advantage of bitwise operations to increase its performance: the size of
a pattern is constrained to the word size of the computer, and for longer
patterns the performance drops considerably.

The algorithm works by computing a set of bit-masks for the pattern, containing
one bit for each element of the pattern. For the inexact case, matching with $k$
mismatches, we hold an array $R$ of $k+1$ bit-vectors. Element $R^d$ holds a
bitwise representation of the prefixes of the pattern that match a suffix of
the text read so far with at most $d$ errors.

A good and more detailed description of the algorithm for exact matching is
given at \url{http://www-igm.univ-mlv.fr/~lecroq/string/node6.html}.
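As a concrete illustration of the exact case, a Shift-And matcher fits in a few lines of Java. This is our own sketch rather than the library's implementation, and it is limited to patterns of at most 64 bytes by the width of a \code{long}:

```java
public class Bitap {
    // Sketch of the exact-matching (Shift-And) variant. Returns the index
    // of the first occurrence of the pattern in the text, or -1 if none.
    public static int find(byte[] text, byte[] pattern) {
        int m = pattern.length;
        if (m == 0 || m > 64)
            throw new IllegalArgumentException("pattern length must be 1..64");
        long[] mask = new long[256];  // mask[b]: bit j set iff pattern[j] == b
        for (int j = 0; j < m; j++)
            mask[pattern[j] & 0xFF] |= 1L << j;
        long state = 0;   // bit j set iff pattern[0..j] matches text ending here
        long accept = 1L << (m - 1);
        for (int i = 0; i < text.length; i++) {
            state = ((state << 1) | 1L) & mask[text[i] & 0xFF];
            if ((state & accept) != 0)
                return i - m + 1;  // whole pattern matched
        }
        return -1;
    }
}
```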

\subsection*{Other Work}

We made one further change to the structure of the library by including a
utilities file to contain static methods used in a number of different places.
For instance, it includes a method to read and return all of the bytes from a
file, which is used to read in a pattern from a file. We spent the remainder of
our time documenting all of the new code in source file comments for Javadoc.
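A static helper of this kind might look as follows. The class and method names are ours for illustration, not necessarily those in the library:

```java
import java.io.*;

public class Utilities {
    // Hypothetical sketch of the static helper described above: read every
    // byte of a file into an array, e.g. to load a pattern from a file.
    public static byte[] readAllBytes(File file) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0)
                out.write(buf, 0, n);  // accumulate each block read
        }
        return out.toByteArray();
    }
}
```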

\subsection*{Tools}

We have used a number of different tools throughout the project. The most
contentious of these between the two of us was Eclipse. One of us felt that
using an IDE was helpful, especially when we were dealing with parts of the Java
library that we were not familiar with, whilst the other member was happier
using a text editor and the online API documentation. Whilst we did not
encounter any problems by using the two different tools, there were a few things
that we could have used but rejected. For example, Eclipse has a built in code
formatter, which will format a variety of different layout styles into a unified
type. This would have been useful for making the code readable and consistent
between the two of us, but due to our own preferences we decided not to use it.

As mentioned in our stage 2 report, we have made use of Google Code, and
consequently Subversion (SVN). This was particularly useful for us. It meant
that we always had a central working copy of the code and didn't have to
manually merge different versions. It also meant that we had a very reliable
backup of our code. Finally, it meant that we could both work separately on the
code even when we were in different locations and we wouldn't have problems with
keeping everything synchronised. 

We also used Javadoc again, initially because it seems to be the standard way
of documenting code in Java, but also because in stage 2 we found it a neat and
practical way of documenting our code for each other to read.

We decided not to use JUnit as our testing framework. There were two main
reasons for this. The first was that, given the complexity of this stage, we did
not have the time to learn a new tool, however gentle its learning curve would
have been. The second was that it made sense for our own testing framework to
double as a kind of demo program to illustrate our code for the purposes of the
submission.


\subsection*{Techniques}

%\# Techniques: which techniques, particularly extreme programming or
%related ones, did you use or reject?

We used a variety of techniques over the course of stage 3. We kept the same
strategy as we did in stage 2, which was pair programming in an agile setting.
Although we aimed to do as much pair programming as possible, we were also aware
of the fact that we each had different commitments, which meant that we would
not always be available at the same time as each other to do some work. As such,
it made sense to program together when the opportunity arose, but not to be
averse to working individually when needed. As we were documenting using
Javadocs, and we had a clear roadmap and a good idea of what the other person
was working on, as well as SVN logs, it was always easy for the other person to
keep up to date with developments. 

We again took advantage of the opportunity to do some small code restructuring
in stage 3. We found, after implementing our \code{FuzzySearchInterface}, that
there were several methods which were present in both this and our exact
searching interface. Consequently, we merged these into a utilities file and
made the methods static, so that we could call them without having to
instantiate the Utilities class. 

\subsection*{The Final Product}

%\# Final Product: what have you achieved in terms of the code written
%and its structure, did you meet your aims, and to what extent can you
%demonstrate results?

We have achieved, in our opinion, a good object oriented structure for our code.
We feel that the code should be readily extensible, both under the original
contract of our design, but also if people wanted to modify the code for another
purpose. This was part of our reason for releasing the code under the GPL on
Google Code. Although we didn't have the time to test comprehensively for bugs,
or to exhaustively implement many of the algorithms that are out there, by
releasing the code we hope that many people can contribute their ideas and
develop it into something more useful than its original scope allowed.

We did not implement as much as we expected in stage 3. We had planned for many
different edit distance calculations to be included. Unfortunately, we found
that finding edit distances over large portions of text efficiently was a
difficult problem. Many of the algorithms were complex, and required
preprocessing the text array, which with large files is unreasonable. As this
was part of our brief we had to look at modifying algorithms to fit our purpose.
We were particularly happy with our counting algorithm, which employed a
reasonably simple trick to achieve a good space complexity and a reasonable time
complexity, without a large preprocessing stage. The Bitap algorithm, however,
gave us many more problems, and as such its implementation is not in the state
we would have liked for this release. Part of this is due to the unforeseen
complexity of inexact matching, a topic we were not familiar with before the
project, and part is to do with the complexity of the Bitap algorithm itself.
Overall, however, we are pleased with the progress we have made on the library,
and have found the challenge both interesting and rewarding.

\section*{Pair Experiences}

We think that the project was overall, a very positive experience. It gave us
the opportunity to experiment with tools on a real project, as well as
experience working as part of a team. 

An interesting experience we had in stage 3 was distance working. As mentioned
by one of the guest lecturers, working as part of a team from different places
is becoming increasingly common and it was good to get some experience of this.
We found that using instant messenger was a useful way to keep in touch, but
that really, telephone conversations and e-mailed pictures were the best way to
discuss ideas. It is difficult to fully get across difficulties by typing alone,
and certainly we feel it must represent a challenge to those who work
collaboratively from a distance in their work environment. We did find that, as
mentioned earlier, tools such as SVN and Javadoc gave us a standardised way to
communicate ideas and transmit code. Without these, working on separate
computers and not being able to discuss ideas face to face would slow down the
development process considerably. As it was however, we found that it was
possible to work separately when needed without too many problems. It was
definitely preferable to work in the same place when possible, though, and the
use of these tools does not provide a complete substitute for physical presence.

\section*{Demonstration of the Library}

The file \code{Test.java} gives a demonstration of the functionality of the
library and can be used as follows:

\begin{verbatim}
java Test <file.txt> <query> <distance> -run
java Test <file.txt> <query> <distance> -time <iterations>
\end{verbatim}

The input file to search should be given as the first argument, followed by the
query which can be specified as a string or from a file, depending on the flag
used, followed by the distance for the fuzzy matching. The final arguments
specify whether it is run as a demonstration or a test to observe performance.

\begin{thebibliography}{10}

\bibitem{paper}
 Karl Abrahamson,
 \newblock Generalized String Matching,
 \newblock SIAM Journal on Computing, 1987.

\end{thebibliography}

\end{document}

