<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html
     PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
     "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<title>LingPipe: Classification Tutorial</title>
<meta http-equiv="Content-type"
      content="application/xhtml+xml; charset=utf-8"/>
<meta http-equiv="Content-Language"
      content="en"/>
<link href="../../../web/css/lp-site.css"
      type="text/css"
      rel="stylesheet"
      title="lp-site"
      media="screen,projection,tv"/>

<link href="../../../web/css/lp-site-print.css"
      title="lp-site"
      type="text/css"
      rel="stylesheet"
      media="print,handheld,tty,aural,braille,embossed"/>
</head>

<body>

<div id="header">
<h1 id="product">LingPipe</h1><h1 id="pagetitle">Classification Tutorial</h1>
<a id="logo"
   href="http://alias-i.com/"
  ><img src="../../../web/img/logo-small.gif" alt="alias-i logo"/>
</a>
</div><!-- head -->


<div id="navig">

<!-- set class="current" for current link -->
<ul>
<li><a href="../../../index.html">home</a></li>

<li><a href="../../../web/demos.html">demos</a></li>

<li><a href="../../../web/licensing.html">license</a></li>

<li>download
<ul>
<li><a href="../../../web/download.html">lingpipe core</a></li>
<li><a href="../../../web/models.html">models</a></li>
</ul>
</li>

<li>docs
<ul>
<li><a href="../../../web/install.html">install</a></li>
<li><a class="current" href="../read-me.html">tutorials</a>
<ul>
<li><a class="current" href="../classify/read-me.html">classification</a></li>
<li><a href="../ne/read-me.html">named entity recognition</a></li>
<li><a href="../cluster/read-me.html">clustering</a></li>
<li><a href="../posTags/read-me.html">part of speech</a></li>
<li><a href="../sentences/read-me.html">sentences</a></li>
<li><a href="../querySpellChecker/read-me.html">spelling correction</a></li>
<li><a href="../stringCompare/read-me.html">string comparison</a></li>
<li><a href="../interestingPhrases/read-me.html">significant phrases</a></li>
<li><a href="../lm/read-me.html">character language models</a></li>
<li><a href="../db/read-me.html">database text mining</a></li>
<li><a href="../chineseTokens/read-me.html">chinese word segmentation</a></li>
<li><a href="../hyphenation/read-me.html">hyphenation and syllabification</a></li>
<li><a href="../sentiment/read-me.html">sentiment analysis</a></li>
<li><a href="../langid/read-me.html">language identification</a></li>
<li><a href="../wordSense/read-me.html">word sense disambiguation</a></li>
<li><a href="../svd/read-me.html">singular value decomposition</a></li>
<li><a href="../logistic-regression/read-me.html">logistic regression</a></li>
<li><a href="../crf/read-me.html">conditional random fields</a></li>
<li><a href="../em/read-me.html">expectation maximization</a></li>
<li><a href="../eclipse/read-me.html">eclipse</a></li>
</ul>
</li>
<li><a href="../../../docs/api/index.html">javadoc</a></li>
<li><a href="../../../web/book.html">textbook</a></li>
</ul>
</li>

<li>community
<ul>
<li><a href="../../../web/customers.html">customers</a></li>
<li><a href="http://groups.yahoo.com/group/LingPipe/">newsgroup</a></li>
<li><a href="http://lingpipe-blog.com/">blog</a></li>
<li><a href="../../../web/bugs.html">bugs</a></li>
<li><a href="../../../web/sandbox.html">sandbox</a></li>
<li><a href="../../../web/competition.html">competition</a></li>
<li><a href="../../../web/citations.html">citations</a></li>
</ul>
</li>

<li><a href="../../../web/contact.html">contact</a></li>

<li><a href="../../../web/about.html">about alias-i</a></li>
</ul>

<div class="search">
<form action="http://www.google.com/search">
<p>
<input type="hidden" name="hl" value="en" />
<input type="hidden" name="ie" value="UTF-8" />
<input type="hidden" name="oe" value="UTF-8" />
<input type="hidden" name="sitesearch" value="alias-i.com" />
<input class="query" size="10%" name="q" value="" />
<br />
<input class="submit" type="submit" value="search" name="submit" />
<span style="font-size:.6em; color:#888">by&nbsp;Google</span>
</p>
</form>
</div>

</div><!-- navig -->


<div id="content" class="content">


<h2>What is Text Classification?</h2>
<p>
Text classification is the task of assigning a document to a
category, whether by automated or human means. LingPipe provides a
classification facility that takes examples of classified
texts--typically produced by a human--and learns how to classify
further documents using language models. There are many other ways to
construct classifiers, but language models are particularly good at
some versions of this task.
</p>


<h2>20 Newsgroups Demo</h2>

<p>
A publicly available data set to work with is the 20 newsgroups data,
available from the
</p>

<blockquote><p>
<a href="http://people.csail.mit.edu/people/jrennie/20Newsgroups"
  >20 Newsgroups Home Page</a>
</p>
</blockquote>


<h3>4 Newsgroups Sample</h3>
<p>
We have included a sample of 4 newsgroups with the LingPipe
distribution so that you can run the tutorial out of the box.  You
may also download the entire 20 newsgroups dataset and run the demo
over it.  LingPipe's performance over the whole data set is state of
the art.
</p>

<h3>Quick Start</h3>

<p>
Once you have downloaded and installed LingPipe, change
directories to the one containing this read-me:
</p>

<pre class="code">
&gt; cd demos/tutorial/classify
</pre>

<p>
You may then run the demo from the command line (placing all of the
code on one line):
</p>

<p>
On Windows:
</p>

<pre class="code">
java
-cp "../../../lingpipe-4.1.0.jar;
     classifyNews.jar"
ClassifyNews
</pre>

<p>
On Linux, Mac OS X, and other Unix-like operating systems:
</p>

<pre class="code">
java
-cp "../../../lingpipe-4.1.0.jar:
     classifyNews.jar"
ClassifyNews
</pre>

<p>
or through Ant:
</p>

<pre class="code">
ant classifyNews
</pre>

<p>
The demo will then train on the data in
<code>demos/fourNewsGroups/4news-train/</code> and evaluate on
<code>demos/fourNewsGroups/4news-test/</code>. The results of scoring
are printed to the command line and explained in the rest of this
tutorial.
</p>

<h3>The Code</h3>

<p>
The entire source for the example is <a
href="src/ClassifyNews.java">ClassifyNews.java</a>. We will be using
the API from <a
href="../../../docs/api/com/aliasi/classify/Classifier.html">Classifier</a>
and its subclasses to train the classifier, and <a
href="../../../docs/api/com/aliasi/classify/Classification.html">Classification</a>
to evaluate it. The code should be pretty self-explanatory in terms of
how training and evaluation are done. Below I go over the API calls.
</p>


<h3>Training</h3>

<p>
We are going to train up a set of character-based language models (one
per newsgroup, as named in the static array <code>CATEGORIES</code>),
each of which processes data as 6-character sequences, as specified by
the <code>NGRAM_SIZE</code> constant.
</p>
<pre class="code">
private static String[] CATEGORIES
    = { "soc.religion.christian",
        "talk.religion.misc",
        "alt.atheism",
        "misc.forsale" };

private static int NGRAM_SIZE = 6;
</pre>
<p>
Generally, the smaller your data set, the smaller the n-gram size
should be, but you can experiment with different values--reasonable
values range from 1 to 16, with 6 being a good general starting
point.
</p>
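<p>
To make concrete what a 6-gram model sees, here is a small
illustrative sketch (plain JDK, not LingPipe code; the helper name is
made up) of sliding a 6-character window over text:
</p>

```java
import java.util.ArrayList;
import java.util.List;

public class CharNGrams {
    // Collect all contiguous character substrings of the given length.
    static List<String> ngrams(String text, int n) {
        List<String> grams = new ArrayList<String>();
        for (int i = 0; i + n <= text.length(); ++i) {
            grams.add(text.substring(i, i + n));
        }
        return grams;
    }

    public static void main(String[] args) {
        // "for sale" yields three overlapping 6-grams:
        // "for sa", "or sal", "r sale"
        System.out.println(ngrams("for sale", 6));
    }
}
```

<p>
A process model counts every such overlapping window, including
whitespace and punctuation characters.
</p>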
<p>
The actual classifier involves one language model per category.  In
this case, we are going to use process language models (<a
href="../../../docs/api/com/aliasi/lm/LanguageModel.Process.html"><code>LanguageModel.Process</code></a>).  There is a factory method in <code>DynamicLMClassifier</code>
to construct the actual models.
</p>

<pre class="code">
DynamicLMClassifier classifier
  = DynamicLMClassifier
    .createNGramBoundary(CATEGORIES,
                         NGRAM_SIZE);
</pre>
<p>
There are two other kinds of language model classifiers that may
be constructed, for bounded character language models and
tokenized language models.
</p>

<p>
Training a classifier simply involves providing examples of text for
each of the categories.  An example is supplied through the
<code>handle</code> method after first constructing a classification
from the category and a classified object from the classification and
text:
</p>

<pre class="code">
Classification classification
    = new Classification(CATEGORIES[i]);
Classified&lt;CharSequence&gt; classified
    = new Classified&lt;CharSequence&gt;(text,classification);
classifier.handle(classified);
</pre>

<p>
That's all you need to train up a language model classifier. Now we can
see what it can do with some evaluation data.
</p>

<h3>Classifying News Articles</h3>

<p> The <a
href="../../../docs/api/com/aliasi/classify/DynamicLMClassifier.html">DynamicLMClassifier</a>
is pretty slow at classification time, so it is generally worth
going through a compile step to produce the more efficient compiled
version, which will classify character sequences into joint
classification results.  A simple way to do that in code is:
</p>

<pre class="code">
JointClassifier&lt;CharSequence&gt; compiledClassifier
    = (JointClassifier&lt;CharSequence&gt;)
      AbstractExternalizable.compile(classifier);
</pre>

<p>
Now the rubber hits the road and we can see how well the machine
learning is doing. The example code both reports classifications to
the console and evaluates the performance. The crucial lines of code
are:
</p>

<pre class="code">
JointClassification jc = compiledClassifier.classifyJoint(text);
String bestCategory = jc.bestCategory();
String details = jc.toString();
</pre>

<p>
The text is an article that was not trained on, and the <a
href="../../../docs/api/com/aliasi/classify/JointClassification.html">JointClassification</a>
is the result of evaluating the text against all the language
models. Its <code>bestCategory()</code> method returns the name of
the highest-scoring language model for the text, and its
<code>toString()</code> method dumps out all of the results, which
are presented as:
</p>

<pre class="code">
Testing on soc.religion.christian/21417
Best Cat: soc.religion.christian
Rank Cat Score P(Cat|In) log2 P(Cat,In)
0=soc.religion.christian -1.56 0.45 -1.56
1=talk.religion.misc -2.68 0.20 -2.68
2=alt.atheism -2.70 0.20 -2.70
3=misc.forsale -3.25 0.13 -3.25
</pre>
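<p>
The P(Cat|In) column can be recovered from the log2 P(Cat,In) column
by exponentiating and renormalizing.  The following sketch (plain
Java, independent of the LingPipe API) reproduces the numbers above
to within rounding:
</p>

```java
public class JointToConditional {
    // Convert per-category log2 joint scores into conditional probabilities
    // by exponentiating and normalizing so the probabilities sum to 1.
    static double[] conditionalProbs(double[] log2Joint) {
        double[] probs = new double[log2Joint.length];
        double sum = 0.0;
        for (int i = 0; i < log2Joint.length; ++i) {
            probs[i] = Math.pow(2.0, log2Joint[i]);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; ++i)
            probs[i] /= sum;
        return probs;
    }

    public static void main(String[] args) {
        // log2 P(Cat,In) values from the run above
        double[] scores = { -1.56, -2.68, -2.70, -3.25 };
        for (double p : conditionalProbs(scores))
            System.out.printf("%.2f%n", p); // roughly 0.45, 0.21, 0.20, 0.14
    }
}
```

<p>
The small discrepancies from the run above come from the scores being
printed to only two decimal places.
</p>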

<h3>Scoring Accuracy</h3>

<p>
The remaining API of note is how the system is scored against a gold
standard--in this case, our testing data. Since we know which
newsgroup each article came from, we can evaluate how well the
software is doing with the <a href="../../../docs/api/com/aliasi/classify/JointClassifierEvaluator.html">JointClassifierEvaluator</a> class.
</p>

<pre class="code">
boolean storeInputs = true;
JointClassifierEvaluator&lt;CharSequence&gt; evaluator
    = new JointClassifierEvaluator&lt;CharSequence&gt;(compiledClassifier,
                                                 CATEGORIES,
                                                 storeInputs);
</pre>

<p>
This class wraps the <code>compiledClassifier</code> in an evaluation
framework that provides very rich reporting of how well the system is
doing. Later in the code it is populated with data points through the
<code>handle()</code> method, after first constructing a classified
object as for training:
</p>

<pre class="code">
Classification classification
    = new Classification(CATEGORIES[i]);
Classified&lt;CharSequence&gt; classified
    = new Classified&lt;CharSequence&gt;(text,classification);
evaluator.handle(classified);
</pre>

<p>
This gets a JointClassification for the text and keeps track of the
results for later reporting. After all the data has been run, many
methods exist to see how well the software did. In the demo code we
just print out the total accuracy via the <a href="../../../docs/api/com/aliasi/classify/ConfusionMatrix.html">ConfusionMatrix</a>
class, but it is well worth looking at the relevant Javadoc to see
what reporting is available.
</p>


<a name="cross-validation"></a>
<h2>Cross-Validation</h2>


<h3>Running Cross-Validation</h3>


<p>There's an ant target <code>crossValidateNews</code> which cross-validates
the news classifier over 10 folds.  Here's what a run looks like:
</p>

<pre class="code">
&gt; cd $LINGPIPE/demos/tutorial/classify
&gt; ant crossValidateNews

Reading data.
Num instances=250.
Permuting corpus.
 FOLD        ACCU
    0  1.00 +/- 0.00
    1  0.96 +/- 0.08
    2  0.84 +/- 0.14
    3  0.92 +/- 0.11
    4  1.00 +/- 0.00
    5  0.96 +/- 0.08
    6  0.88 +/- 0.13
    7  0.84 +/- 0.14
    8  0.88 +/- 0.13
    9  0.84 +/- 0.14
</pre>

<p>This reports that there are 250 training examples.  With 10 folds,
that's 225 training and 25 test cases for each fold.  The accuracy for
each fold is reported along with the 95% normal approximation to the
binomial confidence interval per run (with no smoothing on the
binomial estimate, hence the 0.00 interval for folds 0 and 4).  The
moral of this story is that small training sizes lead to large
variance.  </p>
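<p>
The interval itself is easy to check by hand: it is accuracy +/- 1.96
* sqrt(p * (1 - p) / n).  A quick sketch in plain Java (the helper
name is made up) against the 25-item folds above:
</p>

```java
public class BinomialInterval {
    // Half-width of the 95% normal-approximation confidence interval
    // for a proportion p estimated from n Bernoulli trials.
    static double halfWidth95(double p, int n) {
        return 1.96 * Math.sqrt(p * (1.0 - p) / n);
    }

    public static void main(String[] args) {
        // Fold 2 above: 21/25 correct, accuracy 0.84
        System.out.printf("0.84 +/- %.2f%n", halfWidth95(0.84, 25)); // 0.84 +/- 0.14
        // A perfect fold: the unsmoothed estimate p = 1.0 gives zero width
        System.out.printf("1.00 +/- %.2f%n", halfWidth95(1.00, 25)); // 1.00 +/- 0.00
    }
}
```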


<div class="sidebar">
<h2>More Cheating Possibilities</h2>
<p>
Reading cross-validation results can be challenging, because they
have the character of results on development sets.  Researchers often
report cross-validation results for the best set of parameters they
found, which typically overestimates accuracy on truly held-out data.
</p>
</div>

<p>Cross-validation is a means of using a single corpus to train and
evaluate without deciding ahead of time how to carve the data into
test and training portions.  This is often used for evaluation, but
more properly should be used only for development.</p>


<div class="sidebar">
<h2>Cross-Validation for Development</h2>
<p>
Another common approach is to use cross-validation during
development.  Then, with a truly held-out test set, there is
no bias in the reports.
</p>
</div>

<h3>How Cross-Validation Works</h3>

<p>Cross-validation divides a corpus into a number of evenly sized
portions called folds.  Then for each fold, the data not in the fold
is used to train a classifier which is then evaluated on the current
fold.  The results are then pooled across the folds, which greatly
reduces the variance in the evaluation, reflected in narrower confidence
intervals.
</p>
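<p>
The mechanics can be sketched in a few lines.  Assuming folds are
contiguous slices of the (permuted) corpus--an illustration, not
LingPipe's actual implementation--the train/test split for a
250-item, 10-fold corpus works out as follows:
</p>

```java
public class FoldRanges {
    // Start index (inclusive) of a fold's test slice.
    static int foldStart(int numItems, int numFolds, int fold) {
        return (int) (((long) numItems * fold) / numFolds);
    }

    public static void main(String[] args) {
        int numItems = 250, numFolds = 10;
        for (int fold = 0; fold < numFolds; ++fold) {
            int start = foldStart(numItems, numFolds, fold);
            int end = foldStart(numItems, numFolds, fold + 1);
            // Each fold tests on 25 items and trains on the other 225.
            System.out.printf("fold %d: test [%d,%d), train on %d items%n",
                              fold, start, end, numItems - (end - start));
        }
    }
}
```

<p>
Each fold tests on a distinct 25-item slice, so every item appears in
exactly one test slice.
</p>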

<h3>Implementing a Cross-Validating Corpus</h3>

<div class="sidebar">
<h2>Corpus Implementations for Training</h2>
<p>
An instance of <code>Corpus</code> is required for
training batch-oriented classifiers that make multiple passes over the data,
such as logistic regression and perceptron classifiers.</p>
</div>

<p>LingPipe supplies a convenient <a
href="../../../docs/api/com/aliasi/corpus/Corpus.html"><code>corpus.Corpus</code></a>
class which is meant to be used for generic training and testing
applications like cross-validation.  The corpus class is typed based
on the handler type <code>H</code> intended to handle its data.
The basis of the corpus class is
a pair of methods <code>visitTrain(H)</code> and <code>visitTest(H)</code>, which
send the handler every training instance or every testing instance, respectively.
</p>
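<p>
The visit pattern is easy to see in a stripped-down form.  The
classes below are simplified stand-ins written for this tutorial, not
LingPipe's actual implementation:
</p>

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the handler/corpus pattern (illustrative only).
interface Handler<E> {
    void handle(E item);
}

class MiniCorpus<E> {
    private final List<E> train = new ArrayList<E>();
    private final List<E> test = new ArrayList<E>();

    void addTrain(E item) { train.add(item); }
    void addTest(E item) { test.add(item); }

    // Send the handler every training instance.
    void visitTrain(Handler<E> handler) {
        for (E item : train) handler.handle(item);
    }

    // Send the handler every testing instance.
    void visitTest(Handler<E> handler) {
        for (E item : test) handler.handle(item);
    }
}

public class CorpusPattern {
    public static void main(String[] args) {
        MiniCorpus<String> corpus = new MiniCorpus<String>();
        corpus.addTrain("alt.atheism article text");
        corpus.addTest("misc.forsale article text");

        // A trainable classifier would implement Handler and be passed
        // to visitTrain; an evaluator would be passed to visitTest.
        final List<String> seen = new ArrayList<String>();
        corpus.visitTrain(new Handler<String>() {
            public void handle(String item) { seen.add(item); }
        });
        System.out.println(seen); // [alt.atheism article text]
    }
}
```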


<p>LingPipe implements cross-validation for evaluation with the class
<a
href="../../../docs/api/com/aliasi/corpus/XValidatingObjectCorpus.html"><code>corpus.XValidatingObjectCorpus</code></a>.
This corpus implementation just stores the data in parallel lists and
uses them to implement the visit-test and visit-train methods of the
corpus.  </p>


<h3>Permuting Inputs</h3>

<div class="sidebar">
<h2>User-Specified Randomizer</h2>
<p>
Always allow users to specify their own instance
of <code>java.util.Random</code> in any method that
does randomization.  There are two reasons.  First,
it's the only way to guarantee repeatability during
testing.  Second, it's the only way to allow users
to implement a different or better randomizer than the one built
into Java's <code>Random</code> implementation.
</p>
</div>

<p>It is critical in evaluating classifiers to pay attention to
correlations in the corpus.  The 20 newsgroups data is organized by
category, so without permutation a naive 10% cross-validation slice
would be a contiguous run of articles, potentially removing most or
all of a single category's training data at once.
</p>

<p>To solve this problem, the cross-validating corpus implementation
includes a method to permute the corpus using a supplied instance of
<code>java.util.Random</code>.
</p>

<p>We implemented the randomizer with a fixed seed so that
experiments would be repeatable.  Change the seed to get a
different set of runs.  You should see the variance even more
clearly after more runs.</p>


<h3>Cross-Validation Implementation</h3>

<p>The command-line implementation for cross-validating is in
<a href="src/CrossValidateNews.java"><code>src/CrossValidateNews.java</code></a>.
The code mostly repeats the simple classifier code.  First, we create
a cross-validating corpus, then store all of the data from both the
training and test directories.
</p>

<pre class="code">
XValidatingObjectCorpus&lt;Classified&lt;CharSequence&gt;&gt; corpus
    = new XValidatingObjectCorpus&lt;Classified&lt;CharSequence&gt;&gt;(NUM_FOLDS);

for (String category : CATEGORIES) {

    Classification c = new Classification(category);
    File trainCatDir = new File(TRAINING_DIR,category);
    for (File trainingFile : trainCatDir.listFiles()) {
        String text = Files.readFromFile(trainingFile,"ISO-8859-1");
        Classified&lt;CharSequence&gt; classified
            = new Classified&lt;CharSequence&gt;(text,c);
        corpus.handle(classified);
    }

    File testCatDir = new File(TESTING_DIR,category);
    for (File testFile : testCatDir.listFiles()) {
        String text = Files.readFromFile(testFile,"ISO-8859-1");
        Classified&lt;CharSequence&gt; classified
            = new Classified&lt;CharSequence&gt;(text,c);
        corpus.handle(classified);
    }
}
</pre>

<p>The corpus is then permuted using a new random number generator,
which removes any order-related correlations in the text:</p>

<pre class="code">
long seed = 42L;
corpus.permuteCorpus(new Random(seed));
</pre>

<p>Note that we have fixed the seed value for the random number
generator.  Choosing another one would produce a different shuffling
of the inputs.</p>
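<p>
The repeatability this buys can be demonstrated with the plain JDK
collections: shuffling with the same seed always produces the same
permutation.
</p>

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SeededShuffle {
    // Shuffle the integers 0..9 with a seeded random number generator.
    static List<Integer> shuffled(long seed) {
        List<Integer> items = new ArrayList<Integer>();
        for (int i = 0; i < 10; ++i) items.add(i);
        // The same seed yields the same sequence of swaps,
        // and hence the same permutation.
        Collections.shuffle(items, new Random(seed));
        return items;
    }

    public static void main(String[] args) {
        System.out.println(shuffled(42L).equals(shuffled(42L))); // true
        System.out.println(shuffled(42L)); // some fixed permutation of 0..9
    }
}
```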

<p>Now that the corpus is created, we loop over the folds, evaluating
each one using the methods supplied by the corpus:
</p>

<pre class="code">
for (int fold = 0; fold &lt; NUM_FOLDS; ++fold) {
    corpus.setFold(fold);

    DynamicLMClassifier&lt;NGramProcessLM&gt; classifier
        = DynamicLMClassifier.createNGramProcess(CATEGORIES,NGRAM_SIZE);
    corpus.visitTrain(classifier);

    JointClassifier&lt;CharSequence&gt; compiledClassifier
        = (JointClassifier&lt;CharSequence&gt;)
          AbstractExternalizable.compile(classifier);

    boolean storeInputs = true;
    JointClassifierEvaluator&lt;CharSequence&gt; evaluator
        = new JointClassifierEvaluator&lt;CharSequence&gt;(compiledClassifier,
                                                     CATEGORIES,
                                                     storeInputs);

    corpus.visitTest(evaluator);
    System.out.printf(&quot;%5d  %4.2f +/- %4.2f\n&quot;, fold,
                      evaluator.confusionMatrix().totalAccuracy(),
                      evaluator.confusionMatrix().confidence95());
}
</pre>

<p>For each fold, the fold is first set on the corpus.  Then a trainable
classifier is created and the corpus is used to train it through the
<code>visitTrain()</code> method.  Then the classifier is compiled
and used to construct an evaluator.  The evaluator is then run over
the test cases by the corpus method <code>visitTest()</code>.  Finally,
the resulting accuracy and 95% confidence interval are printed.
</p>



<h3>Leave-One-Out Evaluations</h3>

<div class="sidebar">
<h2>Efficient Leave One Out</h2>
<p>
Leave-one-out evaluations can be very expensive in general
implementations that literally retrain on all but one example.
It's much more efficient if there is a way to untrain an
example.  Then, the whole corpus can be trained, and each
example can be visited, untrained, evaluated, and added back
in.
</p>
</div>

<p>The limit of cross-validation is when each fold consists of a
single example.  This is called &quot;leave one out&quot; (LOO).  It is easily
achieved in the general corpus implementation by setting the number of
folds equal to the number of data points.  The only potential problem
is rounding error in the arithmetic; for efficiency, leave-one-out
evaluations are typically done with specialized implementations.
Also, in doing leave-one-out, there is no point in compiling the
classifier before running it.  </p>
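<p>
The untraining trick from the sidebar can be sketched with a toy
majority-vote classifier over boolean labels (a made-up example, not
LingPipe code): keep global counts, subtract the held-out item,
evaluate, and add it back.
</p>

```java
public class LeaveOneOut {
    // LOO accuracy of a toy majority-vote "classifier" over boolean labels:
    // each held-out item is predicted as the majority label of the rest.
    static double looAccuracy(boolean[] labels) {
        int positives = 0;
        for (boolean b : labels) if (b) ++positives;
        int correct = 0;
        for (int i = 0; i < labels.length; ++i) {
            // "Untrain" item i by removing it from the counts.
            int posRest = positives - (labels[i] ? 1 : 0);
            int negRest = (labels.length - 1) - posRest;
            boolean prediction = posRest >= negRest; // ties go to true
            if (prediction == labels[i]) ++correct;
        }
        return correct / (double) labels.length;
    }

    public static void main(String[] args) {
        boolean[] labels = { true, true, true, true, false, false };
        System.out.println(looAccuracy(labels)); // 4 of 6 correct
    }
}
```

<p>
Because the counts are updated rather than rebuilt, each held-out
evaluation is constant time instead of requiring a full retrain.
</p>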



<h2>References</h2>

<p>
For a general introduction to cross-validation, see:
</p>
<ul>
<li>Wikipedia: <a href="http://en.wikipedia.org/wiki/Cross-validation">Cross-validation</a></li>
</ul>

<p>
For a survey of statistical classification and examples of classifiers
using character language models, see:
</p>

<ul>
<li> W. J. Teahan. 2000.
<a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.8114">Text classification using minimum cross-entropy</a>.
In <i>RIAO 2000</i>.</li>


<li> Fuchun Peng, Dale Schuurmans and Shaojun Wang. 2003.
<a href="http://citeseer.ist.psu.edu/fuchun03language.html">Language and task independent text categorization with simple language
models</a>.  In <i>Proceedings of HLT-NAACL 2003</i>.
</li>

<li> Fuchun Peng, Xiangji Huang, Dale Schuurmans, and Shaojun Wang.
2003.
<a href="http://acl.ldc.upenn.edu/W/W03/W03-1106.pdf">Text classification in Asian
languages without word segmentation</a>.
In <i>Proceedings of the Sixth International Workshop on Information Retrieval with Asian Languages</i>.
</li>

<li> Fuchun Peng, Dale Schuurmans, Vlado Keselj and Shaojun Wang.
2003.
<a href="http://acl.ldc.upenn.edu/eacl2003/papers/main/p33.pdf">Language
independent authorship attribution using character level language
models</a>.
In <i>Proceedings of EACL 2003</i>.
</li>

<li> F. Sebastiani. 2002.
<a href="http://citeseer.ist.psu.edu/518620.html">Machine learning in automated text
categorization</a>. <i>ACM Computing Surveys</i> <b>34</b>(1):1--47.</li>

</ul>


</div><!-- content -->

<div id="foot">
<p>
&#169; 2003&ndash;2011 &nbsp;
<a href="mailto:lingpipe@alias-i.com">alias-i</a>
</p>
</div>
<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
try {
var pageTracker = _gat._getTracker("UA-15123726-1");
pageTracker._trackPageview();
} catch(err) {}</script></body>
</html>








