<HTML>

<HEAD>
<META http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> 
<TITLE>How to Use the Extractor API</TITLE>
</HEAD>

<BODY BGCOLOR="#FFFFFF">

<MAP NAME="banner_top">
<AREA SHAPE="rect" COORDS="588,14,620,40" 
    HREF="http://www.iit.nrc.ca/english.html">
<AREA SHAPE="rect" COORDS="538,14,583,37" 
    HREF="http://www.nrc.ca/corporate/english/">
<AREA SHAPE="rect" COORDS="86,4,421,37" 
    HREF="http://www.iit.nrc.ca/II_public/index.html">
</MAP>

<IMG SRC="banner_top.jpg" width="620" height="37" 
    alt="II Group Banner" USEMAP="#banner_top"
ISMAP border="0"><BR><IMG SRC="banner_extractor.jpg" width="217" 
    height="49" alt="Extractor">

<H1><FONT COLOR="#400080">How to Use the Extractor API</FONT></H1>

<SMALL><I>Extractor 7.2, Revised December 4, 2001</I></SMALL><BR>
<SMALL><I>Copyright &copy; 2001, National Research Council of Canada</I></SMALL>

<HR>

<P>
The Extractor API may be used in many different ways, depending
on the intended application:</P>

<MENU>
<LI> <A HREF="#one-one">process one document</A>
<MENU>
<LI> <I>example:</I> for a Summarize button in a word processor
</MENU>
<LI> <A HREF="#many-one">process many documents, using
the same stop words for all of them</A>
<MENU>
<LI> <I>example:</I> for an index for a web site
</MENU>
<LI> <A HREF="#many-many">process many documents, using
different stop words for each one</A>
<MENU>
<LI> <I>example:</I> for summarizing e-mail for a mail server
</MENU>
<LI> <A HREF="#sections">process many small sections (pages, chapters)
from one large document (a book)</A>
<MENU>
<LI> <I>example:</I> for a book index
<LI> <I>example:</I> for a book table of contents
</MENU>
</MENU>

<P>
The API is designed to allow maximum flexibility for a wide 
variety of applications. If the API is incompatible with your
intended application, please 
<A HREF="mailto:Peter.Turney@iit.nrc.ca">
let us know.</A></P>

<HR>

<A NAME="one-one">
<FONT COLOR="#400080"><H2>One Document, One Set of Stop Words</H2></FONT></A>

<P>
This is a sketch of how to use the API to process a single text file.
This example assumes that there is no need to customize the stop words. </P>

<BLOCKQUOTE><P><PRE>
<FONT COLOR="green">/* initialize */</FONT>

call ExtrCreateStopMemory();
call ExtrCreateDocumentMemory();

<FONT COLOR="green">/* process text file */</FONT>

open the text file;
while the end of the text file has not yet been reached {
    read a block of the text file into a buffer;
    call ExtrReadDocumentBuffer();
}
call ExtrSignalDocumentEnd();
close the text file;

<FONT COLOR="green">/* print out keyphrases */</FONT>

call ExtrGetPhraseListSize();
for i = 0 to (PhraseListSize - 1) do {
    call ExtrGetPhraseByIndex();
    display the i-th keyphrase to the user;
}

<FONT COLOR="green">/* free memory */</FONT>

call ExtrClearStopMemory();
call ExtrClearDocumentMemory();
</PRE></P></BLOCKQUOTE>

<P>
Note that Extractor does not manage the text buffer. Extractor reads the
text buffer, but does not change the state of the text buffer in any way.
The text buffer must be allocated and freed outside of Extractor. </P>

<P>
This sketch is essentially what is implemented in the API test wrapper,
<CODE>test_api.c</CODE>. </P>

<HR>

<A NAME="many-one">
<FONT COLOR="#400080"><H2>Many Documents, One Set of Stop Words</H2></FONT></A>

<P>
This is a sketch of how to use the API to process many documents.
This example assumes that there is no need to customize the stop words. </P>

<BLOCKQUOTE><P><PRE>
<FONT COLOR="green">/* initialize the stop words */</FONT>

call ExtrCreateStopMemory();

<FONT COLOR="green">/* process the text files */</FONT>

for each document in the list of documents {

    <FONT COLOR="green">/* initialize the document memory */</FONT>

    call ExtrCreateDocumentMemory();

    <FONT COLOR="green">/* process the current document */</FONT>

    open the text file for the current document;
    while the end of the text file has not yet been reached {
        read a block of the text file into a buffer;
        call ExtrReadDocumentBuffer();
    }
    call ExtrSignalDocumentEnd();
    close the text file for the current document;

    <FONT COLOR="green">/* print out keyphrases */</FONT>

    call ExtrGetPhraseListSize();
    for i = 0 to (PhraseListSize - 1) do {
        call ExtrGetPhraseByIndex();
        display the i-th keyphrase to the user;
    }

    <FONT COLOR="green">/* free the document memory */</FONT>

    call ExtrClearDocumentMemory();
}

<FONT COLOR="green">/* free stop word memory */</FONT>

call ExtrClearStopMemory();
</PRE></P></BLOCKQUOTE>

<P>
In this example, all of the documents share the same set of stop words. 
Therefore the stop word memory is only created once. This is more
efficient than putting <CODE>ExtrCreateStopMemory</CODE> inside the
<CODE>for each document</CODE> loop.</P>

<HR>

<A NAME="many-many">
<FONT COLOR="#400080"><H2>Many Documents, Many Sets of Stop Words</H2></FONT></A>
<P>
This is a sketch of how to use the API to process many documents.
In this example, each document is processed with its own set of stop words. </P>

<BLOCKQUOTE><P><PRE>
<FONT COLOR="green">/* process the text files */</FONT>

for each document in the list of documents {

    <FONT COLOR="green">/* initialize */</FONT>

    call ExtrCreateDocumentMemory();
    call ExtrCreateStopMemory();

    <FONT COLOR="green">/* load custom stop words */</FONT>

    open the text file for the custom stop words for the current document;
    while the end of the text file has not yet been reached {
        read a stop word from the file;
        call ExtrAddStopWord();
    }
    close the text file for the custom stop words;

    <FONT COLOR="green">/* process the current document */</FONT>

    open the text file for the current document;
    while the end of the text file has not yet been reached {
        read a block of the text file into a buffer;
        call ExtrReadDocumentBuffer();
    }
    call ExtrSignalDocumentEnd();
    close the text file for the current document;

    <FONT COLOR="green">/* print out keyphrases */</FONT>

    call ExtrGetPhraseListSize();
    for i = 0 to (PhraseListSize - 1) do {
        call ExtrGetPhraseByIndex();
        display the i-th keyphrase to the user;
    }

    <FONT COLOR="green">/* free memory */</FONT>

    call ExtrClearDocumentMemory();
    call ExtrClearStopMemory();
}
</PRE></P></BLOCKQUOTE>

<P>
If the application is a server with many different users, then the users
could each have their own personal list of stop words. For example, if
the server processes e-mail, then the users might want their own names
to be stop words.</P>

<HR>

<A NAME="sections">
<FONT COLOR="#400080"><H2>Process a Document in Sections</H2></FONT></A>

<P>
This is a sketch of how to use the API to process a large document,
one section at a time. This example assumes that the same stop words
are used for all sections. </P>

<P>
This could be useful for producing an annotated table of contents for
a book. Each section in the book could be annotated by a list of keyphrases,
where the keyphrases are extracted from that section alone. </P>

<P>
This could also be useful for producing an index. Extractor generates a 
list of three to thirty keyphrases for each document that it processes
(depending on <CODE>ExtrSetNumberPhrases</CODE>). Thirty keyphrases is not enough to
make an index for a book. However, if the book is processed in blocks
of about one to five pages per block, then Extractor will generate up
to thirty keyphrases for each block. A two-hundred page book could then
yield six thousand keyphrases. This will be more than enough to make a good
index. </P>

<BLOCKQUOTE><P><PRE>
<FONT COLOR="green">/* initialize stop words */</FONT>

call ExtrCreateStopMemory();

<FONT COLOR="green">/* process document */</FONT>

open the text file for the document;
while the end of the text file has not yet been reached {

    <FONT COLOR="green">/* process sections */</FONT>

    for each section of the document {

        <FONT COLOR="green">/* initialize memory for current section */</FONT>

        call ExtrCreateDocumentMemory();

        <FONT COLOR="green">/* process current section */</FONT>

        while the end of the section has not yet been reached {
            read a block of the current section into a buffer;
            call ExtrReadDocumentBuffer();
        }
        call ExtrSignalDocumentEnd();

        <FONT COLOR="green">/* print out keyphrases */</FONT>

        call ExtrGetPhraseListSize();
        for i = 0 to (PhraseListSize - 1) do {
            call ExtrGetPhraseByIndex();
            display the i-th keyphrase to the user;
        }

        <FONT COLOR="green">/* free memory for current section */</FONT>

        call ExtrClearDocumentMemory();
    }
}
close the text file;

<FONT COLOR="green">/* free stop words */</FONT>

call ExtrClearStopMemory();
</PRE></P></BLOCKQUOTE>

<P>
Note that Extractor can efficiently handle very large documents 
without requiring the documents
to be split into smaller chunks. Splitting a document into sections is not 
necessary to increase the speed or capacity of Extractor.</P>

<HR>

<CENTER>
<table border="1" bgcolor="#ccccff">
<tr><td><font size=2>
[ <a href="http://extractor.iit.nrc.ca/">Extractor Home</a> |
<A HREF="http://www.iit.nrc.ca/II_public/french.html">Fran&ccedil;ais</a> |
<A HREF="http://www.iit.nrc.ca/english.html">IIT</A> |
<A HREF="http://www.iit.nrc.ca/II_public/index.html">II Group</A> |
<A HREF="http://www.nrc.ca/corporate/english/">NRC</A> |
<A HREF="http://ai.iit.nrc.ca/search.html">Search</A> |
<A HREF="mailto:Peter.Turney@nrc.ca">Feedback</A> ]
[ <I>Updated</I>: December 4, 2001 ]</font></td></tr>
</table>
</CENTER>

</BODY>
</HTML>


