\chapter{Text Analysis}

\section{Information Retrieval}
%\bibliographystyle{alpha} 
IR sees documents as {\em bags of words}; the semantics of a document
can be characterized by its {\em content words}.  {\em Non content
  words} or {\em stopwords} (also called {\em function words}
(\cite{FOA})) are words that carry no informational content: a, then,
of, this,... They usually are present in almost any document. If
we eliminate them, we are left with the content words. Note that no
syntax or semantics is used; only word appearance is important. Not
even order is used: {\tt the cat is on the mat} and {\tt the mat is
  on the cat} are exactly the same in IR (except for indices that keep
word offsets, see below). Beyond the order of words in a sentence, we
also throw away relationships among sentences, and any logical
structure in the document (arguments, etc.). Information
Extraction (section~\ref{ie}) can be seen as an attempt to get some of
this information back. 
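As a minimal sketch of the bag-of-words view (the stopword list and the whitespace tokenizer here are illustrative choices, not a standard):

```python
from collections import Counter

# Tiny illustrative stopword list (a real system would use a longer one).
STOPWORDS = {"a", "the", "is", "on", "of", "this", "then"}

def bag_of_words(text):
    """Lowercase, split on whitespace, drop stopwords, count what remains."""
    tokens = text.lower().split()
    return Counter(t for t in tokens if t not in STOPWORDS)

# Word order is discarded: both sentences yield the same bag.
b1 = bag_of_words("the cat is on the mat")
b2 = bag_of_words("the mat is on the cat")
```

Since a `Counter` compares by contents only, the two sentences above are indistinguishable once reduced to bags.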

Documents are {\em tokenized}, i.e. divided into discrete units or
tokens. This is needed for many languages. Then, other steps may be
taken, depending on the application:
\bi
\item {\em stopword removal}: stopwords are discarded, since they carry
no informational content. However, each word in the document is first
assigned an integer denoting its position, and stopwords are counted
when computing positions. This makes it possible to search for phrases
containing stopwords, such as {\em gone with the wind}. Because the
stopwords themselves are not kept, this search actually matches {\em
gone * * wind}. Also, searches consisting only of stopwords, such as
{\em to be or not to be}, are not possible. Finally, one must be
careful with words that can be both stopwords and meaningful, like
{\em can} (the verb versus the noun). A list of stopwords is also
sometimes called a {\em negative dictionary}. Note, by the way, that
stopwords are among the most useful words for syntactic analysis!
Statistically, we expect stopwords to occur randomly throughout the
text, but content words to occur with certain patterns (it has been
suggested that stopwords follow a Poisson distribution, while content
words do not). When a content word occurs, it is for one of two
reasons: at random, or because the word reflects part of the
document's meaning.

\item {\em canonical words}: words that are strongly related may be
  converted to a common term. For instance, verb forms (past, present,
  etc.) may all be converted to a root. A particular example of this
  is {\em stemming}, getting rid of word inflections (prefixes,
  suffixes) to link several words to a common root ({\em running} to
  {\em run}). Common stemming methods are based on morphological
  knowledge (and hence are language dependent). Note that stemming
  introduces some risk: Porter's algorithm, one of the best known
  stemmers for English, stems {\em university} and {\em universal} to
  {\em univers}. Some stemmers also transform the case of letters (all
  to lower or uppercase); while that sounds like a good idea, it
  deprives us of a source of information for detecting proper nouns
  and for recognizing sentences. In some cases, words may be
  transformed into a stem with multiple meanings, while the original
  word only had one (e.g. {\em gravitation} to {\em gravity}); after
  all, stemming is a lossy transformation. Also, in the context of the
  Web, many abbreviations are used (especially in technical contexts)
  and therefore it can be difficult to separate regular words from
  abbreviations. Obviously, stemming only works for languages where
  words are inflected.
  Stemming increases recall (see later) because two keywords that were
  different before may be the same now. The use of a stemmer also
  reduces the size of the vocabulary and hence the size of an inverted
  index (see later). Note that all of this applies to {\em common or
  general} terms, those denoting categories of objects, as opposed to
  {\em proper names}, those with a unique reference (usually those
  denoting people, places, and events). Proper names do not follow the
  same morphological rules as common names.
\item Another improvement is to accept {\em phrases}. Technically, phrases are
  $n$-grams (i.e. $n$ terms adjacent in the text) that together have a
  meaning different from the separate terms, e.g. {\em operating
  system}. Phrases are terms in their own right. However, discovering
  phrases may be complicated. There are basically two approaches,
  syntactic and statistical. In the syntactic approach, allowed
  combinations are listed by syntactic categories, e.g. {\em noun +
  noun}. However, this approach is weak, as most rules also allow
  non-phrases. The statistical approach consists of looking at the number
  of occurrences of $n$ terms $t_1t_2\ldots t_n$ together and
  determining whether this number is higher than could be expected if
  the terms were independent (i.e. higher than the product of their
  individual frequencies). Recognizing phrases in general is a complex
  problem; in a language like English, phrases can become quite large
  and complex ({\em simulated back-propagation neural network}, {\em
  apples and oranges}).

\item {\em Approximate string matching} is required in order to deal with
typos, misspellings, etc. which can be quite frequent on the web due to
the lack of editorial control.  There are two ways to attack the
problem: one is by using methods that map various spellings of a word
to a common representation. The {\em soundex} algorithm, which takes
into account phonetics and pronunciation, is the best known of such
approaches. The other approach is to break a word down into $n$-grams or
sequences of $n$ characters, and compare overlap of sequences. The
parameter $n$ depends on the language characteristics, like typical
syllable size. Because approximate term matching may be expensive, it
is usually not used by web search engines.

\item Another technique is to use a thesaurus or similar resource to
catch {\em synonyms}: replacing every member of a synonym set with a
single term denoting the whole set is equivalent to replacing all
synonyms by their union, and reduces the number of concepts. Finally,
using {\em co-occurrence} information has also been tried. A
co-occurrence rate is considered high if it is higher than the
expected co-occurrence (the expected frequency if the terms were
distributed independently). Note, though, that even if two terms
co-occur they may not be semantically related. Sometimes,
co-occurrence and proximity information (i.e. whether the two terms
not only occur, but occur close to each other in the text) are
combined.
\ei
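The $n$-gram approach to approximate matching can be sketched as follows; this is a toy illustration (the `#` padding and the choice $n = 3$ are arbitrary, not a standard algorithm), using the Dice overlap of the two gram sets:

```python
def ngrams(word, n=3):
    """Character n-grams of a word, padded so word boundaries count too."""
    padded = f"#{word}#"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def dice(a, b, n=3):
    """Dice overlap of character n-gram sets: 2|A & B| / (|A| + |B|)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# A misspelling still shares most trigrams with the correct form.
s = dice("misspell", "mispell")
```

Identical words score 1.0, and a one-letter misspelling still scores high, which is exactly what makes $n$-gram overlap useful for typo-tolerant matching.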
\subsubsection{Preliminaries: Metrics, Distances and Similarities}
A problem one has to deal with in reducing dimensions is that what one
wants is to combine similar terms; thus, one has to define {\em
  similarity}. Technically, one can use certain {\em metrics} to do that.
A metric or distance is simply a function $d$ defined over pairs of points
in some vector space, that obeys a few basic properties:
\bi
\item $d(x,y) = 0$ iff $x = y$;
\item $d(x,y) = d(y,x)$;
\item $d(x,y) \leq d(x,z) + d(z,y)$ (triangle inequality). 
\ei
From this we can deduce that $d(x,y) \geq 0$ (since $0 = d(x,x) \leq
d(x,y) + d(y,x) = 2d(x,y)$). The cosine measure introduced below is
related to such a metric. While usually conceived of as a {\em
distance (difference)}, one can also think of the function as
measuring similarity (closeness). A family of metrics of this kind are
the Minkowski metrics, defined by
$$d_L(x,y) = \left(\Sigma_{k=1}^{N} \mid w_{xk} - w_{yk} \mid^L\right)^{\frac{1}{L}}$$
where $w_{xk}$ is the weight of term $k$ occurring in document
(vector) $x$, and $L \geq 1$. When $L = 1$ this is called the
Manhattan distance. When $L = 2$, one has the typical Euclidean
distance. Sometimes, $L = \infty$ and the summation becomes equivalent
to {\em max} (taking the maximum of the coordinate differences). A
function $s$ deserves the name {\em similarity measure} when it has
the dual, intuitive property that nothing is more similar to $x$ than
$x$ itself: $s(x,x) \geq max_y s(x,y)$.

 Looking at the arguments as vectors, the dot product (used below as a
 similarity) satisfies some additional properties:
\bi
\item $x \circ x = \mid\mid x \mid\mid^2$;
\item $\mid x \circ y \mid \leq \mid\mid x \mid\mid \times \mid\mid y
  \mid\mid$ (Cauchy-Schwarz inequality);
\ei
where $\circ$ denotes the dot product and $\mid\mid x \mid\mid$ is the
length or {\em norm} of the vector.
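The Minkowski family can be computed directly from the definition; a toy illustration (vector values are arbitrary):

```python
def minkowski(x, y, L=2):
    """Minkowski distance: (sum_k |x_k - y_k|^L)^(1/L).

    L = 1 gives the Manhattan distance, L = 2 the Euclidean distance.
    """
    return sum(abs(a - b) ** L for a, b in zip(x, y)) ** (1 / L)

x, y = (0.0, 3.0), (4.0, 0.0)
manhattan = minkowski(x, y, L=1)   # |4| + |3| = 7.0
euclid = minkowski(x, y, L=2)      # sqrt(16 + 9) = 5.0
# In the limit L -> infinity the sum behaves like a max over coordinates.
chebyshev = max(abs(a - b) for a, b in zip(x, y))  # 4.0
```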

\subsection{Vector-Space model}
%NOTES TO ADD:
%vector-space uses real values (tf-idf). It is different from models
%where documents are vectors of natural numbers (counts) as used in
%Naive Bayes. In vector-space, values should always be normalized.
%for length-normalized vectors, cosine similarity and Euclidean
%distance are basically the same.
%documents in vector-space are length-normalized unit vectors that
%point to the surface of a hypersphere.
%for non-normalized vectors, inner product, cosine similarity and
%Euclidean distancce have different behavior.
%centroids are not normalized.


Content words are also called {\em terms}. Terms can be uniquely
identified by numbers. A document D can be
represented by a vector $(t_1,\ldots,t_n)$ where $t_i$ is 0 if the
$i$-th term is not present, and a number $> 0$ representing the {\em
  weight} of the term otherwise. The weight is usually computed by taking into account
the number of times the term is present in D and the {\em importance}
of the term. In a collection of documents, the importance is usually
inversely proportional to the number of documents that contain the
term.

Let $D$ be a collection of $m$ documents, that is, $\card{D} = m$. 
 Let $T$ be the collection of all terms in $D$, $t$ be a
term appearing in $n$ documents; we denote by $D_t$ the set of
documents where $t$ appears. The {\em document frequency} of a term is
 the number of documents where the term occurs,
 i.e. $\card{D_t}$. Usually, the more documents where the term 
occurs the less useful the terms is in discriminating among
documents. The {\em inverse document frequency (idf)} of $t$ is
 computed as $idf(t,D) = \log\frac{\card{D}}{\card{D_t}} = \log\frac{m}{n}$
(note: when $n = m$, idf is 0; when $n = 1$, idf will be as large as
possible). The idf tries to account for the fact that terms occurring in
many documents are not good discriminators. In fact, some experiments
 have shown that words that occur too frequently or too infrequently
 are not good discriminators (the latter because they have lower
 probability of being used!)\footnote{Recall that stopwords have high
 frequency, but stopwords are not guaranteed to be the only words with
 high frequency. In a given document, some content words may have high
 frequency too. What distinguishes stopwords is that they are going to
 be high frequency in {\em any} document.}. The $\card{D}$ acts as a
 normalization factor; we could also use $max_{t' \in T}\card{D_{t'}}$. A very
 commonly used normalization factor is $\card{D} - \card{D_t}$,
 i.e. the number of documents {\em not} containing the term.
%This is consistent with assuming that  words have a distribution that
% follows Zipf's law 
Note that this is a property of the corpus as a whole, not just of a
document!

The {\em term occurrence frequency (tf)} is the number of times a term
appears in a document. For term $t$, document $d$, we denote $tf(t,d)$
the occurrence frequency of $t$ in $d$. If the term occurs frequently,
then it is likely to be very significant to the document. 
Note that tf is an absolute (not normalized) count; since tf tends to
favor longer documents over shorter ones, some normalization is
needed. tf can be normalized by the sum of the term counts in the
document:
$$TF(t,d) = \frac{tf(t,d)}{\sum_{t' \in T} tf(t',d)}$$
or by the largest count:
$$TF(t,d) = \frac{tf(t,d)}{max_{t' \in T} tf(t',d)}$$

A {\em tf-idf weight}  is a weight assigned to a term in a document,
obtained by combining the tf and the idf. Simple multiplication can be
used, using the normalized TF:
$$TFIDF(t,d) = TF(t,d) \times idf(t,D)$$

Another idea is to multiply both factors and normalize afterwards;
this has the advantage of yielding a normalized value (between 0 and
1); a variation of the following formula is often used:

$$TFIDF(t,d) = \frac{(a_1 + a_2 \times
  \frac{tf(t,d)}{max_{k \in T}\{tf(k,d)\}}) \times
  idf(t,D)}{\sqrt{\Sigma_{i \in T} 
  (a_1+ a_2 \times \frac{tf(i,d)}{max_{k \in T}\{tf(k,d)\}})^2 \times
  idf(i,D)^2}}$$ 

where $a_1$ and $a_2$ are small constants, and both max and sum are
over all terms in the document. This expression goes up with a higher
term frequency. The denominator is a normalization factor. 
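A minimal tf-idf computation using the max-normalized TF and the logarithmic idf defined above (the corpus is a toy example; real systems also apply the stemming and stopword steps discussed earlier):

```python
import math
from collections import Counter

# Toy corpus: document id -> list of (already tokenized) terms.
docs = {
    "d1": "cat sat on mat".split(),
    "d2": "cat ate fish".split(),
    "d3": "dog ate bone".split(),
}

def idf(term, docs):
    """idf(t, D) = log(|D| / |D_t|)."""
    n = sum(1 for toks in docs.values() if term in toks)
    return math.log(len(docs) / n)

def tfidf(term, doc_id, docs):
    """Max-normalized term frequency times inverse document frequency."""
    counts = Counter(docs[doc_id])
    tf = counts[term] / max(counts.values())
    return tf * idf(term, docs)

w_cat = tfidf("cat", "d1", docs)   # "cat" appears in 2 of 3 docs: idf = log(3/2)
w_mat = tfidf("mat", "d1", docs)   # "mat" appears in 1 of 3 docs: idf = log(3)
```

As expected, the rarer term {\em mat} gets a higher weight than {\em cat}, which occurs in two of the three documents.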

%Distances: dot product. Ranking of results. Similarity search.

A query in natural language can be seen as a phrase and represented as
a vector, just like a document. That is, non-content words are thrown
out, stemming is used, possibly words are grouped into phrases, and
weights assigned to each term by the system. Answering a query, in
this context, means finding document vectors which are the closest to
the query vector, where {\em close} is defined by some {\em distance
  measure} defined in the vector space.
 One possible distance measure is simply the {\em coordination level}:
 let $T_q$ be the set of terms used in query $q$ and $S_{d}$ be the set of
 terms in document $d$. Then $\card{T_q \inters S_{d}}$ could be
 considered a good representation of how relevant $d$ is to query
 $q$. However, this approach fails because it lacks
 normalization. Assume $T_q$ contains 10 terms, document $d_1$ has 5
 terms and document $d_2$ has 1000 terms, and $\card{T_q \inters
   S_{d_1}} = \card{T_q \inters S_{d_2}} = 5$. The metric gives the
 same score to two very different cases. Normalizing this by size
 gives us the Dice coefficient: $$2 \frac{\card{T_q \inters
     S_d}}{\card{T_q}+\card{S_d}}$$ This still does not take into
 account the fact that queries are usually much shorter than documents
 (in number of keywords). Another approach also counts the number of
 terms that are not present in query or document; this is called the
 {\em simple matching coefficient}: 
$$\frac{\card{T_q \inters S_d} + \card{(T - T_q) \inters (T -
     S_d)}}{\card{T}}$$ 

Finally, another possible distance measure is the dot product of two
vectors:  

$$d(\vec{x},\vec{y}) = \Sigma_{i=1}^n x_i \times y_i$$

If the vectors are binary (a 0 or 1 reflecting presence/absence of
word in document) this value gives the number of terms in
common. However, this measure favors longer documents which will be
likely to have more terms, i.e. less sparse vectors. To remedy this,
another measure, called the {\em cosine} function, is used:

$$Cosine(X,Y) = \frac{X \circ Y}{\sqrt{(X \circ X) \odot (Y \circ
    Y)}}$$

where $\circ$ is the dot product, and $\odot$ is scalar
multiplication. Note that $\sqrt{X \circ X}$ is the norm or length of
$X$; $X_n = \frac{X}{\sqrt{X \circ X}}$ is the normalized vector of
$X$. Hence, $Cosine(X,Y) = Cosine(cX,Y) = Cosine(X, cY)$ for any
positive constant $c$. Also, $0 \leq Cosine(X,Y) \leq 1$. When
$Cosine(X,Y) = 0$, then the vectors have no terms in common (i.e. are
orthogonal, $X \circ Y = 0$). When $Cosine(X,Y) = 1$, then $X = cY$,
for some positive constant $c$. However, it has been found that the
cosine function favors short documents over long ones. It is
difficult to determine how to deal best with document length: there is
a difference between a document being longer because it treats more
topics, and being longer because it repeats more times the same message
(i.e. redundancy). For the former, we would expect more keywords to be
associated with the document, but for the latter we have higher
frequency for the same keywords. One possible formula normalizes by
taking into account the length of a document (defined as the number of
different terms in the document), the average length of all documents
in the corpus, and the deviation from this average. 
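The cosine function can be sketched directly from its definition; the toy vectors below exhibit the two boundary properties discussed above (scale invariance and orthogonality):

```python
import math

def cosine(x, y):
    """Cosine similarity: dot(x, y) / (||x|| * ||y||)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

x = (1.0, 2.0, 0.0)
c_same = cosine(x, (2.0, 4.0, 0.0))   # a scaled copy of x: cosine 1
c_orth = cosine(x, (0.0, 0.0, 5.0))   # no terms in common: cosine 0
```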

In general, terms with high document frequencies are poor at
discriminating among documents. It has been found that good terms
occur in few documents. However, the best terms are those that
have medium document frequencies (not too high, not too low). Also,
among terms occurring in the same number of documents, those with a
higher variance are better. The
``goodness'' of a term can be computed as follows: the {\em
  compactness} of a set of document vectors $V = V_1,\ldots, V_m$ is
defined as $\Sigma_{i,j \in V \land i \neq j} Cosine(V_i,V_j)$ (this
can be normalized by dividing by $m$, the size of $V$; $m^2$ is used
in some formulas). When all
documents are similar, compactness is high and retrieval is
difficult. If we compute the compactness over $V$ after taking a term
out of the document vectors, the compactness may decrease, remain the
same or increase. If the compactness decreases, the term is a good
term for distinguishing among documents. This implies that we can use
this also as part of the weight calculation, perhaps instead of IDF:
let $V_k$ be $V$ calculated after taking term $k$ out; then the {\em
  discriminant power} of term $k$ is the difference in compactness
between $V$ and $V_k$. Note that to calculate the compactness we need
to compare all pairs of documents, and there are $\binom{m}{2}$
pairs. We can calculate the {\em centroid} of a document
set as a representation of the {\em average document} as
$D^* = \frac{\Sigma_{i=1}^m d_i}{m}$, where the sum is vector
sum. Then the formula for compactness above can be substituted by
$\Sigma_{i=1}^m Cosine(V_i,D^*)$. 
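The centroid-based form of compactness can be sketched as follows (the vectors are toy data; a set of near-parallel documents should score higher than a diverse one):

```python
import math

def cosine(x, y):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) *
                  math.sqrt(sum(b * b for b in y)))

def centroid(vs):
    """Component-wise average of a set of document vectors."""
    m = len(vs)
    return [sum(v[i] for v in vs) / m for i in range(len(vs[0]))]

def compactness(vs):
    """Sum of cosines of each document with the centroid."""
    c = centroid(vs)
    return sum(cosine(v, c) for v in vs)

similar = [(1.0, 0.1), (1.0, 0.2), (1.0, 0.15)]   # nearly parallel documents
diverse = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]    # spread-out documents
hi = compactness(similar)
lo = compactness(diverse)
```

A collection of near-identical documents approaches the maximum compactness $m$; the more diverse set falls well below it, and retrieval within it is correspondingly easier.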

In the SMART system, the weighting schema is {\em generic}: the
formula $$\frac{freq_{td} \times discrim_t}{norm}$$ is used, and every
one of the three factors in it can be specified in a number of ways:
\be
\item $freq_{td}$ can be $\{0,1\}$,
$tf_{td}$, $tf_{td}$ normalized by max ($\frac{tf_{td}}{max_{t' \in
    T}tf_{t',d}}$), or augmented ($\frac{1}{2}
  +\frac{1}{2}\frac{tf_{td}}{max_{t' \in T}tf_{t',d}}$), or
  logarithmic ($\log tf_{td} + 1$).
\item $discrim_t$ can be logarithmic ($\log
  \frac{\card{D}}{\card{D_t}}$),  probabilistic ($\log \frac{\card{D}
  - \card{D_t}}{\card{D_t}}$) or frequency-based ($\frac{1}{\card{D_t}}$).
\item $norm$ can be a sum ($\Sigma_{V_i \in V} V_i$), a cosine
  ($\sqrt{\Sigma_{V_i \in V} V_i^2}$) or a max ($max_{V_i \in V} V_i$)
\ee

The set of terms for a corpus should ideally be {\em exhaustive} (have 
terms for all the topics any user may want to search for) and {\em
  specific} (have terms that help identify the relevant documents for
any topic). However, both measures are in tension; there exists a
trade-off between them. This is similar to the trade-off between
precision and recall. In effect, if an index is exhaustive recall is
high but specificity may suffer (if we associate many keywords with a
document, we increase its chances of being retrieved, but precision
may go down); if the index is not very exhaustive
we can achieve high precision, but recall may not be very good. Note
that this analysis concerns the corpus (i.e. the documents); if we
look at the user's queries, the set of terms used there may be
different. Thus, there may be a mismatch between terms used in queries
and terms in the index; this is called the {\em vocabulary
  mismatch}. This explains why choosing the right set of keywords is
difficult. In the WWW, where the corpus is open-ended and dynamic, 
choosing the right set of keywords is even more difficult.

One big advantage of using a numeric similarity measure
is that the documents retrieved can be {\em ranked} or {\em ordered}
according to this measure. When the number of documents is large
(as is usual), the user can focus on only the top $k$ results (or the system
may retrieve only the top $k$ results). 

Another common query technique is to
allow the user to enter words and Boolean combinations (Boolean
query). Answering a query, in this context, means to retrieve all
documents where the terms appear and combining them adequately (so AND
of two terms leads to the intersection of two document sets; OR of two
terms leads to the union of document sets; and NOT of a term leads to
the complement of a set of documents). However, Boolean queries have
significant drawbacks: the results are not ranked, and they tend to
retrieve too few or too many results.

\subsection{Implementation: Inverted Indices}

To support the implementation of the vector-space concept, an {\em
  Inverted Index} is built. Given a collection of documents $D$, each
  document is analyzed as explained earlier (tokenized, and perhaps
  other additional steps). Thus, a document-term matrix can be
  built. Then the matrix is transposed to a term-document matrix. This
  is stored as a sorted list of terms and, for each, a list of
  document ids (the documents where the term appears). This is the
  simplest form of inverted index. One can also keep, for each term
  and document, the number of occurrences. Or, even more, one can keep
  the positions where the term appears (this makes it possible to support
  {\em proximity queries}, where the user asks that some terms appear
  near others). These indices are very useful as they support both
  Boolean searches and vector-space searches. However, this index can
  grow very large, and it may also be hard to maintain if the
  collection of documents $D$ is not static (i.e. documents can be
  edited). Note that in typical IR applications (libraries) the
  collection {\em is} static, but in modern applications (i.e. the
  Web) this is not the case at all. Therefore, a lot of research has
  gone into {\em compression} and {\em maintenance} of the index. 
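A minimal inverted index, in its simplest form (term to set of document ids), together with the Boolean set operations it supports (the documents are toy data):

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids containing it
    (the simplest inverted index: no counts, no positions)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.split():
            index[term].add(doc_id)
    return index

docs = {1: "cat sat on mat", 2: "cat ate fish", 3: "dog sat"}
index = build_index(docs)

# Boolean queries become set operations on posting lists:
cat_and_sat = index["cat"] & index["sat"]   # AND -> intersection
cat_or_dog = index["cat"] | index["dog"]    # OR  -> union
```

Keeping counts (for tf) or positions (for proximity queries) only changes what is stored per posting, not this basic structure.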

The simplest form of compression is to replace document IDs (which may
take substantial space) with {\em offsets}: if documents have IDs
1, 2, \ldots and a term appears in documents 30, 45, 48, we store the
ID of the first document (30) and then the offsets, called the {\em
  gaps}, for the others (15 and 3). This is called {\em delta
  encoding}. Then, encoding techniques can be used to represent the gaps,
which are much smaller than document IDs.
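Delta encoding of a posting list can be sketched as:

```python
def gap_encode(doc_ids):
    """Store the first id, then the gap to each following id."""
    gaps = [doc_ids[0]]
    for prev, cur in zip(doc_ids, doc_ids[1:]):
        gaps.append(cur - prev)
    return gaps

def gap_decode(gaps):
    """Rebuild the original sorted posting list from the gaps."""
    ids, total = [], 0
    for g in gaps:
        total += g
        ids.append(total)
    return ids

encoded = gap_encode([30, 45, 48])   # [30, 15, 3]
```

The gaps are then fed to a variable-length code, which spends fewer bits on the small numbers that dominate after this transformation.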

As for maintenance, note that changes in a single document may cause a
lot of changes throughout the index (as the changes may affect many
terms). This is especially true when the index keeps extra information
like term position. Thus, in many cases inverted indices are not kept
up to date; they are updated in batch. One technique is to create
another, smaller index kept in memory made up exclusively of the
changes; this is called a {\em stop-press} index. This index is used
by the system to modify answers obtained by consulting the main index
as follows: the stop-press index is sorted by document, and contains
terms with a mark to denote whether a term is added or deleted. When
the search for the main index returns a set of documents, these
documents are filtered by the stop-press index. When the stop-press
index grows to a certain size, it is sorted by term and purge-merged
into the main index.

\subsection{Evaluation of IR systems}
To evaluate the performance of an IR system, two main metrics are
used: {\em recall} and {\em precision}. Recall is the ratio of
relevant documents that are retrieved to the total number of relevant
documents (i.e. the ratio of relevant documents retrieved). Precision
is the ratio of relevant documents that are retrieved to the total
number of retrieved documents (i.e. the fraction of documents
retrieved that is actually relevant). Formally, for a 
query $q$ and a set of documents $D$, let $D_q \subset D$ be all the
documents that are relevant to $q$, and let $A_q$ be the answer that a
system gives to $q$ in $D$. Then the recall $\rho$ is $\frac{\card{A_q
    \inters D_q}}{\card{D_q}}$, while the precision $\pi$ is $\frac{\card{A_q
    \inters D_q}}{\card{A_q}}$. 

We can also think of precision as the conditional probability that a
document is relevant, given that it is retrieved; and recall is the
conditional probability that a document is retrieved, given that it is
relevant. 

It is common that when recall increases, precision decreases. That is,
when we retrieve more relevant documents, we also tend to retrieve
more irrelevant documents. By convention, if no documents are returned
($A_q = \emptyset$), precision is considered 1 but recall is
considered 0. As $A_q$ grows bigger, recall will increase, but
precision is likely to decrease.

Another measure sometimes used is a combination of precision and
recall, called $F$ (for {\em F-measure}) or similar, and designed as
the harmonic mean of precision and recall. Some authors use the formula

$$F = \frac{1}{\frac{(\frac{1}{\rho}) + (\frac{1}{\pi})}{2}} =
\frac{2}{(\frac{1}{\rho}) + (\frac{1}{\pi})} = \frac{2 \times \rho
  \times \pi}{\rho + \pi}$$  

Other authors use $F = \frac{\rho \times \pi}{\rho + \pi}$  instead.
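Precision, recall, and the harmonic-mean $F$ can be computed directly from the retrieved and relevant sets (toy sets below; the empty-answer convention stated above is applied):

```python
def prf(retrieved, relevant):
    """Precision, recall, and the harmonic-mean F measure."""
    hits = len(retrieved & relevant)
    # Convention: an empty answer has precision 1 (and recall 0).
    precision = hits / len(retrieved) if retrieved else 1.0
    recall = hits / len(relevant)
    f = 2 * precision * recall / (precision + recall) if hits else 0.0
    return precision, recall, f

retrieved = {1, 2, 3, 4}            # what the system returned (A_q)
relevant = {2, 4, 5, 6, 7, 8}       # what is actually relevant (D_q)
p, r, f = prf(retrieved, relevant)  # 2 hits: p = 2/4, r = 2/6
```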

Note that all these measures depend on knowing $D_q \subset D$, the
set of all the documents that are {\em truly} relevant to $q$. How is
this determined? One would have to define {\em relevance} for this,
which is very problematic. For once, relevance seems to be a personal
thing (what is relevant to one user may not be to another), but we
would like a measure of relevant that is independent of any particular
user. The TREC conferences provided with a corpus that was carefully
annotated by a set of experts so that everyone could measure their
systems in a level playing field. However, the effort required to
fully annotate large collections, and to ensure that annotations form
a consensus, is so large that it is very infrequent to have such
collection. There is no definite answer for this; therefore, the
measures above are idealizations. 

\subsection{Relevance Feedback}
In order to improve a system's performance, {\em relevance feedback}
is used. When the user poses a query Q to the system, the answer set R
is generated. The user is asked to decide, for each document in R,
whether it is relevant or not. This partitions R into two sets, RR
(retrieved relevant documents), and RI (retrieved irrelevant
documents). The original query Q is then modified using this
information. A usual modification is
$$Q' = Q + C_1 \times \Sigma_{D_i \in RR} D_i - C_2 \times \Sigma_{D_j \in
  RI} D_j$$ 

where $C_1, C_2$ are constants. A refinement of this technique assumes
that all documents in RR are somewhat clustered (in the vector space);
therefore, a {\em centroid} or average document can be computed. This
is what is added to the query, in effect moving the query towards this
centroid (this is what is usually meant by {\em query refinement} {\bf
  in this context}). However, it is usually {\em not} assumed that RI is
clustered, and therefore no centroid is calculated for RI. One
possibility is to take the highest ranked document in RI and use it to
move the query {\em away from} the negative feedback. Note that one
important difference between $Q'$ and $Q$ is that queries are usually
very sparse; as we modify the original query, we get more and more
dense vectors. Hence, computing cosines and other measures is usually
much harder for $Q'$ than it is for $Q$. This is probably one of the
reasons that relevance feedback is usually not implemented. Also, even
though the process can in principle be repeated for several cycles,
experiments seem to show diminishing returns after only a few
iterations. 
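A sketch of this kind of feedback update (essentially Rocchio's method; the constants and the use of centroids for both RR and RI here are illustrative simplifications):

```python
def rocchio(query, rel, irrel, c1=0.5, c2=0.25):
    """Move the query toward the centroid of the relevant documents
    and away from the centroid of the irrelevant ones:
    Q' = Q + c1 * mean(RR) - c2 * mean(RI)."""
    def mean(vs):
        return [sum(v[i] for v in vs) / len(vs) for i in range(len(query))]
    r = mean(rel) if rel else [0.0] * len(query)
    i = mean(irrel) if irrel else [0.0] * len(query)
    return [q + c1 * a - c2 * b for q, a, b in zip(query, r, i)]

q = [1.0, 0.0, 0.0]                       # sparse original query
rr = [[0.0, 1.0, 0.0], [0.0, 1.0, 1.0]]   # retrieved relevant documents
ri = [[0.0, 0.0, 1.0]]                    # retrieved irrelevant documents
q2 = rocchio(q, rr, ri)
```

Note how the updated query has picked up weight on the second term (shared by the relevant documents) while the positive and negative contributions on the third term cancel; the result is denser than the original query, which is exactly the computational cost mentioned above.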

However, it may be hard to obtain feedback from the user (and hence,
RR and RI).  This is due to several facts:  
\bi
\item casual users may not want to
invest the time and effort to rate the documents in an answer;
\item even if they want to do so, they may not be able to do it very
  well. This is due to the fact that, even if (as assumed by much
  cognitive science research) the user has internally a {\em
  prototype} document in mind, the actual document may resemble/be
  different from the prototype in a variety of ways;
\item even when answers are obtained from a user, they usually do not
  behave as a metric (i.e. they are not formally a measure), which
  makes them difficult to use (for instance, it makes them difficult
  to use in learning algorithms).
\ei

Some systems use the following trick: the first $k$ documents
in the original answer (where $k$ is small) are RR, and RI is empty
(not used). This obviates the need to ask the user, and is based on
the intuition that, for a good system, the first $k$ answers are very
likely to be relevant. Note, however, that it is not a good idea to
let all the words in RR contribute to the new query, since the wrong
word may offset the benefits of many right words. Hence, the top 10 or
20 words in decreasing IDF order may be picked.
Probabilistic methods are used to improve on the idea of relevance
feedback; however, the simple ones assume that terms are independent
(which is not always true) while the ones that do not assume
independence are quite complex. Thus, we will not deal with them here.

One interesting possibility is to use RR and RI to suggest to the user
new keywords that can be useful for similar searches (and keywords to
avoid), so as to widen the user's vocabulary.

\subsection{Dimensionality Reduction}
One of the problems with the vector-space approach outlined before is
that the number of dimensions in the space equals the number of terms
in all documents. However, any given document is unlikely to contain
all terms, or even a large part of them. As a result, each vector
representing a document is usually sparse, with many zeroes for all
the terms not appearing in a given document. Also, when we make each
term a dimension we are implicitly assuming that terms are independent
of each other ({\em orthogonal} to one another). However, this is
rarely the case: two synonyms, for instance, should clearly share a
single dimension. Also, some terms may be highly correlated to
others, and this is masked by giving each of them independent
dimensions.  Because of this, there are several techniques that
attempt {\em dimensionality reduction}, that is, to reduce the number
of dimensions used to represent documents by switching from terms to
something else. Note that there is also a computational advantage in
working with a denser (less sparse) representation, which makes the
effort of dimensionality reduction worthwhile.


\subsubsection{Latent Semantic Indexing (LSI)}
One well known technique for reducing dimensions is called {\em latent
  semantic indexing} and is based on a linear algebra technique,
  called {\em Singular Value Decomposition (SVD)}. 

Recall that a symmetric matrix can be decomposed by {\em
  eigen-analysis} into the product of two matrices, one containing
  eigenvectors, the other one containing eigenvalues, which show a
  breakdown of the original data into linearly independent
  components. If some of these components are very small, in absolute
  terms or compared to other components, they can be ignored to build
  an approximate model of the original data. An arbitrary rectangular
  matrix can be similarly decomposed by SVD into three matrices, two
  containing singular vectors and one containing singular
  values. Again, some of those values may be small, in which case
  they can be ignored.

We organize the
  information of a set of documents $D$ in a matrix as follows: a {\em
  term-document matrix A} is built with $m$ columns (where $m =
  \card{D}$) and $n$ rows (where $n = \card{T}$, or number of
  terms). Each entry $a_{ij}$ denotes $tf(i,j)$ (number of occurrences of
  term $i$ in document $j$), or a TFIDF measure. Then, A is decomposed
  using SVD as follows: $$A = U\Sigma V^T$$ where 
\bi
\item $U$ is an $n \times r$ matrix; $\Sigma$ is an $r \times r$
  matrix; and $V^T$ is an $r \times m$ matrix.
\item $U^TU = I$, $V^TV = I$ ($U$ and $V$ are orthogonal matrices,
  that is, all columns are orthonormal: independent vectors of unit
  length). These columns are called the {\em left (U) or right (V)
  singular vectors of A}. 
\item $\Sigma =
  diag(\sigma_1,\ldots,\sigma_n)$, with $\sigma_i \geq 0$, $1 \leq i
  \leq n$. The $\sigma_i$ are called the {\em singular values}.
\ei

This decomposition has some very important properties: if $rank(A) =
r$, then 
\bi
\item $\sigma_i > 0$ for $1 \leq i \leq r$ and $\sigma_j = 0$, for
  $r+1 \leq j \leq n$. Moreover, this {\em singular values} are the
  nonnegative square roots of the $n$ eigenvalues of $AA^T$. Also, SVD
  can be carried out such that $\sigma_1 \geq \sigma_2 \geq \ldots
  \geq \sigma_r > \sigma_{r+1} = \ldots = \sigma_n = 0$.
\item The Frobenius norm is $||A||_F = \sqrt{\Sigma_{t,d}
  a[t,d]^2}$. Then, $|| A ||^2_F = \sigma_1^2 + \ldots + \sigma_n^2$.
%  $||A||^2_2 = \sigma_1$. 
\item The first $r$ columns of $U$ and $V$ define the orthonormal
  eigenvectors associated with the $r$ nonzero eigenvalues of $AA^T$
  and $A^TA$, respectively. Note that $AA^T$ is a $m$ by $m$ matrix
  capturing inter-document similarities, and $A^TA$ is a $n$ by $n$
  matrix capturing inter-terms similarities. Thus, if we compute $V
  \Sigma^2V^T$ (which is still a $m \times m$ matrix) we get a better
  approximation to  inter-document similarities; and if we compute $U
  \Sigma^2 U^T$ (which is still a $n \times n$ matrix) we get a better
  approximation to inter-term similarities).
\item If $R(A)$ denotes the range of $A$, $R(A) =
  span\{u_1,\ldots,u_r\}$, where $U = [u_1,\ldots,u_m]$.
\item If $N(A)$ denotes the null space of $A$, $N(A) =
  span\{v_{r+1},\ldots,v_n\}$, where $V = [v_1,\ldots,v_n]$.
\item Let $A_k$ be the $m \times k$ matrix constructed from the
  k-largest singular triplets of A ($k < r$); $A_k = \Sigma_{i=1}^k u_i
  \sigma_i v_i^T$. Then $A_k$ is the closest rank-k matrix to $A$
  (measured by any invariant norm), that is, the best approximation to
  $A$. Using the Frobenius norm as above, we get the 
  following: $||A||^2_F = \sigma_1^2 + \ldots + \sigma^2_r$; and $||A
  - A_k||^2_F = min_{rank(B) = k} ||A - B||^2_F =  \sigma^2_{k+1} +
  \ldots + \sigma^2_r$. 
\ei
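These properties can be checked numerically. The following sketch
(using numpy, with a made-up $4 \times 3$ term-document matrix)
verifies the Frobenius identity and the best rank-$k$ approximation
error formula:

```python
import numpy as np

# Made-up 4x3 term-document matrix (n = 4 terms, m = 3 documents).
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])

# Thin SVD: A = U diag(s) Vt, singular values in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Frobenius identity: ||A||_F^2 equals the sum of squared singular values.
assert np.isclose(np.sum(A**2), np.sum(s**2))

# Rank-k approximation from the k largest singular triplets.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: the squared Frobenius error of the best rank-k
# approximation is the sum of the discarded squared singular values.
err = np.sum((A - A_k)**2)
assert np.isclose(err, np.sum(s[k:]**2))
```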

Thanks to this, we can substitute $A_k$ for $A$, choosing a suitable $k$,
and reduce the dimensionality of the original matrix. Each document is
represented by a vector in $k$-space, instead of $n$-space. Each of
the $k$ factors is uncorrelated with the others, but they do {\em not}
correspond exactly to the original terms. Rather, they capture hidden
correlations among terms and documents; because only $k$ dimensions are
retained, something is lost. Hopefully, what is lost is noise, since
we picked the most important ``patterns''. Terms that were not close
in the original n-space may be close now in the k-space because they
co-occur frequently in documents deemed similar. Consider the terms
{\em car, driver, automobile} and {\em elephant}. If {\em car} and
{\em automobile} co-occur with many of the same words, they will be
mapped very close (or to the same representation) in the k-space. The
word {\em driver}, which has some relation, will be mapped to a
somewhat close representation. The word {\em elephant}, which is
unrelated, will be mapped to a different representation. In
particular,
\bi
\item the dot product of two rows of $A_k$ reflects the extent to
  which two terms have a similar pattern of occurrence across the set
  of documents. Note that $A_k A_k^T = U_k \Sigma_k^2 U_k^T$; the rows of
  $U_k \Sigma_k$ give the coordinates for terms.
%[(i,j)] = TS[i] TS[j]
\item the dot product of two columns of $A_k$ reflects the extent to
  which two documents have a similar profile of terms. Since $A_k^T
  A_k = V_k \Sigma_k^2 V_k^T$, one can consider the rows of $V_k \Sigma_k$
  as coordinates for documents.
\ei

A query can also be represented in the k-dimensional space, since it
is represented as a vector in n-dimensional space to start with:

$\hat{q}= q^T U_k \Sigma^{-1}_k$.

Then the cosine measure in the $k$-space can be applied.
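Putting the pieces together, a minimal LSI sketch in numpy (the tiny
term-document matrix is invented for illustration) truncates the SVD,
folds a query into the $k$-space with $\hat{q} = q^T U_k \Sigma_k^{-1}$,
and compares it to the documents with the cosine measure:

```python
import numpy as np

# Invented term-document matrix; rows are the terms
# (car, automobile, driver, elephant), columns are three documents.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
U_k, S_k = U[:, :k], np.diag(s[:k])

# Rows of V_k Sigma_k are the document coordinates in k-space.
docs_k = Vt[:k, :].T @ S_k

# Fold a query (the single term "automobile") into k-space:
# q_hat = q^T U_k Sigma_k^{-1}.
q = np.array([0.0, 1.0, 0.0, 0.0])
q_k = q @ U_k @ np.linalg.inv(S_k)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = [cosine(q_k, d) for d in docs_k]
# The two "car" documents score high; the unrelated "elephant"
# document scores near zero.
```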

One final note: because the original matrix $A$ is usually very large,
computing SVD can be quite costly. However, special techniques have
been developed that take advantage of the fact that $A$ is, very
often, sparse. Another headache is that if the collection of documents
is not static, adding or deleting documents (or editing existing
documents) may alter (perhaps substantially) the underlying
distribution of keywords across documents, i.e. many values in $A$ can
be affected and changed by changes in the corpus. Thus, it may be
necessary to recompute SVD after significant changes.

\subsection{Thesauri}
%open (uncontrolled) vocabulary vs. closed (controlled).
%context of words: mediates their interpretation, depends on audience,
%othe words, intended usage.
%exhaustive enough to capture all possible topics, specific enough to
%distinguish one topic from others.
%basic assumption: all documents have equal about-ness: the apriori
%probability of any document in a corpus being relevant is the
%same. Since longer docs may mention more topics and be more relevant,
%we normalize by length.
%another assumption: all documents are built up from a combination of
%paragraphs, which the basic unit that has about-ness.
%This is not to say that words, phrases or sentences have no meaning,
%only that they do not provide enough context for that meaning.
%relationships among paragraphs: sequential, hierarchical
%(subsections, sections, chapters); footnote-of, reference (to other
%paragraphs); citation (to other documents); pre-requisite (Previous
%definition, etc.), argument, proof, explanation, etc. Hyperlinks are
%a mix of both reference and citation; they make possible hypertext
%(non linear text).
%note that, as a result, every document is about one topic, but there
%may be several (sub)topics also mentioned in a document.
%documents also carry metadata. Queries may refer to both text and
%metadata. This brings a hybrid of database and IR techniques.
%Note that some attributes are not clearly classified as metadata or
%text. For instance, being a children's journal comes from the
%contents, but may also be how a journal is classified.
%corpus: collection of documents.

%finding out about (FOA) process: query-answer-feedback (new query)
%-new answer, etc. Answer may be not full document, but a succinct
%representation of document (abstract, title, paragraph) so that user
%may provide quick feedback.


The vector based approach has the problem of being based on the
appearance of particular words. While SVD helps a bit, there are still
problems for this approach because of synonyms, homonyms, and lack of
context. One route that has been tried to attack this problem is a
{\em knowledge based} approach, the use of a {\em thesaurus}. A
thesaurus is a list of words together with a set of relationships
among those words. Unlike a {\em dictionary}, which contains a list of
words with definitions for each, a thesaurus does not contain
definitions. Instead, it gives a bit of the meaning of the words by
giving their relationships with other words. Note that this falls
short of an {\em ontology}, where we would have relationships among
words {\em and} definitions in some formal or semi-formal language,
like scripts or frames. In these, a set of slots or properties can be
filled with values or with expressions that constrain the possible
values; reasoning methods are usually also
provided. A thesaurus only provides relationships among words, not
definitions, and no reasoning system. A {\em taxonomy} is another
organization in which words (concepts) are related, but the only
relationships that are included are subclass and superclass.

Strictly speaking, a thesaurus is about a set of {\em concepts}, which
are different from {\em terms}. The first relationship in the
thesaurus should be the one binding concepts to terms and
vice-versa. This relationship groups all synonyms and quasi-synonyms
together. For instance, {\em education achievement} and {\em school
  success} can be considered synonyms. {\em Academic achievement} may
be considered a quasi-synonym. All three terms may be grouped under a
concept. The concept can be given a name that corresponds to one of
the terms (e.g. {\em education achievement}) or a brand new name.
Sometimes, this consolidation leads to grouping related words: {\em
  airport} and {\em harbor} may be grouped under {\em traffic
  station}. However, this hides important differences that may be
important for some queries. Thus, this consolidation may be left for a
higher level (see later).

There are some other relationships that all thesauri have: a {\em broader
  term} (or {\em more general term}, or {\em superclass}) relationship,
  and its inverse, the {\em narrower term} (or {\em more concrete}, or
  {\em subclass}) relationship. These relationships go from word A to
  word B if word A denotes a class or concept that is more general
  than that of word B in some world view or ontology. Besides this,
  a {\em synonym} relationship may connect synonyms, and an {\em antonym}
  relationship may connect antonyms. Other semantic relationships
  may also exist, like {\em related term}. Some thesauri will, at
  higher levels in the hierarchy of terms, have a {\em theme} or
  {\em topic} relationship that relates abstract words (like ``war'')
  with words that are connected to some aspect of the topic that the
  abstract word represents (like ``weapons'', ``strategy'',
  ``history''). Note that this connection is very informal and may link
  words that are only somewhat related; some words are linked to other
  words {\em only in some context}.

While {\em nouns} can be easily organized into a hierarchy, {\em
  adjectives} cannot be so easily categorized. The experience of {\em
  WordNet} (\cite{FELL}) shows that adjectives are more like a mesh or
  network, often related by {\em antonymy} (opposition).

There are two main approaches to developing thesauri: hand-crafted
and automatically (machine) constructed.
Developing a thesaurus by hand is very labor-intensive, and there are no
purely formal methods. One method that is often used is based on the idea
of {\em facets}. Facets are the elemental components or features of a
concept, the properties that best define the concept. In frame-based
representations, facets are the slots:

\begin{verbatim}
Concept: Hepatitis A
  Pathological process:  inflammation
  Body system:           liver
  Cause:                 infection
  Substance/organism:    hepatitis A virus
\end{verbatim}

A facet is used to refer not just to the property, but to the {\em
domain} of that property, the set of possible values it may
have. Often, such values are concepts too.

Facets are used to build indices and to organize the concepts in the
thesaurus. By having a list of facets, concepts can be developed by
choosing which facets apply. Also, relationships among concepts can be
discovered by analyzing similarities in facets. Moreover, hierarchies
can be built with inheritance by: looking for concepts with a subset
(or a superset) of the facets of a given one, or looking for concepts
with the same facets as a given one and more general fillers. Finally,
facets can be used to build user interfaces and facilitate search for
desired results.
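As a sketch of how facets support this reasoning, one can represent
concepts as facet-value maps and test for subsumption by facet
inclusion (the concepts and facet names below are invented, loosely
following the Hepatitis A example above):

```python
# A minimal sketch of facet-based concepts (names and facets invented).
# A concept is a mapping from facet names to values; one concept is a
# candidate broader term of another if its facets are a subset of the
# other's, with the same fillers.
concepts = {
    "hepatitis": {"pathological process": "inflammation",
                  "body system": "liver"},
    "hepatitis A": {"pathological process": "inflammation",
                    "body system": "liver",
                    "cause": "infection",
                    "substance/organism": "hepatitis A virus"},
}

def broader(a, b):
    """True if concept a is broader than concept b: every facet of a
    appears in b with the same value, and b has extra facets."""
    return all(facet in b and b[facet] == value
               for facet, value in a.items()) and len(a) < len(b)

print(broader(concepts["hepatitis"], concepts["hepatitis A"]))  # True
```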

Automatically constructed thesauri are usually built using some
machine learning mechanism. Statistical methods are based on
co-occurrence, but this approach has limitations, as it is not clear
what the semantic relationship among co-occurring words is. Also,
co-occurrence is defined based on a {\em window size}, a stretch of text
within which words are considered to co-occur. The size of this
window has a big effect on the thesaurus built, and it is not known
what the optimal window size is. Hence, some people have tried more
linguistically-based methods. These methods analyze the syntax of
phrases and look at words that modify the head of a noun phrase, or
are arguments of the same verb, etc. However, these methods are
heuristic and also have limitations.
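A minimal sketch of the statistical, window-based approach (the sample
text and window size are arbitrary choices, which, as noted, strongly
affect the result):

```python
from collections import Counter

# Toy text and window; words within WINDOW positions "co-occur".
text = ("the car hit the tree the driver left the car "
        "the automobile dealer sold the car to the driver").split()
WINDOW = 3

pairs = Counter()
for i, w in enumerate(text):
    for j in range(i + 1, min(i + WINDOW + 1, len(text))):
        # Skip identical words and the stopword "the".
        if w != text[j] and w != "the" and text[j] != "the":
            pairs[tuple(sorted((w, text[j])))] += 1

# The most frequent pairs are candidate "related terms".
for pair, count in pairs.most_common(3):
    print(pair, count)
```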

Thesauri are used as follows:
\bi
\item A thesaurus provides a {\em controlled vocabulary}, instead of an open
  one, making indexing and search more efficient. This is especially
  useful for control of multiple, distributed databases. A
  multi-lingual thesaurus can provide translations (mappings) among
  different terms in different data repositories, to support query
  rewriting. 
\item A thesaurus allows for more flexible search: instead of searching
  for a given term W, we may search for terms narrower than W,
  terms broader than W, synonyms of W, or words somehow related to W,
  so that we do not miss relevant documents that happen {\em not to
  mention} W, or mention it only very few times. This is called
  {\em Query Expansion}, and can be done
  by presenting the user with an interface showing the thesaurus (this
  provides guidance to the user and shows levels of specificity), or
  by automatically rewriting the query (always, or only in certain
  cases, like when the answer to the original query is deemed
  unsatisfactory).  An important part of query expansion is {\em word
  sense disambiguation}. This can be done by introducing as many
  terms as there are senses of a word, and giving enough information
  with every term to distinguish the sense: {\em Administration 1
  (management); Administration 2 (drugs)}; {\em Discharge 1 (electrical);
  Discharge 2 (from hospital or program); Discharge 3 (from employment or
  organization; SYN: dismissal); Discharge 4 (water flow into
  river)}. Note that synonym expansion helps recall (but may hinder
  precision), while homonym disambiguation helps precision (but may hinder
  recall). Note also that query expansion can be done prior to running
  the query {\em or} after the initial query has been run and results
  have been obtained; then relevance feedback (obtained manually or
  automatically) is used to determine relevant documents, and keywords
  appearing in them (all keywords, or the most common) are used for
  expansion. 
\ei
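A toy sketch of thesaurus-based query expansion (the thesaurus entries
are invented for illustration):

```python
# A minimal sketch of query expansion with a toy thesaurus.
thesaurus = {
    "car": {"synonyms": ["automobile"],
            "narrower": ["convertible", "sedan"],
            "broader": ["vehicle"]},
}

def expand(term, relations=("synonyms",)):
    """Return the term plus its related terms for the chosen relations."""
    entry = thesaurus.get(term, {})
    expanded = [term]
    for rel in relations:
        expanded.extend(entry.get(rel, []))
    return expanded

print(expand("car"))                            # ['car', 'automobile']
print(expand("car", ("synonyms", "narrower")))  # adds the narrower terms
```

Expanding with synonyms trades precision for recall; which relations to
follow (and how far) is the main tuning knob.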

The big problems with thesauri are:
\bi
\item Thesauri incorporate a classification/ontology in the
  relationships. There is usually more than one point of view of all
  but the simplest domains; different users may have different views
  and a thesaurus represents but one. When a domain is standardized,
  this issue does not come up, but many domains are not. Since most
  thesauri are {\em fixed} in the classification (i.e. terms cannot be
  changed from category to category) there is no way to overcome this
  problem. 
\item Thesauri are limited; they cannot capture all semantic
  relationships of a natural language. Even for those relationships
  they do capture, the situation may be more complex than the thesaurus
  indicates: some terms A and B may be synonyms only in a certain
  context. Most thesauri do not capture context. There are going to be
  failures, then, when using a thesaurus, and there is no known
  mechanism to deal with such failures.
\ei

Thesauri could help web search, but few engines use them (or admit to
it). Some authors think that Google uses synonym expansion, but this is
not documented. At one time, HotBot (www.hotbot.com) used homonym
disambiguation, as did Oingo (www.oingo.com, now defunct; curiously,
it was acquired by Google).

\subsection{Beyond the Index}
Sometimes, additional information about documents is available,
especially if the document is in electronic format. This
may include syntactic metadata (size of file, type of file) and
semantic metadata (author, title, etc.). For certain documents, {\em
  bibliographic information} may be available (for instance, if the
document is a journal paper: the journal name, date of appearance,
volume, etc.). Even when metadata is not explicitly present, Information
Extraction techniques (see next chapter) can be used to produce some
information. Capturing and using this metadata allows use of database
techniques. 

Although metadata can vary enormously, there have been
attempts at standards; the {\em Dublin core} is a widely used standard
for documents.

{\bf Citations} are especially important forms of additional
  information. This includes hyperlinks on web pages (see
  Section~\ref{web}), but is not limited to them. Bibliographies at the
  end of technical papers, references in legal briefs, etc. are all
  forms of citations. This is very important information, and has led
  to the field of {\em bibliometrics}. This field builds citation
  graphs (where documents are nodes, and citations yield directed
  links), which can be used for several types of analysis, like {\em
  impact analysis} (finding papers with major impact, usually,
  identified with in-degree), co-citation studies, etc.
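A minimal sketch of impact analysis over a toy citation graph (the
papers and citations are invented), identifying impact with in-degree:

```python
from collections import Counter

# Toy citation graph; each edge goes citing -> cited.
citations = [("p1", "p3"), ("p2", "p3"), ("p4", "p3"),
             ("p2", "p1"), ("p4", "p1")]

# In-degree of each paper: how many times it is cited.
in_degree = Counter(cited for _, cited in citations)

# The paper with the highest in-degree has the highest "impact".
print(in_degree.most_common(1))  # [('p3', 3)]
```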

Another important extension to the basic vector model is proposed  by
Hearst (\cite{HP}), and is known as {\em text tiling}. This approach
is directed to {\em full length documents}, which are defined in
\cite{HP} as ``unabstracted expository text which can be of any
length... It excludes documents composed of short news bytes or any
other disjoint, although lengthy, text''. Thus, the idea is a coherent
document that discusses some topic in depth, therefore mentioning,
along the way, other topics, that may be subtopics of the main topic
or related in a variety of ways. In this document, appearance of some
keywords may be scarce, and therefore the vector encoding of the
document will fail to distinguish the main topic from other (related)
subtopics. Hearst's idea is to look at blocks of text and determine {\em
  where} words appear: if a word appears often in a block, and rarely
in the rest of the document, then it is a subtopic; if a word appears
rarely in any single block, but significantly in the document overall
(i.e. it is distributed uniformly in the document), it is likely to
denote the main topic. Thus, a long document is analyzed in two steps:
first, it
is broken down into blocks, and all pairs of adjacent blocks are
compared and given a similarity measure. The measure is simply the
cosine measure, with terms given a version of the tf-idf weight: the
frequency of a term within a block is compared to the frequency of the
term in the whole document. This helps distinguish between global and
local scope of the term. Then, the sequence of
similarity values is analyzed, looking for peaks and valleys
(smoothing is used to get rid of local minima). A peak
indicates that adjacent blocks are coherent, and probably denotes a
{\em motivated segment}, i.e. a part of the text that deals with a
subtopic. A valley denotes a transition between motivated
segments. The only parameter of this approach is the size of the
block; the value may vary from text to text. As a heuristic, \cite{HP}
uses the average paragraph length of the document (in number of
sentences). The system is called Text Tiling because a graphical
display of the document is created by the system, using tiles to
denote subtopics.
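The core of the approach can be sketched as follows (a toy text and
block size are used; Hearst's actual system also weights terms with a
tf-idf-like scheme and smooths the similarity curve):

```python
import math
from collections import Counter

# Split a text into fixed-size blocks, score adjacent blocks with the
# cosine measure over term counts, and treat low-similarity points
# (valleys) as candidate subtopic boundaries.
text = ("cats purr cats sleep cats play "
        "dogs bark dogs run dogs dig").split()
BLOCK = 3  # words per block (toy value)

blocks = [Counter(text[i:i + BLOCK]) for i in range(0, len(text), BLOCK)]

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

sims = [cosine(blocks[i], blocks[i + 1]) for i in range(len(blocks) - 1)]

# The deepest valley marks the "cats" -> "dogs" topic shift.
boundary = sims.index(min(sims))
print(boundary)  # 1 (the valley sits between blocks 1 and 2)
```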

The method allows us to specify queries in which a topic is denoted
{\em in the context of} another topic: topic A is in the context of
topic B if B is the main topic of the document and A is the topic of a
block in the document. This is different from both A and B being
mentioned in the document, perhaps in unrelated ways (this is likely
to happen in full-text documents, as opposed to abstracts or short
documents). 
This method also allows us to deal with {\em passing
  references}: when all references to a topic are within a sentence or
a very short block, chances are the topic is not truly discussed in the
document, i.e. it is used in a comparison or example and not really
dealt with. 

\section{Natural Language Processing}

Summary. Mention Question Answering, Deep Analysis and Parsing, LDA
and topic allocation.

\section{Text Analysis in SQL}
Talk about Postgres.

\subsection{Text in the SQL Standard}
\bi
\item A {\tt FullText} ADT is defined.
\item It has two attributes: {\em Content} and {\em Language}.
\item Usually, the Content is a CLOB to allow for large values.
\item The Content attribute is only usable within the methods of the
  ADT. 
\item Casting: {\tt FullText\_to\_Character()}.
\item Searches: for individual words, for specific phrases, for
  patterns, broader or narrower term expansion, synonym expansion,
  context. 
\item {\tt CONTAINS} takes as argument a character string literal;
  literal values must be embedded in double quotes.
\item It returns 1 to signal TRUE, 0 to signal FALSE.
\item Additional schemas: FT\_THESAURUS schema includes tables
TERM\_DICTIONARY, TERM\_HIERARCHY, TERM\_SYNONYM and TERM\_RELATED.
\item An additional type is defined: FT\_Pattern type is a distinct
  type based on CHARACTER VARYING. It contains patterns for search
  (very complex standard).
\item Example:
\begin{verbatim}
CREATE TABLE DVD_info (
  title VARCHAR(100),
  stock_number INTEGER,
  notes FULLTEXT);

INSERT INTO DVD_info VALUES ('The Big Lebowski',
 1339,
 NEW FullText('The Dude is the best guy there is, '
     || 'a really cool guy. All you have '
     || 'to do is not ruin his favorite carpet'));

/* single-word search */
SELECT stock_number
FROM DVD_info
WHERE notes.CONTAINS('"thugs"') = 1

/* phrase search */
SELECT stock_number
FROM DVD_info
WHERE notes.CONTAINS('"historical docs"') = 1

/* context search */
SELECT stock_number
FROM DVD_info
WHERE notes.CONTAINS('"attacked"
             IN SAME SENTENCE AS "crew"') = 1

/* ranking search */
SELECT stock_number
FROM DVD_info
WHERE 1.2 < notes.RANK('"carpet"')

/* conceptual search */
SELECT stock_number
FROM DVD_info
WHERE notes.CONTAINS('IS ABOUT "horror"') = 1

/* complex search */
SELECT stock_number
FROM DVD_info
WHERE notes.CONTAINS('STEMMED FORM OF "funny"
                      IN SAME PARAGRAPH AS
                      SOUNDS LIKE "lions"') = 1
\end{verbatim}
\ei

\section{Text Analysis with Files}
Basic counting, tf/idf, how to do n-grams. Use ``From Languages to
Information'' slides.

The following assume text files. Most of the above commands for
implementing selection work on text files (some were designed with
text files in mind). 

The {\tt grep} command helps find content within a file.
Grep takes a pattern and a list of files; if some of the files are
directories and -R is used, all files under each directory are
searched, recursively. To control the search, there is a wide variety
of options:
\bi
\item -i ignores case distinctions in both the
PATTERN and the input files. 
\item -w matches only whole words (no prefixes or suffixes).
\item to look for either one of two words: egrep -w
  'PATTERN1|PATTERN2'
\item -v inverts the match.
\ei

Grep can also use regular expressions in the PATTERN.

To control the output:
\bi
\item   -n : Prefix each line of output with the 1-based line number
  within its input file. 
\item   -H Print the file name for each match. This is the default
  when there is more than one file to search. 
\ei

Some common operations on text which we did not see above:

\bi
\item To remove blank lines:
\begin{verbatim}
grep -v '^$' input.txt > output.txt
\end{verbatim}
Both grep and sed use the special pattern \^\$, which matches blank
lines. The grep -v option means print all lines except the blank ones.
\item Getting a Random Line from a file: there are several options:

\begin{verbatim}
sort -R file | head -n 1
\end{verbatim}

\begin{verbatim}
awk 'BEGIN { srand() } rand() >= 0.5 { print; exit }'  
\end{verbatim}

\begin{verbatim}
tail -$((RANDOM/(32767/`wc -l</etc/group|tr -d ' '`))) /etc/group|head -1 
\end{verbatim}

All of the options using RANDOM should be used with the understanding
that the max possible value is 32767, so it will only be random on
files that have fewer than 32,767 lines. 

\begin{verbatim}
split -l 1 < file; cat `for i in x*; do echo $RANDOM $i; done | sort -n | cut -f2 -d' ' | head -n 1`; rm x*  
\end{verbatim}

\begin{verbatim}
sed -n "$((RANDOM % $(wc -l < file) + 1))p" file
\end{verbatim}

\ei


To get most frequent line:

{\tt cat data.txt | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort
  -rn}

This transforms each line to lowercase, sorts the lines, counts the
number of identical lines, and then sorts by this count in descending
order, so the most frequent line comes first.

To do the same with individual words (get word frequencies), one trick
is to replace whitespace with newlines (so each word gets its own
line) and then apply the pipeline above again:

\begin{verbatim}
cat data.txt | tr '[:upper:]' '[:lower:]' | tr -d '[:punct:]' |
  tr ' ' '\n' | grep -v -w -f stopwords_en.txt | sort | uniq -c |
  sort -rn
\end{verbatim}

Here, stopwords from the file stopwords\_en.txt are also deleted.

To do the same with bi-grams:

\begin{verbatim}
> cat data.txt | tr '[:upper:]' '[:lower:]' | tr -d '[:punct:]' | sed 's/,//' | sed G | tr ' ' '\n' > tmp.txt
> tail -n+2 tmp.txt > tmp2.txt
> paste -d ',' tmp.txt tmp2.txt | grep -v -e "^," | grep -v -e ",$" |
sort | uniq -c | sort -rn
\end{verbatim}

The first command again puts each word on a line of its own, and adds
an extra blank line between input lines ({\tt sed G}). The second one
copies the file {\tt tmp.txt} but without the first line (starting at
line 2). The third command merges the lines of both files. Note that,
by skipping the first line of {\tt tmp2.txt}, the first line of that
file is the second word, so it gets paired with the first word (the
first line of {\tt tmp.txt}), and so on. The grep commands use the
added blank lines to make sure that words from separate lines are not
paired up. If one has text and wants to pair up all words, regardless
of line or sentence, simply leave out the {\tt sed G} command and the
{\tt grep} filters.

The same idea can be extended to tri-grams:
\begin{verbatim}
>tail -n+2 tmp2.txt > tmp3.txt
>paste -d ',' tmp.txt tmp2.txt tmp3.txt | grep -v -e "^," | grep -v -e ",$" | grep -v -e ",," | sort | uniq -c | sort -rn
\end{verbatim}
This repeats the trick but starting with {\tt tmp2.txt}, so that {\tt
  tmp3.txt} skips the first two lines (so the first line contains the
third word). This is then again used in {\tt paste}.


Operations other than selection usually make no sense on text files
(what would a join of two text files be, for instance?). When text is
organized with a markup language (HTML or XML), more operations can be
defined. But on `pure' text, Information Retrieval (IR) based analyses
are the most common.

\subsubsection{Sed}

The sed command is mostly used to replace text in a file. The
following simple sed command replaces the word ``unix'' with ``linux''
in the file:

\begin{verbatim} >sed 's/unix/linux/' file.txt
\end{verbatim}

Here the ``s'' specifies the substitution operation, the ``/''
characters are delimiters, ``unix'' is the search pattern, and
``linux'' is the replacement string.

By default, the sed command replaces only the first occurrence of the
pattern in each line; it won't replace the second, third...
occurrences. Add a number flag (1, 2, etc.) after the final delimiter
to replace the nth occurrence of the pattern in a line. The following
command replaces the second occurrence of the word ``unix'' with
``linux'' in each line:

\begin{verbatim} >sed 's/unix/linux/2' file.txt
\end{verbatim}

The substitute flag g (global replacement) tells sed to replace all
occurrences of the pattern in the line:

\begin{verbatim} >sed 's/unix/linux/g' file.txt
\end{verbatim}


Combine a number and g to replace all occurrences from the nth
occurrence of the pattern onwards. The following sed command replaces
the third, fourth, fifth... occurrences of ``unix'' with ``linux'' in
each line:

\begin{verbatim} >sed 's/unix/linux/3g' file.txt
\end{verbatim}


If the pattern or the replacement contains the delimiter character
itself, as in a URL, you must escape that character with a backslash,
otherwise the substitution won't work. As an example, to replace
``http://'' with ``www'':

\begin{verbatim} >sed 's/http:\/\//www/' file.txt
\end{verbatim}

Alternatively, you can use any delimiter other than the slash (for
example, {\tt s|http://|www|}) and avoid the escaping altogether.

There might be cases where you want to search for a pattern and
replace it by adding some extra characters to it. In such cases \&
comes in handy: \& represents the matched string.

\begin{verbatim} >sed 's/unix/{\&}/' file.txt
\end{verbatim}


replaces 'unix' with '{unix}'.

\begin{verbatim} >sed 's/unix/{\&\&}/' file.txt
\end{verbatim}


replaces 'unix' with '{unixunix}'.

You can use parentheses to denote parts of a match, and then use
\textbackslash 1, \textbackslash 2, etc. to refer to those parts. The
parentheses need to be escaped with a backslash for this to work:

\begin{verbatim} >sed 's/\(unix\)\(linux\)/\2\1/' file.txt
\end{verbatim}

changes 'unixlinux' to 'linuxunix'.
The dot ('.') is used to denote a single character:
The dot ('.') is used to denote a single character:

\begin{verbatim} >sed 's/^\(.\)\(.\)\(.\)/\3\2\1/' file.txt
\end{verbatim}

reverses the first three characters of each line.
You can restrict the sed command to replace the string on a specific
line number only. For example:

\begin{verbatim} >sed '3 s/unix/linux/' file.txt
\end{verbatim}


You can specify a range of line numbers to the sed command for
replacing a string. 

\begin{verbatim} >sed '1,3 s/unix/linux/' file.txt
\end{verbatim}


To replace text from the second line until the end of the file:

\begin{verbatim} >sed '2,$ s/unix/linux/' file.txt
\end{verbatim}


You can also give the sed command a pattern to match in a line. Only
if the pattern matches does sed look for the string to be replaced,
and only if it finds it does it perform the replacement.

\begin{verbatim} >sed '/linux/ s/unix/centos/' file.txt
\end{verbatim}


Here the sed command first looks for the lines that have the pattern
``linux'', and then replaces the word ``unix'' with ``centos'' on
those lines.
You can also make the sed command work like the grep command:


\begin{verbatim} 
>grep 'unix' file.txt
>sed -n '/unix/ p' file.txt
\end{verbatim}



Here the sed command looks for the pattern ``unix'' in each line of
the file and prints the lines that have it.

You can also make sed work like grep -v, by reversing the address
with NOT (!).


\begin{verbatim} 
>grep -v 'unix' file.txt
>sed -n '/unix/ !p' file.txt
\end{verbatim}


The sed command can add a new line after a pattern match is found. The
``a'' command tells sed to append a line after each matching line:

\begin{verbatim} >sed '/unix/ a "Add a new line"' file.txt
\end{verbatim}


The ``i'' command tells sed to insert a line before each matching
line:

\begin{verbatim} >sed '/unix/ i "Add a new line"' file.txt
\end{verbatim}


The ``c'' command tells sed to change (replace) each matching line:

\begin{verbatim} >sed '/unix/ c "Change line"' file.txt
\end{verbatim}



The d command in sed is used to delete a line. The syntax for
deleting a line is:

\begin{verbatim} >sed 'Nd' file
\end{verbatim}



Here N indicates the Nth line of the file. In the following example,
the sed command removes the first line of the file:

\begin{verbatim} >sed '1d' file
\end{verbatim}


The following sed command is used to remove the footer line in a
file; the \$ indicates the last line of the file.

\begin{verbatim} >sed '$d' file
\end{verbatim}


The sed command can be used to delete a range of lines. The syntax is
shown below: 

\begin{verbatim} >sed 'm,nd' file
\end{verbatim}



Here m and n are the first and last line numbers of the range to
delete; the sed command removes lines m through n of the file. The
following sed command deletes lines 2 to 4:

\begin{verbatim} >sed '2,4d' file
\end{verbatim}


Use the negation (!) operator with the d command to keep only certain
lines. The following sed command removes all lines except the header
line:

\begin{verbatim} >sed '1!d' file
\end{verbatim}


\begin{verbatim} >sed '2,4!d' file
\end{verbatim}


Here the sed command removes all lines other than the 2nd, 3rd and 4th.


You can specify a list of addresses to delete, separated by
semicolons:

\begin{verbatim} >sed '1d;$d' file
\end{verbatim}


removes the first and last line.

To delete empty lines or blank lines

\begin{verbatim} >sed '/^\$/d' file
\end{verbatim}



The pattern \verb!^$! matches an empty line, so this command deletes
all empty lines. However, it does not remove lines that contain only spaces.
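Lines that contain only whitespace can be caught as well by matching optional spaces and tabs explicitly, e.g. with the POSIX character class [[:space:]]:

```shell
# the second line is empty, the third holds three spaces; both are removed
printf 'a\n\n   \nb\n' | sed '/^[[:space:]]*$/d'
# prints:
# a
# b
```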

\begin{verbatim} >sed '/^u/d' file
\end{verbatim}


The \verb!^! anchor matches the start of a line; the command above
removes all lines that start with the character 'u'.

\begin{verbatim} >sed '/x$/d' file
\end{verbatim}


The \verb!$! anchor matches the end of a line; the command above
deletes all lines that end with the character 'x'.

Delete lines that consist entirely of capital letters:

\begin{verbatim} 
>sed '/^[A-Z]*$/d' file
\end{verbatim}

Delete lines that contain a pattern:

\begin{verbatim} >sed '/debian/d' file
\end{verbatim}


Delete lines starting from a pattern till the last line:

\begin{verbatim} >sed '/fedora/,$d' file
\end{verbatim}


Delete last line only if it contains a pattern:

\begin{verbatim} >sed '${/ubuntu/d;}' file
\end{verbatim}


Here \$ addresses the last line. To delete the Nth line only if it
contains the pattern, put the line number in place of \$.

Note: in all the examples above, the sed command prints the file's
contents to the terminal with the selected lines removed; it does not
modify the source file itself. To remove the lines from the source
file, use the -i option.

\begin{verbatim} >sed -i '1d' file
\end{verbatim}


If you don't wish to modify the original file, you can instead
redirect the output of the sed command to another file.

\begin{verbatim}
> sed '1d' file > newfile
\end{verbatim}
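The difference is easy to check with a scratch file (GNU sed shown; BSD sed spells in-place editing as -i ''):

```shell
f=$(mktemp)                      # temporary scratch file for the demo
printf 'header\ndata\n' > "$f"
sed '1d' "$f" > /dev/null        # without -i: the file is left untouched
sed -i '1d' "$f"                 # with -i: the first line is removed in place
cat "$f"                         # the file now contains only: data
rm -f "$f"
```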

The following collection of useful sed one-liners is reproduced from
Eric Pement's list (sed.sourceforge.net):

\begin{verbatim}
FILE SPACING:

 # double space a file
 sed G

 # double space a file which already has blank lines in it. Output file
 # should contain no more than one blank line between lines of text.
 sed '/^$/d;G'

 # triple space a file
 sed 'G;G'

 # undo double-spacing (assumes even-numbered lines are always blank)
 sed 'n;d'

 # insert a blank line above every line which matches "regex"
 sed '/regex/{x;p;x;}'

 # insert a blank line below every line which matches "regex"
 sed '/regex/G'

 # insert a blank line above and below every line which matches "regex"
 sed '/regex/{x;p;x;G;}'

NUMBERING:

 # number each line of a file (simple left alignment). Using a tab (see
 # note on '\t' at end of file) instead of space will preserve margins.
 sed = filename | sed 'N;s/\n/\t/'

 # number each line of a file (number on left, right-aligned)
 sed = filename | sed 'N; s/^/     /; s/ *\(.\{6,\}\)\n/\1  /'

 # number each line of file, but only print numbers if line is not blank
 sed '/./=' filename | sed '/./N; s/\n/ /'

 # count lines (emulates "wc -l")
 sed -n '$='

TEXT CONVERSION AND SUBSTITUTION:

 # IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format.
 sed 's/.$//'               # assumes that all lines end with CR/LF
 sed 's/^M$//'              # in bash/tcsh, press Ctrl-V then Ctrl-M
 sed 's/\x0D$//'            # works on ssed, gsed 3.02.80 or higher

 # IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format.
 sed "s/$/`echo -e \\\r`/"            # command line under ksh
 sed 's/$'"/`echo \\\r`/"             # command line under bash
 sed "s/$/`echo \\\r`/"               # command line under zsh
 sed 's/$/\r/'                        # gsed 3.02.80 or higher

 # IN DOS ENVIRONMENT: convert Unix newlines (LF) to DOS format.
 sed "s/$//"                          # method 1
 sed -n p                             # method 2

 # IN DOS ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format.
 # Can only be done with UnxUtils sed, version 4.0.7 or higher. The
 # UnxUtils version can be identified by the custom "--text" switch
 # which appears when you use the "--help" switch. Otherwise, changing
 # DOS newlines to Unix newlines cannot be done with sed in a DOS
 # environment. Use "tr" instead.
 sed "s/\r//" infile >outfile         # UnxUtils sed v4.0.7 or higher
 tr -d \r <infile >outfile            # GNU tr version 1.22 or higher

 # delete leading whitespace (spaces, tabs) from front of each line
 # aligns all text flush left
 sed 's/^[ \t]*//'                    # see note on '\t' at end of file

 # delete trailing whitespace (spaces, tabs) from end of each line
 sed 's/[ \t]*$//'                    # see note on '\t' at end of file

 # delete BOTH leading and trailing whitespace from each line
 sed 's/^[ \t]*//;s/[ \t]*$//'

 # insert 5 blank spaces at beginning of each line (make page offset)
 sed 's/^/     /'

 # align all text flush right on a 79-column width
 sed -e :a -e 's/^.\{1,78\}$/ &/;ta'  # set at 78 plus 1 space

 # center all text in the middle of 79-column width. In method 1,
 # spaces at the beginning of the line are significant, and trailing
 # spaces are appended at the end of the line. In method 2, spaces at
 # the beginning of the line are discarded in centering the line, and
 # no trailing spaces appear at the end of lines.
 sed  -e :a -e 's/^.\{1,77\}$/ & /;ta'                     # method 1
 sed  -e :a -e 's/^.\{1,77\}$/ &/;ta' -e 's/\( *\)\1/\1/'  # method 2

 # substitute (find and replace) "foo" with "bar" on each line
 sed 's/foo/bar/'             # replaces only 1st instance in a line
 sed 's/foo/bar/4'            # replaces only 4th instance in a line
 sed 's/foo/bar/g'            # replaces ALL instances in a line
 sed 's/\(.*\)foo\(.*foo\)/\1bar\2/' # replace the next-to-last case
 sed 's/\(.*\)foo/\1bar/'            # replace only the last case

 # substitute "foo" with "bar" ONLY for lines which contain "baz"
 sed '/baz/s/foo/bar/g'

 # substitute "foo" with "bar" EXCEPT for lines which contain "baz"
 sed '/baz/!s/foo/bar/g'

 # change "scarlet" or "ruby" or "puce" to "red"
 sed 's/scarlet/red/g;s/ruby/red/g;s/puce/red/g'   # most seds
 gsed 's/scarlet\|ruby\|puce/red/g'                # GNU sed only

 # reverse order of lines (emulates "tac")
 # bug/feature in HHsed v1.5 causes blank lines to be deleted
 sed '1!G;h;$!d'               # method 1
 sed -n '1!G;h;$p'             # method 2

 # reverse each character on the line (emulates "rev")
 sed '/\n/!G;s/\(.\)\(.*\n\)/&\2\1/;//D;s/.//'

 # join pairs of lines side-by-side (like "paste")
 sed '$!N;s/\n/ /'

 # if a line ends with a backslash, append the next line to it
 sed -e :a -e '/\\$/N; s/\\\n//; ta'

 # if a line begins with an equal sign, append it to the previous line
 # and replace the "=" with a single space
 sed -e :a -e '$!N;s/\n=/ /;ta' -e 'P;D'

 # add commas to numeric strings, changing "1234567" to "1,234,567"
 gsed ':a;s/\B[0-9]\{3\}\>/,&/;ta'                     # GNU sed
 sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta'  # other seds

 # add commas to numbers with decimal points and minus signs (GNU sed)
 gsed -r ':a;s/(^|[^0-9.])([0-9]+)([0-9]{3})/\1\2,\3/g;ta'

 # add a blank line every 5 lines (after lines 5, 10, 15, 20, etc.)
 gsed '0~5G'                  # GNU sed only
 sed 'n;n;n;n;G;'             # other seds

SELECTIVE PRINTING OF CERTAIN LINES:

 # print first 10 lines of file (emulates behavior of "head")
 sed 10q

 # print first line of file (emulates "head -1")
 sed q

 # print the last 10 lines of a file (emulates "tail")
 sed -e :a -e '$q;N;11,$D;ba'

 # print the last 2 lines of a file (emulates "tail -2")
 sed '$!N;$!D'

 # print the last line of a file (emulates "tail -1")
 sed '$!d'                    # method 1
 sed -n '$p'                  # method 2

 # print the next-to-the-last line of a file
 sed -e '$!{h;d;}' -e x              # for 1-line files, print blank line
 sed -e '1{$q;}' -e '$!{h;d;}' -e x  # for 1-line files, print the line
 sed -e '1{$d;}' -e '$!{h;d;}' -e x  # for 1-line files, print nothing

 # print only lines which match regular expression (emulates "grep")
 sed -n '/regexp/p'           # method 1
 sed '/regexp/!d'             # method 2

 # print only lines which do NOT match regexp (emulates "grep -v")
 sed -n '/regexp/!p'          # method 1, corresponds to above
 sed '/regexp/d'              # method 2, simpler syntax

 # print the line immediately before a regexp, but not the line
 # containing the regexp
 sed -n '/regexp/{g;1!p;};h'

 # print the line immediately after a regexp, but not the line
 # containing the regexp
 sed -n '/regexp/{n;p;}'

 # print 1 line of context before and after regexp, with line number
 # indicating where the regexp occurred (similar to "grep -A1 -B1")
 sed -n -e '/regexp/{=;x;1!p;g;$!N;p;D;}' -e h

 # grep for AAA and BBB and CCC (in any order)
 sed '/AAA/!d; /BBB/!d; /CCC/!d'

 # grep for AAA and BBB and CCC (in that order)
 sed '/AAA.*BBB.*CCC/!d'

 # grep for AAA or BBB or CCC (emulates "egrep")
 sed -e '/AAA/b' -e '/BBB/b' -e '/CCC/b' -e d    # most seds
 gsed '/AAA\|BBB\|CCC/!d'                        # GNU sed only

 # print paragraph if it contains AAA (blank lines separate paragraphs)
 # HHsed v1.5 must insert a 'G;' after 'x;' in the next 3 scripts below
 sed -e '/./{H;$!d;}' -e 'x;/AAA/!d;'

 # print paragraph if it contains AAA and BBB and CCC (in any order)
 sed -e '/./{H;$!d;}' -e 'x;/AAA/!d;/BBB/!d;/CCC/!d'

 # print paragraph if it contains AAA or BBB or CCC
 sed -e '/./{H;$!d;}' -e 'x;/AAA/b' -e '/BBB/b' -e '/CCC/b' -e d
 gsed '/./{H;$!d;};x;/AAA\|BBB\|CCC/b;d'         # GNU sed only

 # print only lines of 65 characters or longer
 sed -n '/^.\{65\}/p'

 # print only lines of less than 65 characters
 sed -n '/^.\{65\}/!p'        # method 1, corresponds to above
 sed '/^.\{65\}/d'            # method 2, simpler syntax

 # print section of file from regular expression to end of file
 sed -n '/regexp/,$p'

 # print section of file based on line numbers (lines 8-12, inclusive)
 sed -n '8,12p'               # method 1
 sed '8,12!d'                 # method 2

 # print line number 52
 sed -n '52p'                 # method 1
 sed '52!d'                   # method 2
 sed '52q;d'                  # method 3, efficient on large files

 # beginning at line 3, print every 7th line
 gsed -n '3~7p'               # GNU sed only
 sed -n '3,${p;n;n;n;n;n;n;}' # other seds

 # print section of file between two regular expressions (inclusive)
 sed -n '/Iowa/,/Montana/p'             # case sensitive

SELECTIVE DELETION OF CERTAIN LINES:

 # print all of file EXCEPT section between 2 regular expressions
 sed '/Iowa/,/Montana/d'

 # delete duplicate, consecutive lines from a file (emulates "uniq").
 # First line in a set of duplicate lines is kept, rest are deleted.
 sed '$!N; /^\(.*\)\n\1$/!P; D'

 # delete duplicate, nonconsecutive lines from a file. Beware not to
 # overflow the buffer size of the hold space, or else use GNU sed.
 sed -n 'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P'

 # delete all lines except duplicate lines (emulates "uniq -d").
 sed '$!N; s/^\(.*\)\n\1$/\1/; t; D'

 # delete the first 10 lines of a file
 sed '1,10d'

 # delete the last line of a file
 sed '$d'

 # delete the last 2 lines of a file
 sed 'N;$!P;$!D;$d'

 # delete the last 10 lines of a file
 sed -e :a -e '$d;N;2,10ba' -e 'P;D'   # method 1
 sed -n -e :a -e '1,10!{P;N;D;};N;ba'  # method 2

 # delete every 8th line
 gsed '0~8d'                           # GNU sed only
 sed 'n;n;n;n;n;n;n;d;'                # other seds

 # delete lines matching pattern
 sed '/pattern/d'

 # delete ALL blank lines from a file (same as "grep '.' ")
 sed '/^$/d'                           # method 1
 sed '/./!d'                           # method 2

 # delete all CONSECUTIVE blank lines from file except the first; also
 # deletes all blank lines from top and end of file (emulates "cat -s")
 sed '/./,/^$/!d'          # method 1, allows 0 blanks at top, 1 at EOF
 sed '/^$/N;/\n$/D'        # method 2, allows 1 blank at top, 0 at EOF

 # delete all CONSECUTIVE blank lines from file except the first 2:
 sed '/^$/N;/\n$/N;//D'

 # delete all leading blank lines at top of file
 sed '/./,$!d'

 # delete all trailing blank lines at end of file
 sed -e :a -e '/^\n*$/{$d;N;ba' -e '}'  # works on all seds
 sed -e :a -e '/^\n*$/N;/\n$/ba'        # ditto, except for gsed 3.02.*

 # delete the last line of each paragraph
 sed -n '/^$/{p;h;};/./{x;/./p;}'

SPECIAL APPLICATIONS:

 # remove nroff overstrikes (char, backspace) from man pages. The 'echo'
 # command may need an -e switch if you use Unix System V or bash shell.
 sed "s/.`echo \\\b`//g"    # double quotes required for Unix environment
 sed 's/.^H//g'             # in bash/tcsh, press Ctrl-V and then Ctrl-H
 sed 's/.\x08//g'           # hex expression for sed 1.5, GNU sed, ssed

 # get Usenet/e-mail message header
 sed '/^$/q'                # deletes everything after first blank line

 # get Usenet/e-mail message body
 sed '1,/^$/d'              # deletes everything up to first blank line

 # get Subject header, but remove initial "Subject: " portion
 sed '/^Subject: */!d; s///;q'

 # get return address header
 sed '/^Reply-To:/q; /^From:/h; /./d;g;q'

 # parse out the address proper. Pulls out the e-mail address by itself
 # from the 1-line return address header (see preceding script)
 sed 's/ *(.*)//; s/>.*//; s/.*[:<] *//'

 # add a leading angle bracket and space to each line (quote a message)
 sed 's/^/> /'

 # delete leading angle bracket & space from each line (unquote a message)
 sed 's/^> //'

 # remove most HTML tags (accommodates multiple-line tags)
 sed -e :a -e 's/<[^>]*>//g;/</N;//ba'

 # extract multi-part uuencoded binaries, removing extraneous header
 # info, so that only the uuencoded portion remains. Files passed to
 # sed must be passed in the proper order. Version 1 can be entered
 # from the command line; version 2 can be made into an executable
 # Unix shell script. (Modified from a script by Rahul Dhesi.)
 sed '/^end/,/^begin/d' file1 file2 ... fileX | uudecode   # vers. 1
 sed '/^end/,/^begin/d' "$@" | uudecode                    # vers. 2

 # sort paragraphs of file alphabetically. Paragraphs are separated by blank
 # lines. GNU sed uses \v for vertical tab, or any unique char will do.
 sed '/./{H;d;};x;s/\n/={NL}=/g' file | sort | sed '1s/={NL}=//;s/={NL}=/\n/g'
 gsed '/./{H;d};x;y/\n/\v/' file | sort | sed '1s/\v//;y/\v/\n/'

 # zip up each .TXT file individually, deleting the source file and
 # setting the name of each .ZIP file to the basename of the .TXT file
 # (under DOS: the "dir /b" switch returns bare filenames in all caps).
 echo @echo off >zipup.bat
 dir /b *.txt | sed "s/^\(.*\)\.TXT/pkzip -mo \1 \1.TXT/" >>zipup.bat

TYPICAL USE: Sed takes one or more editing commands and applies all of
them, in sequence, to each line of input. After all the commands have
been applied to the first input line, that line is output and a second
input line is taken for processing, and the cycle repeats. The
preceding examples assume that input comes from the standard input
device (i.e., the console; normally this will be piped input). One or
more filenames can be appended to the command line if the input does
not come from stdin. Output is sent to stdout (the screen). Thus:

 cat filename | sed '10q'        # uses piped input
 sed '10q' filename              # same effect, avoids a useless "cat"
 sed '10q' filename > newfile    # redirects output to disk

For additional syntax instructions, including the way to apply editing
commands from a disk file instead of the command line, consult "sed &
awk, 2nd Edition," by Dale Dougherty and Arnold Robbins (O'Reilly,
1997; http://www.ora.com), "UNIX Text Processing," by Dale Dougherty
and Tim O'Reilly (Hayden Books, 1987) or the tutorials by Mike Arst
distributed in U-SEDIT2.ZIP (many sites). To fully exploit the power
of sed, one must understand "regular expressions." For this, see
"Mastering Regular Expressions" by Jeffrey Friedl (O'Reilly, 1997).
The manual ("man") pages on Unix systems may be helpful (try "man
sed", "man regexp", or the subsection on regular expressions in "man
ed"), but man pages are notoriously difficult. They are not written to
teach sed use or regexps to first-time users, but as a reference text
for those already acquainted with these tools.

QUOTING SYNTAX: The preceding examples use single quotes ('...')
instead of double quotes ("...") to enclose editing commands, since
sed is typically used on a Unix platform. Single quotes prevent the
Unix shell from interpreting the dollar sign ($) and backquotes
(`...`), which are expanded by the shell if they are enclosed in
double quotes. Users of the "csh" shell and derivatives will also need
to quote the exclamation mark (!) with the backslash (i.e., \!) to
properly run the examples listed above, even within single quotes.
Versions of sed written for DOS invariably require double quotes
("...") instead of single quotes to enclose editing commands.

USE OF '\t' IN SED SCRIPTS: For clarity in documentation, we have used
the expression '\t' to indicate a tab character (0x09) in the scripts.
However, most versions of sed do not recognize the '\t' abbreviation,
so when typing these scripts from the command line, you should press
the TAB key instead. '\t' is supported as a regular expression
metacharacter in awk, perl, and HHsed, sedmod, and GNU sed v3.02.80.

VERSIONS OF SED: Versions of sed do differ, and some slight syntax
variation is to be expected. In particular, most do not support the
use of labels (:name) or branch instructions (b,t) within editing
commands, except at the end of those commands. We have used the syntax
which will be portable to most users of sed, even though the popular
GNU versions of sed allow a more succinct syntax. When the reader sees
a fairly long command such as this:

   sed -e '/AAA/b' -e '/BBB/b' -e '/CCC/b' -e d

it is heartening to know that GNU sed will let you reduce it to:

   sed '/AAA/b;/BBB/b;/CCC/b;d'      # or even
   sed '/AAA\|BBB\|CCC/b;d'

In addition, remember that while many versions of sed accept a command
like "/one/ s/RE1/RE2/", some do NOT allow "/one/! s/RE1/RE2/", which
contains space before the 's'. Omit the space when typing the command.

OPTIMIZING FOR SPEED: If execution speed needs to be increased (due to
large input files or slow processors or hard disks), substitution will
be executed more quickly if the "find" expression is specified before
giving the "s/.../.../" instruction. Thus:

   sed 's/foo/bar/g' filename         # standard replace command
   sed '/foo/ s/foo/bar/g' filename   # executes more quickly
   sed '/foo/ s//bar/g' filename      # shorthand sed syntax

On line selection or deletion in which you only need to output lines
from the first part of the file, a "quit" command (q) in the script
will drastically reduce processing time for large files. Thus:

   sed -n '45,50p' filename           # print line nos. 45-50 of a file
   sed -n '51q;45,50p' filename       # same, but executes much faster
\end{verbatim}

\section{Text Analysis in Python}
The NLTK (Natural Language Toolkit) package provides ready-made
implementations of the preprocessing steps described in this chapter:
tokenization, stopword lists, stemming, and frequency statistics.
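NLTK supplies a proper tokenizer ({\tt nltk.word\_tokenize}) and a full stopword corpus ({\tt nltk.corpus.stopwords}); as a minimal sketch that avoids the NLTK dependency, the same bag-of-words preprocessing can be done with the standard library alone (the tiny {\tt STOPWORDS} set and the function names below are illustrative, not NLTK's):

```python
import re

# Tiny stand-in stopword list; NLTK ships a complete one per language.
STOPWORDS = {"a", "the", "of", "this", "is", "on", "then"}

def tokenize(text):
    """Split text into lowercase word tokens (NLTK's word_tokenize is smarter)."""
    return re.findall(r"[a-z]+", text.lower())

def content_words(text):
    """Keep (position, token) pairs for content words only.  Positions count
    stopwords too, so phrase queries like 'gone with the wind' stay possible."""
    return [(i, tok) for i, tok in enumerate(tokenize(text))
            if tok not in STOPWORDS]

print(content_words("the cat is on the mat"))
# -> [(1, 'cat'), (5, 'mat')]
```

Note that {\tt the cat is on the mat} and {\tt the mat is on the cat} yield the same bag of content words, differing only in the stored positions.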

