%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Should we include this section?
%
% From his piazza post:
%   Some reports include a lot of text from the class lecture notes. There is
%   no point in doing this. Instead of repeating from a source, just cite it.
%   Make the citation precise if appropriate, i.e. specify the relevant
%   section and subsection. Only quote text from a source if you actually
%   provide your own comments on what is quoted
%
% This is just repeating information from the readme.txt and not adding
% anything beyond that.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\ignore {
We begin by providing background on the Movie Review Polarity Dataset (MR).
MR contains $1,000$ positive and $1,000$ negative examples of movie review
sentiment. That is, if a movie was given a good rating, the review is marked
as a positive example; otherwise, it is a negative example. For ...
}


\section{Dataset Preprocessing}
\label{sec:dataset}
\label{sec:preprocess}
In this section, we explain the techniques and implementations we used to
preprocess the data for training our classifiers, along with the motivations
behind them. Since the dataset consists of text documents, we refer to the
words in the text as features. We apply several transformations, selections,
and weighting schemes to these features, as explained below. The main
motivation behind these techniques is to improve the usefulness of the
features, reduce the number of features that contribute to building the model,
and thereby avoid overfitting and reduce runtimes.

We experiment with two text conversion techniques, namely {\em term
frequency-inverse document frequency} (TF-IDF) and {\em binary term
occurrence}, as described in \cite{thumbs_up}, to aid our classifiers.

We defer evaluating and presenting the results of all the hypotheses mentioned
below, and their effects on the accuracy of our classifiers, until Section
\ref{sec:results}.

\subsection{Feature Selection}

As part of our feature selection strategy, we tokenize the text using non-word
characters as delimiters. We also eliminate ``stop'' words, for reasons stated
in Section 11.1 of \cite{class_notes}.
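As an illustration, tokenization on non-word delimiters and stop-word
filtering can be sketched as follows (our actual pipeline uses RapidMiner
operators, and the stop-word list below is a small hypothetical sample, not
the list we used):

```python
import re

# Hypothetical stop-word sample for illustration only.
STOP_WORDS = {"the", "a", "an", "is", "of", "and", "to", "in"}

def tokenize(text):
    """Split on runs of non-word characters and drop stop words."""
    tokens = re.split(r"\W+", text.lower())
    return [t for t in tokens if t and t not in STOP_WORDS]

print(tokenize("The plot is full of twists and surprises."))
```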

Furthermore, we experiment with stemming words using the Porter stemming
algorithm. Our hypothesis is that merging morphological variants into a single
stem should improve accuracy, since the idea behind similar words (e.g.,
excitement and exciting) receives one combined weight instead of being spread
over multiple terms.
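The effect of stemming can be sketched with a deliberately simplified
suffix-stripping function; the real Porter algorithm applies a much longer,
ordered rule set with measure conditions, so this is only a sketch of the idea
of mapping variants to a shared stem:

```python
# Illustrative suffix list only; not the Porter rule set.
SUFFIXES = ("ement", "ing", "ed", "s")

def crude_stem(word):
    """Strip the first matching suffix, keeping a stem of at least 3 letters."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

# Both variants collapse to the same stem, pooling their weight.
print(crude_stem("excitement"), crude_stem("exciting"))
```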

We also try filtering out single-letter words. Our hypothesis is that such
words carry little meaning, so eliminating them improves runtimes without
losing accuracy. We also prune features that occur fewer than $40$ times or
more than $1,970$ times across all documents in the corpus. Although rare
words can be strong signifiers for a class, they are also unlikely to appear
in test documents. Conversely, words that occur in the vast majority of
documents probably carry no significance, because the learning methods will
not be able to use them to distinguish between the classes.
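The frequency-based pruning can be sketched as follows (thresholds are
parameters; the toy example uses small bounds in place of our $40$/$1,970$
corpus-level thresholds):

```python
from collections import Counter

def prune_features(doc_tokens, min_count, max_count):
    """Keep features whose total corpus count lies within [min_count, max_count]."""
    counts = Counter(t for doc in doc_tokens for t in doc)
    return {t for t, c in counts.items() if min_count <= c <= max_count}

docs = [["good", "film"], ["good", "plot"], ["good", "good", "film", "bad"]]
# "good" (4x) is too frequent; "plot"/"bad" (1x) are too rare; "film" survives.
print(prune_features(docs, min_count=2, max_count=3))
```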

\subsection{Feature Generation}
As part of new feature generation techniques we combine words into bigrams. 
We suspect that combining into bigrams can improve accuracy because 
words have different meanings depending on their context. For example, 
hyphenated words and oxymorons have different meaning than their individual components 
and just using the bag-of-words representation loses this meaning. 
By combining words into bigrams, we preserve some of this
meaning.
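Bigram generation itself is simple: each token is paired with its successor,
as in this sketch (the underscore join is an arbitrary naming choice):

```python
def bigrams(tokens):
    """Pair each token with its successor to preserve local context."""
    return [f"{a}_{b}" for a, b in zip(tokens, tokens[1:])]

print(bigrams(["not", "very", "good"]))
```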

Additionally, we try combining terms with their negations. In the
bag-of-words representation, the effect of a negative signifier preceding a
word is lost, since the representation does not consider position. By merging
the term with its negative signifier, we preserve the negated meaning of these
words.
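The merging step can be sketched as follows; the set of negative signifiers
below is a hypothetical sample, not the exact list we used:

```python
# Illustrative set of negative signifiers.
NEGATORS = {"not", "no", "never"}

def merge_negations(tokens):
    """Fold each negator into the token that follows it,
    e.g. ['not', 'good'] -> ['not_good']."""
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] in NEGATORS and i + 1 < len(tokens):
            out.append(tokens[i] + "_" + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(merge_negations(["this", "is", "not", "good"]))
```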

Along the same lines, we also try representing all numbers with a single
token. We believe that a single token unifies the representation of numbers
so that their underlying meaning is better represented.
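A sketch of this unification, mapping every purely numeric token to one
placeholder feature (the placeholder name is an arbitrary choice for
illustration):

```python
import re

def unify_numbers(tokens):
    """Replace any purely numeric token with a single placeholder feature."""
    return ["<NUM>" if re.fullmatch(r"\d+(\.\d+)?", t) else t for t in tokens]

print(unify_numbers(["rated", "8", "out", "of", "10"]))
```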

\subsection{Weighting by Correlation}
We also explore methods of improving accuracy by using weight by correlation
operator in RapidMiner. For each fold of cross validation, we learn the weight representing the
correlation between the word and the target label in the training fold. 
We further preprocess the weights to eliminate the ones that weakly predict the feature label
below a threshold. We accomplish this filtering by removing all features with weights with an
absolute value less than $0.2$. We then save the learned weights from our training
 set so that we can use the same learned weights in testing. 
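The underlying computation can be sketched as a Pearson correlation between
each feature column and the labels, followed by the $0.2$ threshold; this is
a from-scratch sketch of the idea, not the RapidMiner implementation:

```python
import math

def correlation_weight(feature_col, labels):
    """Pearson correlation between one feature column and the class labels."""
    n = len(labels)
    mx, my = sum(feature_col) / n, sum(labels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(feature_col, labels))
    sx = math.sqrt(sum((x - mx) ** 2 for x in feature_col))
    sy = math.sqrt(sum((y - my) ** 2 for y in labels))
    return cov / (sx * sy) if sx and sy else 0.0

def select_by_correlation(columns, labels, threshold=0.2):
    """Keep features whose |correlation with the label| meets the threshold."""
    weights = {f: correlation_weight(col, labels) for f, col in columns.items()}
    return {f: w for f, w in weights.items() if abs(w) >= threshold}

# "great" perfectly tracks the label; "the" is constant and uninformative.
columns = {"great": [1, 1, 0, 0], "the": [1, 1, 1, 1]}
labels = [1, 1, 0, 0]
print(select_by_correlation(columns, labels))
```

The selected weights would then be saved and reused, unchanged, on the test fold.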

\subsection{Leakage}

Of all the techniques mentioned above, only weighting by correlation has the
potential to leak information from the training set to the test set. We
therefore take care to learn the weights only on the training fold and to
apply those saved weights, unchanged, to the test fold, rather than
recomputing them on test data.
