% please textwrap! It helps svn not have conflicts across a multitude of
% lines.
%
% vim:set textwidth=78:

\section{Experiments with Features}
\label{sec:results}

\subsection{Technique Comparisons}
Few of the techniques presented improved accuracy in our models. Below, we
report the best accuracies and models for the techniques that did work.

\paragraph{Feature Selection}
We find that the entire corpus of files, when converted into tokens,
contains \totalwords{} tokens. Each token becomes a feature in the training
and test datasets. We believe that a large number of these words are not
significant predictors or indicators of the true label of the corresponding
examples, so we employ feature selection techniques to reduce the number of
tokens considered by the final classifier.

After pruning the tokens by discarding any token which appears in fewer
than $40$ documents or in more than $1970$ documents, the number of
resultant features is in the range \prunelow{} to \prunehigh{}.

In the next step, we filter stopwords and any tokens which are a single
character in length. The total number of resultant features that we obtain
after this is in the range \filterlow{} to \filterhigh{}.
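As a concrete illustration, the pruning and filtering steps above can be
sketched as follows. This is a minimal sketch: the stopword list shown is a
stand-in, not the actual list our pipeline uses, and the thresholds default
to the ones stated above.

```python
from collections import Counter

def select_features(tokenized_docs, min_df=40, max_df=1970,
                    stopwords=frozenset()):
    """Keep tokens that appear in between min_df and max_df documents,
    are not stopwords, and are longer than one character."""
    df = Counter()
    for doc in tokenized_docs:
        df.update(set(doc))  # count each token at most once per document
    return {
        tok for tok, n in df.items()
        if min_df <= n <= max_df
        and tok not in stopwords
        and len(tok) > 1
    }
```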

Finally, we use the RapidMiner operator Stem (Porter) to stem the tokens.
The total number of resultant features that we obtain after stemming is in
the range \stemlow{} to \stemhigh{}.
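To illustrate why stemming conflates related word forms, here is a crude
suffix-stripping sketch. It is NOT the Porter algorithm that the operator
implements; the suffix list is our own simplification, chosen only to show
the kind of conflation stemming produces.

```python
# Simplified suffix list; the real Porter algorithm uses staged rules.
SUFFIXES = ["ingly", "ing", "edly", "ed", "ly", "s"]

def crude_stem(token):
    """Strip the first matching suffix, keeping at least a 3-letter stem."""
    for suffix in SUFFIXES:
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            return token[: -len(suffix)]
    return token
```

Under this sketch, both ``surprising'' and ``surprisingly'' collapse to the
same stem, which is exactly the loss of context discussed in
Section~\ref{sec:eval}.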

\paragraph{Feature Generation}
We also investigate the effect of feature generation on the accuracy we
achieve for both classifiers. We begin by generating n-grams from the
unigram tokens. We find that trigrams are extremely diverse and thus they
are eliminated by our pruning mechanism discussed above. As such, we only
generate unigrams and bigrams. 
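The n-gram generation can be sketched as follows; joining the tokens of a
bigram with an underscore is our own convention for naming the resulting
features (e.g. ``good\_job'').

```python
def ngrams(tokens, n):
    """All n-grams over the token sequence, joined with '_'."""
    return ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def unigram_bigram_features(tokens):
    """Unigrams plus bigrams, the feature set described above."""
    return ngrams(tokens, 1) + ngrams(tokens, 2)
```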

In the next step, we generate negation features from the tokens by
prepending any token between a negation word and the next punctuation or
whitespace with the keyword ``NOT\_''. Thus, if the review text contains
the words ``doesn't work'', we generate a token ``NOT\_work''. We also
replace all numeric tokens with the keyword ``NUMBER'' to prevent stray
recurring numbers, such as the running time of a movie (usually $90$
minutes), from being assigned a high weight by either classifier.
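A minimal sketch of these two transformations follows; the negation word
list shown is a stand-in, not the actual list our pipeline uses.

```python
import re

# Stand-in negation list for illustration only.
NEGATION_WORDS = {"not", "no", "never", "doesn't", "isn't", "can't"}

def add_negation_and_number_features(tokens):
    """Prefix tokens after a negation word with NOT_ until the next
    punctuation token; replace purely numeric tokens with NUMBER."""
    out, negating = [], False
    for tok in tokens:
        if tok.isdigit():
            tok = "NUMBER"  # collapse stray numbers like 90
        if tok in NEGATION_WORDS:
            negating = True
            out.append(tok)
        elif re.fullmatch(r"[.,!?;:]+", tok):
            negating = False  # punctuation ends the negation scope
            out.append(tok)
        else:
            out.append("NOT_" + tok if negating else tok)
    return out
```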

\paragraph{Weights by Correlation}
We also experiment with generating weights for each feature based on its
correlation with the label attribute, using the RapidMiner operator Weights
by Correlation. We believe that features which have a high correlation with
the label attribute will be good predictors for the label. Therefore, we
filter features by their weights using a tunable threshold, and then train
our classifiers on the resulting subset of features.
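A sketch of this weighting scheme, assuming Pearson correlation between a
feature column and the $0/1$ label (the RapidMiner operator may differ in
details such as normalization):

```python
import math

def correlation_weight(feature_column, labels):
    """Pearson correlation between a feature column and 0/1 labels."""
    n = len(labels)
    mx = sum(feature_column) / n
    my = sum(labels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(feature_column, labels))
    sx = math.sqrt(sum((x - mx) ** 2 for x in feature_column))
    sy = math.sqrt(sum((y - my) ** 2 for y in labels))
    return cov / (sx * sy) if sx and sy else 0.0

def filter_by_correlation(features, labels, threshold):
    """Keep features whose absolute correlation exceeds the threshold."""
    return {
        name: column for name, column in features.items()
        if abs(correlation_weight(column, labels)) > threshold
    }
```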

\section{Results and Evaluation}
\label{sec:eval}

\subsection{Evaluation Methodology}
We evaluate our models on accuracy using 10-fold cross validation on the
learning set of $1,000$ positive and $1,000$ negative examples. The layout
of the positive and negative examples is explained in the Readme.txt
included in the distribution of the data set.
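The fold construction can be sketched as follows. No shuffling or
stratification is shown here; a real setup, such as RapidMiner's validation
operator, would typically include both.

```python
def k_fold_indices(n_examples, k=10):
    """Successive (train, test) index splits for k-fold cross validation."""
    fold = n_examples // k
    idx = list(range(n_examples))
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold]
        train = idx[:i * fold] + idx[(i + 1) * fold:]
        yield train, test

def cross_validated_accuracy(examples, labels, train_fn, predict_fn, k=10):
    """Mean accuracy across folds; train_fn/predict_fn wrap any classifier."""
    accuracies = []
    for train, test in k_fold_indices(len(examples), k):
        model = train_fn([examples[i] for i in train],
                         [labels[i] for i in train])
        correct = sum(predict_fn(model, examples[i]) == labels[i]
                      for i in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / len(accuracies)
```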

\subsection{Accuracy Baselines}
Figures \ref{tab:svm_accuracy} and \ref{tab:bayes_accuracy} show the accuracy 
we achieve for the Linear SVM classifier and the Naive Bayes classifier 
respectively for each experiment. The baseline that we consider is the
accuracy that we achieve without applying any processing operators to the
tokens obtained directly from the documents. The ``Baseline'' column of
Figures \ref{tab:svm_accuracy} and \ref{tab:bayes_accuracy} shows the
baseline accuracy for the Linear SVM classifier and the Naive Bayes
classifier respectively, for preprocessing by Binary Term Occurrences and
by TF-IDF. We achieve baseline accuracies of \svmbinarybase{} with Binary
Term Occurrences and \svmtfidfbase{} with TF-IDF for the Linear SVM
classifier, and of \bayesbinarybase{} with Binary Term Occurrences and
\bayestfidfbase{} with TF-IDF for the Naive Bayes classifier.
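The two preprocessing schemes can be sketched as follows. The TF-IDF
formula shown is one common variant; RapidMiner's exact weighting and
normalization may differ.

```python
import math
from collections import Counter

def binary_occurrences(doc_tokens, vocabulary):
    """0/1 vector: whether each vocabulary token appears in the document."""
    present = set(doc_tokens)
    return [1 if tok in present else 0 for tok in vocabulary]

def tf_idf_vector(doc_tokens, vocabulary, doc_freq, n_docs):
    """Raw term frequency times log inverse document frequency."""
    counts = Counter(doc_tokens)
    return [
        counts[tok] * math.log(n_docs / doc_freq[tok])
        if doc_freq.get(tok) else 0.0
        for tok in vocabulary
    ]
```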

\subsection{Accuracy with Linear SVM Classifier}
We compare the accuracies achieved with different experiments for the
Linear SVM classifier in Figure \ref{tab:svm_accuracy}. We observe that none
of our experiments achieve better accuracy than the baseline, because the
SVM classifier learns the polarity of the features, and thus processing the
terms does not improve the accuracy of the trained model.
We suspect that the drop in accuracy in subsequent experiments may be due 
to the presence of a large number of irrelevant bigrams in the example set.
This behavior is similar to that reported in \cite{paper1}.

We also observe that while feature selection techniques such as pruning,
filtering stopwords, and filtering by length lower the accuracy by small
fractions, stemming reduces the accuracy by a relatively larger fraction.
We suspect that this may be because of verb/adjective confusion due to
stemming. For example, we observe that stemming causes the tokens
``surprising'' and ``surprisingly'' to be stemmed to the same word
``surpris'', which causes some loss of context and introduces a certain
amount of generalization, resulting in a loss of accuracy. Similarly,
tokens such as ``roth's'' and ``rotten'' are stemmed to the same token
``rot'', which also introduces some measure of incorrectness into the
dataset.

Thus, we discard the stemming operator in our subsequent experiments, which
involve assigning weights by correlation to each feature. We observe that
the various feature selection and feature generation methods, combined with
the use of weights, improve the accuracy of the SVM classifier with TF-IDF
preprocessing.

\subsection{Accuracy with Naive Bayes Multinomial Classifier}
We compare the accuracies achieved with different experiments for the
Naive Bayes Multinomial classifier in Figure \ref{tab:bayes_accuracy}. We
observe that we are able to improve upon the baseline and achieve an
accuracy of $84.00\%$ using Binary Term Occurrences. Again, we observe that
stemming lowers the accuracy of the trained model, and so we discard
the stemming operator in the subsequent experiment with weights by
correlation.

We observe that pruning tokens, filtering stopwords, and filtering tokens
by length improve the accuracy of the model over the baseline. We believe
that this is because tokens such as prepositions, which have a high
probability of occurrence in both the positive and the negative class, are
discarded by these filtering steps. This renders the remaining tokens more
useful as predictors of each class. We also observe that attribute weights
further improve the accuracy we obtain.

\ignore{
Using TF-IDF
decreased accuracy all cases. We suspect that TF-IDF decreases accuracy over
using raw term frequencies because it TF-IDF polarizes the term frequencies of
rare words in respect to the entire corpus towards higher probabilities and
common words towards zero. This change in frequencies changes all the rare
words to
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% SVM Accuracy Table
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure*}[!t]
\begin{center}
\begin{tabular}{|p{4cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline
 & Baseline  & Prune & Prune, Stopwords and Filter by Length & Prune,
 Stopwords, Filter by Length and Stemming  & Prune, Stopwords, Filter by Length, Weights \\ \hline
Unigrams and Binary Occurrences & 87.20\% & 86.55\% & 85.60\% &  84.05\% & 84.35\% \\ \hline
Unigrams, Bigrams and Binary Occurrences & - & 86.40\% & 85.40\% & 84.15\% & 84.25\% \\ \hline
Unigrams, Bigrams, Token Replacement and Binary Occurrences & - & 86.50\% & 85.55\% & 84.25\% & 85.55\% \\ \hline
Unigrams and TF-IDF & 68.85\% & 75.05\% & 73.35\% & 72.25\% & 81.70\% \\ \hline
Unigrams, Bigrams and TF-IDF & - & 79.50\% & 78.40\% & 77.50\% & 83.20\% \\ \hline
Unigrams, Bigrams, Token Replacement and TF-IDF & - & 79.60\% & 78.55\% & 77.90\% & 83.65\% \\ \hline
\end{tabular}
\end{center}
\caption{\figtitle{SVM Accuracy Using Preprocessing Combinations.}
The table shows combinations of feature generation and feature selection and
the accuracies resulting from training an SVM in 10-fold cross validation.
Fields containing `-' represent missing data: those runs exceeded the
memory available to us. Unigrams with no pruning produce the best accuracy
of \optflm{}.
}
\label{tab:svm_accuracy}
\end{figure*}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% NB Accuracy Table
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure*}[!t]
\begin{center}
\begin{tabular}{|p{4cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline
 & Baseline  & Prune & Prune, Stopwords and Filter by Length & Prune, Stopwords, Filter by Length and Stemming  & Prune, Stopwords, Filter by Length, Weights \\ \hline
Unigrams and Binary Occurrences & 82.50\% & 83.00\% & 82.95\% &  81.95\% & 82.75\% \\ \hline
Unigrams, Bigrams and Binary Occurrences & - & 83.55\% & 83.35\% & 82.30\% & 83.90\% \\ \hline
Unigrams, Bigrams, Token Replacement and Binary Occurrences & - & 83.75\% & 83.40\% & 82.75\% & 84.00\% \\ \hline
Unigrams and TF-IDF & 81.40\% & 82.10\% & 82.00\% & 79.55\% & 82.45\% \\ \hline
Unigrams, Bigrams and TF-IDF & - & 82.25\% & 82.05\% & 80.20\% & 82.60\% \\ \hline
Unigrams, Bigrams, Token Replacement and TF-IDF & - & 83.40\% & 82.70\% & 80.45\% & 83.80\% \\ \hline
\end{tabular}
\end{center}
\caption{\figtitle{Naive Bayes Accuracy Using Preprocessing Combinations.}
The table shows combinations of feature generation and feature selection and
the accuracies resulting from training a Multinomial Naive Bayes Learner in
10-fold cross validation. Fields containing `-' represent missing data:
those runs exceeded the memory available to us. Unigrams and bigrams with
token replacement, binary occurrences, and weighting by correlation produce
the highest accuracy of \optnb{}.
}
\label{tab:bayes_accuracy}
\end{figure*}
\begin{figure}[!t]
\center
\begin{tabular}{|l|l|} \hline
\multicolumn{2}{|c|}{Words} \\\hline
disappointing & life\\
driver &        ludicrous\\
dull &          memorable\\
embarrassing &  NOT\_work\\
enjoyed &       outstanding\\
entertaining &  perfectly\\
excellent &     plot\\
feel\_good &    poor\\
filmmakers &    refreshing\\
good\_job &     ridiculous\\
great &         stupid\\
inept &         superior\\
it's\_bad &     terrific\\
lame &          wonderfully\\ \hline
\end{tabular}
\caption{\figtitle{Most Influential Words in SVM Classifier Model}
The list shows the $28$ words which were assigned the largest weights by the
Linear SVM classifier.
}
\label{tab:svm_words}
\end{figure}

\begin{figure*}
\begin{center}
\begin{tabular}{|p{3cm}p{3cm}|p{3cm}p{3cm}|} \hline
\multicolumn{2}{|c|}{Positive Class} & \multicolumn{2}{|c|}{Negative Class} \\ \hline
animation & enjoyed & NOT\_funny & action\_sequences \\
appreciate & entertaining & awful & NOT\_good \\
beautifully & epic & bad\_movie & NUMBER\_minute \\
brilliant & excellent & bore & pathetic \\
charming & good\_job & broken & poor \\
chemistry & mature & cheesy & promising \\
compelling & outstanding & clich\'e & ridiculous \\
complex & realistic & disappointing & sorry \\
culture & satisfying & dull & stupid\\
delightful & terrific & embarrassing & tedious \\ 
disturbing & understanding & horrible & worst \\
driver & & ludicrous & \\ \hline
\end{tabular}
\end{center}
\caption{\figtitle{Most Influential Words in Naive Bayes Classifier}
The list shows, for each class, the $23$ words which were assigned the
largest weights by the Naive Bayes classifier.
}

\label{tab:bayes_words}
\end{figure*}

\subsection{Top Words}
Figure \ref{tab:svm_words} shows the $28$ most influential words in the
model we obtain from the Linear SVM classifier. These words are mostly
adjectives and, by human intuition as well, they are good indicators of the
sentiment of the review.

Figure \ref{tab:bayes_words} shows, for the positive and negative classes,
the words which are good predictors of each class. The words which are
influential in predicting the positive class were assigned large weights
for the positive class and small weights for the negative class, and vice
versa for the words in the negative class. Once again, the words we obtain
as good predictors in the Naive Bayes classifier model are mostly
adjectives which are, intuitively, good indicators of the sentiment of the
review.

\subsection{Limitations}
As mentioned in Section \ref{sec:dataset}, we use TF-IDF as one of the data
processing techniques before training our classifiers. However, since the
TF-IDF calculation is done before cross validation, there is a possibility
of leakage of information from the training data to the test data.
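A leak-free alternative would recompute IDF inside each fold, from the
training documents alone, for example as follows (the smoothing constant
here is our own choice):

```python
import math
from collections import Counter

def idf_from_training_folds(train_docs, smooth=1):
    """Compute document frequencies (and hence IDF) from the training
    documents only, so no statistic of the held-out fold leaks into the
    feature representation."""
    df = Counter()
    for doc in train_docs:
        df.update(set(doc))
    n = len(train_docs)
    # Smoothing keeps tokens unseen in training from producing log(0).
    return {tok: math.log((n + smooth) / (cnt + smooth))
            for tok, cnt in df.items()}
```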

We observe that SVM accuracy is highest when we use binary term occurrences
without any feature selection, feature transformation, or feature
generation. We are not able to improve beyond this accuracy of \optflm{}.

Another limitation is that, since we chose to use RapidMiner's
implementation of the multinomial Naive Bayes model, we could not modify
its pseudocounts.

