\documentclass[conference]{IEEEtran}


\usepackage{cite}







% *** GRAPHICS RELATED PACKAGES ***
%
\ifCLASSINFOpdf
  \usepackage[pdftex]{graphicx}
  % declare the path(s) where your graphic files are
 \graphicspath{{C:/Users/cwilcox/Documents/Fall 11/Data Mining/SVN/final/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  \DeclareGraphicsExtensions{.pdf,.jpeg,.png}
\else
  % or other class option (dvipsone, dvipdf, if not using dvips). graphicx
  % will default to the driver specified in the system graphics.cfg if no
  % driver is specified.
  % \usepackage[dvips]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../eps/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.eps}
\fi





% *** MATH PACKAGES ***
%
\usepackage[cmex10]{amsmath}

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}


\begin{document}
\bibliographystyle{IEEEtran}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{An Online Class-Feature-Centroid Classifier}


% author names and affiliations
% use a multiple column layout for up to three different
% affiliations
\author{\IEEEauthorblockN{Aisha Al Zbeidi, Majid Khonji and Wen Shen}
\IEEEauthorblockA{Computing and Information Science\\
Masdar Institute of Science and Technology\\
PO Box 54224, Abu Dhabi, UAE\\
Email: \{aalzbeidi,mkhonji,wshen\}@masdar.ac.ae}
\and
\IEEEauthorblockN{Catherine Wilcox}
\IEEEauthorblockA{Water and Environmental Engineering\\
Masdar Institute of Science and Technology\\
PO Box 54224, Abu Dhabi, UAE\\
Email: cwilcox@masdar.ac.ae}
}


% make the title area
\maketitle


\begin{abstract}

%\boldmath
A centroid-based classifier for text categorization is improved by combining
the class-feature-centroid (CFC) and expectation-maximization (EM) algorithms.
Although centroid classifiers have the advantage of short training and testing
times, they historically have performed with less accuracy than more robust but
computationally expensive methods such as support vector machines (SVM).
Previous studies indicated that the application of CFC improved the centroid
classifier to a level of accuracy greater than SVM. We show with the
20-newsgroups dataset that the classifier is further
improved by combination with EM, with F1 measures equal to or greater than
literature values for other classifiers. This classifier is especially valuable
when training data is limited or a reduced computation time is required. The online nature of this classifier allows further refinement of the classifier during the testing phase.
\end{abstract}






\IEEEpeerreviewmaketitle

\section{Introduction}
% no \IEEEPARstart

% You must have at least 2 lines in the paragraph with the drop letter
% (should never be an issue)

Text categorization is the task of classifying a set of documents into classes based on their textual content. The importance of this field stems from the ever-increasing amount of information stored in databases as technologies advance. Categorizing such large document collections manually would be infeasible due to the time and cost involved. The text categorization literature offers a variety of methods that differ in methodology, accuracy and speed, along with many proposals for improving the accuracy and speed of these methods that have achieved remarkable gains over earlier approaches.

One classifier that shows great promise through such improvements is the centroid-based classifier. It features high computational efficiency, which has led to its use in a wide variety of web applications. In text classification, however, the support vector machine (SVM) is considered the state-of-the-art classifier due to its high accuracy, although it is far less efficient. Recent work has presented several refined centroid-based classifiers with accuracy comparable to or even higher than SVM, in addition to much greater efficiency. In this project we combine the Expectation-Maximization (EM) method proposed by Cachopo et al. and the Class-Feature-Centroid (CFC) method proposed by Guan et al. to build a novel centroid classifier. We then compare the performance of this refined method with other classifiers, including the traditional centroid-based classifier, centroid-based classifiers using just one of these methods, and SVM. In our experiments, this novel method outperforms the CFC method, the EM method and SVM.

%TODO talk about paper structure
\section{Objective}

The learning and testing times of centroid-based classifiers are much shorter than those of SVM. However, their accuracy is lower than SVM's due to the inductive bias, or model misfit, incurred by the centroid assumption. Many studies have addressed this issue, for instance the Expectation-Maximization method proposed by Cachopo et al.~\cite{cardoso2007semi} and the Class-Feature-Centroid method proposed by Guan et al.~\cite{guan2009class}. The CFC method outperforms SVM when there is a large amount of training data. The EM centroid method performs close to SVM when the training data is insufficient. In this paper, we propose a novel method that combines the EM method with the CFC method to construct the centroids, addressing the issue of low accuracy when the training data is insufficient.

%TODO move it to future work
Our original goal was to build a novel classifier for multi-label text
classification combining the CFC method and the EM method. However, multi-label
text classification is more complicated than single-label text classification,
so due to time constraints we implemented the novel classifier for single-label
text classification. We will extend our method to multi-label text
classification in future work.
\section{Centroid-based classifiers}
Centroid-based classifiers have been widely used in many web applications due to their computational efficiency. In this section,
we explore several centroid-based classifiers.
\subsection{Centroid Construction Methods}
 In centroid-based text classification, documents are first represented using the Vector Space Model (VSM). In this model, each document \textit{d} is considered to be a vector in the term space. Document length is most commonly normalized by computing each term's weight and scaling the vector to have an L2-norm equal to one.

A traditional prototype vector is a delegate vector for each document class, where each feature's weight is some combination of the weights of all documents in the class.

To avoid over-fitting and high computational complexity, many dimension-reduction methods have been proposed for the term vector, such as stop-word removal, stemming, word clustering, and document-frequency thresholding\cite{guan2009class}. In its simplest form, each document is represented by the \textit{term-frequency} (TF) vector $d_{tf}=(tf_{1},tf_{2},\ldots,tf_{n})$, where $tf_{i}$ is the frequency of the \textit{i}th term in the document.
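As a toy illustration (a minimal Python sketch; the vocabulary, documents and helper name are hypothetical, not from the paper), a TF vector can be built by counting term occurrences over a fixed lexicon:

```python
from collections import Counter

def tf_vector(tokens, vocabulary):
    """Term-frequency vector of a tokenized document over a fixed vocabulary."""
    counts = Counter(tokens)
    return [counts[t] for t in vocabulary]

vocab = ["centroid", "svm", "text"]
doc = "text text centroid".split()
print(tf_vector(doc, vocab))  # [1, 0, 2]
```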

Given a class $C_{j}$ of a corpus, there are several classical methods to create $C_{j}$'s prototype vector:

(1) Arithmetical Average Centroid\cite{han2000centroid} (AAC):


 \begin{equation}
 \overrightarrow{Centroid_{j}} = \frac{1}{\mid C_j\mid } \sum_{\substack{\overrightarrow{d}\in C_j }} \overrightarrow{d} 
 \end{equation}
 
 
where the centroid is the arithmetical average of all document vectors of class $C_{j}$ . This is the most commonly used initialization method for centroid-based classifiers.


(2) Cumuli Geometric Centroid \cite{chuang2000fast}(CGC):

\begin{equation}
  \overrightarrow{Centroid_{j}} = \sum_{\substack{\overrightarrow{d}\in C_j }} \overrightarrow{d} 
\end{equation}

where each term's weight is the sum of its weights over all documents in the class.

(3) The Rocchio formula: 
\begin{equation}
  \overrightarrow{Centroid_{j}} = \beta\bullet \frac{1}{\mid \ C_{j} \mid} \bullet \sum_{\substack{\overrightarrow{d}\in C_j }}\overrightarrow{d}-\gamma \bullet \frac{1}{\mid C-C_{j} \mid} \bullet \sum_{\substack{\overrightarrow{d}\notin C_j}}\overrightarrow{d}
\end{equation}

where each centroid, $\overrightarrow{Centroid_{j}}$, is represented by the sum of all the document vectors for the positive training examples for class $C_{j}$, minus the sum of all the vectors for the negative training examples, weighted by the control parameters $\beta$ and $\gamma$, respectively. This method was first proposed by Hull\cite{hull1994improving}.

(4) The normalized sum formula\cite{lertnattee2004effect}:
 \begin{equation}
   \overrightarrow{Centroid_{j}} = \frac{1}{\mid\mid \sum_{\substack{\overrightarrow{d}\in C_j}}\overrightarrow{d} \mid\mid} \bullet  \sum_{\substack{\overrightarrow{d}\in C_j}}\overrightarrow{d}
 \end{equation}
 where each centroid, $\overrightarrow{Centroid_{j}}$, is represented by the sum of all the vectors for the positive training examples for class $C_j$, normalized so that it has unitary length. 
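The four construction formulas above can be sketched in a few lines of Python (a hedged illustration with toy document vectors; the $\beta$ and $\gamma$ values are assumptions for the example, not values from the cited papers):

```python
import numpy as np

# Toy document vectors: rows of `pos` belong to class C_j, rows of `neg` do not
pos = np.array([[1.0, 0.0, 2.0],
                [3.0, 0.0, 0.0]])
neg = np.array([[0.0, 4.0, 0.0]])

aac = pos.mean(axis=0)                     # (1) arithmetical average centroid
cgc = pos.sum(axis=0)                      # (2) cumuli geometric centroid
beta, gamma = 16.0, 4.0                    # (3) Rocchio; assumed parameter values
rocchio = beta * pos.mean(axis=0) - gamma * neg.mean(axis=0)
norm_sum = cgc / np.linalg.norm(cgc)       # (4) normalized sum, unit L2 length
```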
\subsection{Centroid Testing Method}
After the centroids of the different classes are determined, an unlabeled test document is classified by finding the centroid closest to the document vector. The category of this centroid is then assigned to the test document. When the distance between two vectors is measured by their dot product, the testing process is to calculate

\begin{equation}
  C^{'}=\operatorname*{arg\,max}_j (\overrightarrow{d}\bullet \overrightarrow{Centroid_{j}}  )
\end{equation}

The test document $d$ is then labeled with class $C^{'}$.
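Equation (5) amounts to a one-line argmax over dot products (a minimal Python sketch; the names and toy centroids are illustrative):

```python
import numpy as np

def classify(d, centroids):
    """Return the index j maximizing the dot product between d and centroid_j."""
    return int(np.argmax([d @ c for c in centroids]))

centroids = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(classify(np.array([0.2, 0.9]), centroids))  # 1
```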

\subsection{The Problem of Centroid-based Classifiers}
Compared to other text categorization methods, centroid-based approaches suffer more severely from the problem of inductive bias or model misfit: classifiers tuned to the contingent characteristics of the training data rather than the constitutive characteristics of the categories. Centroid-based approaches are more susceptible to model misfit because of their assumption that a document should be assigned to the class with which its similarity is largest. In practice, this assumption often does not hold (i.e., model misfit).

Many researchers have addressed this issue. Cachopo et al.~\cite{cardoso2007semi} proposed combining Expectation-Maximization with a centroid-based method to incorporate information about the unlabeled data during the training phase. Guan et al.~\cite{guan2009class} designed a Class-Feature-Centroid (CFC) classifier motivated by weight-adjustment efforts for centroid-based classifiers. Tan et al.~\cite{tan2007using,tan2005using} proposed the Hypothesis-Margin Based Global Refinement (HMGR) method and the DragPushing method. We introduce these methods in detail, based on their papers, in the following subsections.

\subsection{Method proposed by Elmarhumy et al.}
Elmarhumy et al.~\cite{elmarhumy2009automatic} proposed a modified centroid classifier model. In the proposed model, the training errors most similar to a given class are added to its centroid to update it, while training errors with low similarity to their class are discarded based on a threshold value. The experimental results show that the proposed approach can slightly improve the performance of the centroid classifier.

\subsection{Expectation Maximization}
In order to improve the accuracy of the centroid-based method, Cachopo and Oliveira~\cite{cardoso2007semi} proposed combining Expectation-Maximization (EM) with a centroid-based method to incorporate information about the unlabeled data during the training phase. EM is a class of iterative algorithms for maximum-likelihood estimation of hidden parameters in problems with incomplete data. In their case, they considered the labels of the unlabeled documents to be unknown and used EM to estimate them.

Experiments show that their approach can greatly improve accuracy relative to a simple centroid-based method, in particular when only very small amounts of labeled data are available.

They also show how a centroid-based method can be used to incrementally update the model of the data, based on new evidence from the unlabeled data. Using one synthetic dataset and three real-world datasets, they provided empirical evidence that, if the initial model of the data is sufficiently precise, using unlabeled data improves performance; on the other hand, using unlabeled data degrades performance if the initial model is not precise enough.

\subsection{The Class-Feature-Centroid Method}
Guan et al. argue that one reason for the inferior performance of centroid-based classifiers is that the centroids do not have good initial values. To solve this problem, many methods have been developed that use feedback loops to iteratively adjust the prototype vectors. Motivated by these weight-adjustment efforts, they~\cite{guan2009class} designed a Class-Feature-Centroid (CFC) classifier, which strives to construct centroids with better initial values than traditional centroids. The CFC classifier first extracts inter-class and inner-class term indexes from the corpus; the two indexes are then carefully combined to produce prototype vectors. Unlike the previous adaptive approaches, this method tries to obtain good centroids during the construction phase, such that their classification capability is competitive with those derived from adaptive methods.

The main idea is as follows. For each class $C_{j}$, calculate the class's centroid $\overrightarrow{centroid_{j}}$ using the following equation:
          \begin{equation}
            centroid_{j} = (w_{1j},w_{2j},...,w_{|F|j})
          \end{equation}
          where all documents from the corpus form a lexicon set $F=\{t_{1},t_{2},\ldots,t_{|F|}\}$, and $w_{kj}$ $(1\leq k \leq |F|)$ represents the weight of term $t_{k}$.\\
          The weight of term $t_{i}$ in class $C_{j}$ is calculated as:
          \begin{equation}
            w_{ij} = b^{\frac{DF_{t_{i}}^{j}}{\left|C_{j}\right| }}\times \log(\frac{\left|C\right|}{CF_{t_{i}}}) 
          \end{equation}
          where $DF_{t_{i}}^{j}$ is term $t_{i}$'s document frequency in class $C_{j}$, $\left|C_{j}\right|$ is the number of documents in class $C_{j}$, $\left|C\right|$ is the total number of document classes, $CF_{t_{i}}$ is the number of classes containing term $t_{i}$, and $b$ is a constant larger than one. The first component $b^{\frac{DF_{t_{i}}^{j}}{\left|C_{j}\right| }}$ is the inner-class term index and the second component $\log(\frac{\left|C\right|}{CF_{t_{i}}})$ is the inter-class term index.
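The weight formula translates directly into code (a hedged Python sketch; the function name and the default $b=e$ are illustrative assumptions, since the paper only requires $b>1$):

```python
import math

def cfc_weight(df_ij, class_size, n_classes, cf_i, b=math.e):
    """CFC weight w_ij = b**(DF_i^j / |C_j|) * log(|C| / CF_i)."""
    inner = b ** (df_ij / class_size)   # inner-class index, bounded by b
    inter = math.log(n_classes / cf_i)  # inter-class index, 0 if term is in every class
    return inner * inter
```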

In the testing phase, CFC adopts a denormalized cosine measure instead of a normalized prototype vector, in order to preserve the prototype vectors' discriminative ability and enlarging effect. Experimental results on the skewed Reuters-21578 corpus and the balanced 20-newsgroups corpus demonstrate that the CFC classifier consistently outperforms SVM classifiers. In particular, CFC is more effective and robust than SVM when data is sparse.



\subsection{The Hypothesis-Margin Based Global Refinement Method}
To address the problem of inductive bias or model misfit incurred by the centroid assumption, Tan et al.~\cite{tan2007using} pointed out that some previous refinement methods employed only one criterion, such as training-set error, as the objective function. However, an objective function based on training-set error cannot guarantee the generalization capability of the base classifiers and may lead to overfitting the training data. To solve this problem, they introduced a new refinement strategy named \textquotedblleft Hypothesis-Margin Based Global Refinement" (HMGR), which uses both training-set errors and training-set margins as training criteria to build a global objective function over all training examples.

They conducted extensive experiments on four benchmark document corpora: Reuters-21578, 20 Newsgroups, Industry Sector and OHSUMED. The results show that the proposed technique dramatically improves the classification performance of the centroid classifier. The resulting classifier not only approaches the state-of-the-art SVM in classification performance, but also beats it in running time.
\subsection{The DragPushing Method}

Tan et al.~\cite{tan2005using} proposed a strategy named \textquotedblleft DragPushing" to improve the classification accuracy of centroid classifiers.
Using the training set, the algorithm first calculates the prototype vectors, or centroids, for each of the available document classes. It then iteratively refines these centroids using misclassified examples: it drags the centroid of the correct class towards a misclassified example while at the same time pushing the centroid of the incorrect class away from it.
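A single drag-and-push update can be sketched as follows (a hedged simplification: the learning-rate parameter `eta` and function name are hypothetical, not the cited paper's notation):

```python
import numpy as np

def drag_push(centroids, d, true_j, pred_j, eta=0.1):
    """One update for a misclassified example d: drag the correct class's
    centroid toward d and push the predicted (wrong) class's centroid away."""
    if pred_j != true_j:
        centroids[true_j] = centroids[true_j] + eta * d
        centroids[pred_j] = centroids[pred_j] - eta * d
    return centroids

cents = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
cents = drag_push(cents, d=np.array([0.5, 0.5]), true_j=0, pred_j=1)
```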

Experiments show that its classification accuracy is comparable to that of more complex methods, such as support vector machines (SVM).

\section{The Online Class-Feature-Centroid Classifier}
Recent work shows that the Class-Feature-Centroid method is more effective and robust when the data is sparse\cite{guan2009class}. Previous work also demonstrates that the Expectation-Maximization method can greatly improve accuracy relative to a simple centroid-based method, in particular when only very small amounts of labeled data are available~\cite{cardoso2007semi}. Based on these two methods, we propose a novel centroid-based classifier, the online class-feature-centroid classifier. This classifier combines the CFC centroid-construction method with the Expectation-Maximization (EM) algorithm. In our experiments, it outperforms the CFC method in both the micro-F1 and macro-F1 measures. We present a detailed description of this method in this section.
\subsection{The Centroid Construction}
The online class-feature-centroid classifier (Online-CFC) constructs the centroids by incorporating large amounts of unlabeled data in conjunction with small amounts of labeled data using the EM algorithm. The procedure is similar to the EM method proposed by Cachopo and Oliveira, except that it uses a different method, the class-feature-centroid method, to calculate the classes' centroids.
The full description of the Online-CFC method is as follows:
\begin{itemize}
  \item Inputs: A set of labeled document vectors, $L$, and a set of unlabeled document vectors $U$.\\
  \item Initialization step:
        \begin{enumerate}
          \item For each class $C_{j}$ appearing in $L$, set $D_{C_{j}}$ to the set of documents in $L$ belonging to class $C_{j}$
          \item For each class $C_{j}$, calculate the class's centroid $\overrightarrow{centroid_{j}}$, using the following equation: 
          \begin{equation}
            centroid_{j} = (w_{1j},w_{2j},...,w_{|F|j})
          \end{equation}
          where all documents from the corpus form a lexicon set $F=\{t_{1},t_{2},\ldots,t_{|F|}\}$, and $w_{kj}$ $(1\leq k \leq |F|)$ represents the weight of term $t_{k}$.\\
          The weight of term $t_{i}$ in class $C_{j}$ is calculated as:
          \begin{equation}
            w_{ij} = b^{\frac{DF_{t_{i}}^{j}}{\left|C_{j}\right| }}\times \log(\frac{\left|C\right|}{CF_{t_{i}}}) 
          \end{equation}
          where $DF_{t_{i}}^{j}$ is term $t_{i}$'s document frequency in class $C_{j}$, $\left|C_{j}\right|$ is the number of documents in class $C_{j}$, $\left|C\right|$ is the total number of document classes, $CF_{t_{i}}$ is the number of classes containing term $t_{i}$, and $b$ is a constant larger than one. The first component $b^{\frac{DF_{t_{i}}^{j}}{\left|C_{j}\right| }}$ is the inner-class term index and the second component $\log(\frac{\left|C\right|}{CF_{t_{i}}})$ is the inter-class term index.
        \end{enumerate}
  \item Estimation step:
      \begin{enumerate}
        \item For each class $C_{j}$ appearing in $L$, set $U_{C_{j}}$ to the empty set.
        \item For each document vector $\overrightarrow{d_{i}} \in U$:
          \begin{enumerate}
            \item Let $C_{k}$ be the class to whose centroid $\overrightarrow{d_{i}}$ has the greatest cosine similarity, calculated using the following equation:
            \begin{equation}
              C_{k}=\operatorname*{arg\,max}_{C_{j}} (\overrightarrow{d_{i}}\bullet \overrightarrow{Centroid_{C_{j}}}  )
            \end{equation}
            \item Add $\overrightarrow{d_{i}}$ to the set of document vectors labeled as $C_{k}$, i.e., set $U_{C_{k}}$ 
            to $U_{C_{k}} \bigcup \{ \overrightarrow{d_{i}}\} $ 
          \end{enumerate}
      \end{enumerate}
  \item Maximization step:
     \begin{enumerate}
       \item For each class $C_{j}$, calculate $\overrightarrow{centroid_{j_{new}}}$, using $D_{C_{j}} \bigcup U_{C_{j}}$ as the set of documents labeled as $C_{j}$.
     \end{enumerate}
     \item Iterate:
         \begin{enumerate}
           \item If, for some $j$, $\overrightarrow{centroid_{j}} \neq \overrightarrow{centroid_{j_{new}}}$, then set $\overrightarrow{centroid_{j}}$ to $\overrightarrow{centroid_{j_{new}}}$ and repeat from the \textquotedblleft Estimation step"  forward.
         \end{enumerate}
     \item Outputs: For each class $C_{j}$, the centroid $\overrightarrow{centroid_{j}}$.
\end{itemize}
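The steps above can be sketched end-to-end in Python (a simplified, hedged illustration: document frequency is taken from term presence in term-frequency vectors, terms absent from the corpus get a zero inter-class index, and none of the helper names come from the cited papers):

```python
import math
import numpy as np

def cfc_centroids(docs_by_class, b=math.e):
    """CFC centroids (simplified): w_ij = b**(DF_i^j/|C_j|) * log(|C|/CF_i)."""
    n_classes = len(docs_by_class)
    df = {j: (np.array(D) > 0).sum(axis=0) for j, D in docs_by_class.items()}
    cf = sum((df[j] > 0).astype(int) for j in df)   # classes containing each term
    cents = {}
    for j, D in docs_by_class.items():
        inner = b ** (df[j] / len(D))                           # inner-class index
        inter = np.log(n_classes / np.maximum(cf, 1)) * (cf > 0)  # inter-class index
        cents[j] = inner * inter
    return cents

def online_cfc(labeled, unlabeled, max_iter=20):
    """Online-CFC: initialize from labeled vectors, then iterate EM over
    the unlabeled vectors until the centroids stop changing."""
    cents = cfc_centroids(labeled)                  # initialization step
    for _ in range(max_iter):
        assign = {j: [] for j in labeled}           # estimation step
        for d in unlabeled:
            k = max(cents, key=lambda j: np.dot(d, cents[j]))
            assign[k].append(d)
        new = cfc_centroids({j: labeled[j] + assign[j]   # maximization step
                             for j in labeled})
        if all(np.allclose(new[j], cents[j]) for j in cents):
            break                                   # converged
        cents = new
    return cents
```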
\subsection{Inner-Class Term Index and Inter-Class Term Index}

The inner-class term index has been shown to be helpful for classification~\cite{guan2009class}. If a term appears many times in documents of category $C$, then a test document containing the term is more likely to belong to category $C$. The inner-class term index form we use limits the inner-class feature weight to the range $(1,b]$. The denominator $|C_{j}|$ in the equation smooths the difference in document frequencies across categories.

A good inter-class term should be distributed distinctly differently among classes. If a term appears in only a few categories, then it is a discriminative feature and thus good for classification. If a term appears in every category, then it is not a good inter-class feature. The inter-class term index we use has also been shown to favor rare terms and disfavor popular terms, which produces more discriminative features.

\subsection{Classification}
After the centroids of the different categories are determined, an unlabeled document is classified by finding the centroid closest to the document vector. The category of this centroid is then assigned to the test document. The calculation is shown in equation (5).
 


\section{Evaluation}
The basic idea of the proposed \textquotedblleft Online-CFC" classifier is to take advantage of the EM method's use of unlabeled training data to improve the performance of the original CFC classifier. In this section, we evaluate our Online-CFC classifier on the text classification task by comparing its performance with the CFC method and SVM approaches. Specifically, we compare the performance of four different approaches:
\begin{itemize}
  \item Online-CFC: our online class-feature-centroid classifier
  \item CFC: the class-feature-centroid classifier
  \item NormalizedSum Centroid: the normalized-sum centroid classifier using EM.
  \item SVM: SVM-based tools: SVMLight, SVMTorch and LibSVM.
\end{itemize}
We use the 20-newsgroups dataset in these experiments. Four evaluation measures are used in the paper: precision, recall, micro-F1 and macro-F1. We discuss these settings in this section.
\subsection{The 20-newsgroups dataset}
The 20-newsgroups dataset is a collection of 19,997 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. The collection has become a popular dataset for experiments in text classification. Approximately 4\% of the articles are cross-posted. The stop-word list has 823 words, and we kept words that occurred at least once and texts that contained at least one term.

Altogether, 19,899 texts remain in the corpus. When parsing documents, we keep only the ``Subject'', ``Keywords'', and ``Content'' fields. Other information, such as ``Path'', ``From'', ``Message-ID'', ``Sender'', ``Organization'', ``References'', ``Date'', ``Lines'', and email addresses, is filtered out. Words in ``Subject'' and ``Keywords'' are given ten times the weight of those in ``Content''.
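The parsing rule just described can be sketched as follows (a hypothetical simplification; the helper name is illustrative and real 20-newsgroups parsing and tokenization are messier than this):

```python
KEEP = ("Subject", "Keywords")

def extract_tokens(raw):
    """Keep only Subject/Keywords headers (repeated 10x for weighting)
    plus the message body; drop all other headers."""
    header, _, body = raw.partition("\n\n")   # headers end at first blank line
    tokens = body.lower().split()
    for line in header.splitlines():
        name, _, value = line.partition(":")
        if name in KEEP:
            tokens += value.lower().split() * 10   # ten times the weight
    return tokens
```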

We use the tokenizer tool provided in the Trinity College sample. IDF scores for TF-IDF are extracted from the whole corpus.
\subsection{The evaluation measures}
The following four evaluation measures are employed in the experiments: precision, recall, micro-average-F1 and macro-average-F1.
\subsubsection{Precision}
Precision is the proportion of the documents the system assigns to a class that actually belong to that class\cite{joachims1996probabilistic}.
\begin{equation}
  \text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}
\end{equation}
\subsubsection{Recall}
Recall is the proportion of the actual members of a class that the system correctly assigns to it.
\begin{equation}
  \text{Recall} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}}
\end{equation}
\subsubsection{F1 Measure}
F1 is the harmonic mean of precision and recall:
\begin{equation}
  F_{1} = \frac{2\times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}
\end{equation}
To evaluate the average performance across multiple categories, there are two conventional methods: micro-average-$F_{1}$ and macro-average-$F_{1}$. Micro-average-$F_{1}$ is the global calculation of $F_{1}$ measure regardless of categories. Macro-average-$F_{1}$ is the average on $F_{1}$ scores of all categories. Micro-average gives equal weight to every document, while macro-average gives equal weight to every category, regardless of its frequency. In our experiments, precision, recall, micro-average-$F_{1}$ and macro-average-$F_{1}$ will be used to evaluate the classification performance\cite{ziarko2005investigation}.
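The two averaging conventions can be computed from per-category counts (a minimal Python sketch; the toy counts are illustrative, not the paper's results):

```python
def f1(tp, fp, fn):
    """F1 from true-positive, false-positive and false-negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def micro_macro_f1(per_class):
    """per_class: list of (tp, fp, fn) tuples, one per category."""
    micro = f1(sum(t for t, _, _ in per_class),   # pool counts, then score
               sum(f for _, f, _ in per_class),
               sum(n for _, _, n in per_class))
    macro = sum(f1(*c) for c in per_class) / len(per_class)  # score, then average
    return micro, macro
```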

\subsection{Experiment results and analysis}
In our experiments we divided the dataset into three parts: the labeled training part contains 1/2 of the dataset, the unlabeled training part contains 1/4, and the remaining 1/4 is used for testing. We implemented our Online-CFC classifier and the CFC classifier in Java. We set the parameter $b$ as $e-1.7$.

Table I shows that the Online-CFC classifier produces better overall results in both micro-F1 and macro-F1 than the original CFC. Tables II, III and IV show that the Online-CFC method outperforms CFC in certain classes and performs as well as CFC in the remaining classes in terms of F1, precision and recall, respectively.

\begin{table}[ht]
\begin{center}
\caption{Comparison of the overall performance of Online-CFC and CFC}
\begin{tabular}{ | l | c | c | c| c| }
  \hline
  Classifier & Micro-F1 & Macro-F1 & Precision & Recall \\
  \hline
  Online-CFC & 0.9192 & 0.9189 & 0.9192 & 0.9192\\
  \hline
  CFC & 0.9182 & 0.9179 & 0.9182 & 0.9182\\
  \hline
\end{tabular}
\end{center}
\end{table}


\begin{table}[ht]
\begin{center}
\caption{Comparison of F1 for all classes}
\begin{tabular}{ | l | c | c |}
  \hline
  \# & Online-CFC & CFC \\
  \hline
  1 & 0.9269 & 0.9269 \\
  \hline
  2 & 0.9217 & 0.9217 \\
  \hline
  3 &0.9217  & 0.9217 \\
  \hline
  4 &0.924  &0.924  \\
  \hline
  5 & 0.9151 &0.9151  \\
  \hline
  6 & 0.9604 & 0.9604  \\
  \hline
  7 & 0.9402 & 0.9402 \\
  \hline
  8 &0.9386  &0.9369  \\
  \hline
  9 &0.8320  & 0.8298 \\
  \hline
  10 & 0.9758 & 0.9738 \\
  \hline
  11 & 0.8973 & 0.8951 \\
  \hline
  12 &0.9399  & 0.9380 \\
  \hline
  13 & 0.9760 &  0.9739 \\
  \hline
  14 & 0.9760 & 0.9760 \\
  \hline
  15 & 0.7245 & 0.7245 \\
  \hline
  16 & 0.9695 & 0.9676 \\
  \hline
  17 & 0.8190 & 0.8190 \\
  \hline
  18 & 0.8780 & 0.8739 \\
  \hline
  19 &0.9191  &  0.9169\\
  \hline
  20 & 0.9716 & 0.9716 \\
  \hline
\end{tabular}
\end{center}
\end{table}


\begin{table}[ht]
\begin{center}
\caption{Comparison of Precision for all classes}
\begin{tabular}{ | l | c | c |}
  \hline
  \# & Online-CFC & CFC \\
  \hline
  1 & 0.9527 & 0.9527 \\
  \hline
  2 & 0.972 & 0.972 \\
  \hline
  3 &0.9688  & 0.9688 \\
  \hline
  4 &0.924  &0.924  \\
  \hline
  5 & 0.8923 &0.8923  \\
  \hline
  6 & 0.9418 & 0.9418  \\
  \hline
  7 & 0.9291 & 0.9291 \\
  \hline
  8 &0.8974  &0.8974  \\
  \hline
  9 &0.7956  & 0.7919 \\
  \hline
  10 & 0.9758 & 0.9758 \\
  \hline
  11 & 0.8973 & 0.8987 \\
  \hline
  12 & 0.9028  &0.8987\\
  \hline
  13 & 0.9700&  0.9700 \\
  \hline
  14 & 0.9721 & 0.9681 \\
  \hline
  15 & 0.9722 & 0.9722\\
  \hline
  16 & 0.7702 & 0.7702 \\
  \hline
  17 & 0.9795 & 0.9795 \\
  \hline
  18 & 0.8142 & 0.8142 \\
  \hline
  19 &0.9031  & 0.8884\\
  \hline
  20 & 0.9677 & 0.9677 \\
  \hline
\end{tabular}
\end{center}
\end{table}


\begin{table}[ht]
\caption{Comparison of Recall for all classes}
\begin{center}
\begin{tabular}{ | l | c | c | }
  \hline
  \# & Online-CFC & CFC \\
  \hline
  1 & 0.9024 & 0.9024 \\
  \hline
  2 & 0.972 & 0.972 \\
  \hline
  3 &0.8970  & 0.8970 \\
  \hline
  4 &0.924  &0.924  \\
  \hline
  5 & 0.9392 &0.9392  \\
  \hline
  6 & 0.9798 & 0.9798  \\
  \hline
  7 & 0.9516 & 0.9516 \\
  \hline
  8 &0.9839  &0.98  \\
  \hline
  9 &0.872  & 0.8714 \\
  \hline
  10 & 0.9758 & 0.9718 \\
  \hline
  11 & 0.892 & 0.8915 \\
  \hline
  12 &0.9116  & 0.908 \\
  \hline
  13 & 0.9799 &  0.9798 \\
  \hline
  14 & 0.98 & 0.98 \\
  \hline
  15 & 0.684 & 0.684 \\
  \hline
  16 & 0.9598 & 0.956 \\
  \hline
  17 & 0.824 & 0.824 \\
  \hline
  18 & 0.864 & 0.86 \\
  \hline
  19 &0.9357  &  0.9354\\
  \hline
  20 & 0.9756 & 0.9756 \\
  \hline
\end{tabular}
\end{center}
\end{table}

We planned to compare against other classifiers with competitive performance, but due to time constraints we were not able to implement them. Instead, we collected experimental data from the literature \cite{guan2009class}, which also uses the 20-newsgroups dataset, albeit with somewhat different experimental settings. The results are shown in Table V.

\begin{table}[ht]
\begin{center}
\caption{Comparison of other classifiers}
\begin{tabular}{ | l | c | c | }
  \hline
  Classifier & micro-F1 & macro-F1 \\
  \hline
  SVMLight & 0.8304 & 0.8297 \\
  \hline
  SVMTorch & 0.8482 & 0.8479 \\
  \hline
  LibSVM & 0.8416 & 0.8292 \\
  \hline
  NormalizedSum Centroid(EM) & 0.8307 & 0.8292 \\
  \hline
\end{tabular}
\end{center}
\end{table}


These results give a general view of the effectiveness of the Online-CFC classifier and show promising results relative to commonly used classifiers.
 
\section{Roles of Team Members}
This project was hosted on Google Code. All the planning, document summaries, and task assignments are kept at ``http://code.google.com/p/cis501-group6/''. The responsibilities of the team members are as follows:
\subsubsection{Aisha Al Zbeidi}
Her main contribution was in writing the evaluation section.
\subsubsection{Catherine Wilcox}
She explored centroid-based-classifier improvement methods. As this was her first exposure to Java, her main contribution was the writing of the report.
\subsubsection{Majid Khonji}
Majid participated in the coding of the Online-CFC classifier.
\subsubsection{Wen Shen}
Wen implemented the Online-CFC classifier in Java, based on the open-source tokenizer tool for the 20-newsgroups dataset. He wrote the Online-CFC classifier section of this report and also participated in writing the evaluation section.
\section{Conclusion}
In this work, we explored several centroid-based classifiers for single-label text classification. We also combined the CFC and EM methods to propose a novel centroid-based classifier for single-label text classification. Our experimental results show that the Online-CFC method outperforms the CFC method on all three evaluation measures: precision, recall and F1. Compared with experimental results from the literature, the Online-CFC method also performs better than CFC, SVM and the centroid-based method with EM.

We will extend our method to the multi-label text classification in future work.
\section*{Acknowledgment}


We would like to thank Dr. Wei Lee Woon for his helpful suggestions on our project.

\bibliography{citation}

% that's all folks
\end{document}
