%
% File acl2010.tex
%
% Contact  jshin@csie.ncnu.edu.tw or pkoehn@inf.ed.ac.uk
%%
%% Based on the style files for ACL-IJCNLP-2009, which were, in turn,
%% based on the style files for EACL-2009 and IJCNLP-2008...

%% Based on the style files for EACL 2006 by 
%%e.agirre@ehu.es or Sergi.Balari@uab.es
%% and that of ACL 08 by Joakim Nivre and Noah Smith

\documentclass[11pt]{article}
\usepackage{acl2010}
\usepackage{times}
\usepackage{url}
\usepackage{latexsym}
%\setlength\titlebox{6.5cm}    % You can expand the title box if you
% really have to

\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{multirow}

\title{Co-Training: A Survey of Theory and Practice}

\author{Nam Khanh Tran\\
  Dep. of Computational Linguistics\\
  Saarland University\\
  {\tt khanhtn09@gmail.com} \And
  Ivan Titov\\
  MMCI Cluster of Excellence\\
  Saarland University\\
  {\tt titov@mmci.uni-saarland.de}
  }
\date{}

\begin{document}
\maketitle
\begin{abstract}
	
	Recently there has been significant interest in semi-supervised learning
	algorithms that combine labeled and unlabeled data for text learning tasks. The
	co-training setting applies to datasets whose features admit a natural
	separation into two disjoint sets. Some studies have shown that co-training works
	well if the two views satisfy the sufficiency and independence assumptions.
	Others have shown that co-training remains effective under weaker
	assumptions and even on single-view datasets. In this paper, we review the
	standard two-view co-training, theoretical analyses that weaken its
	assumptions, and analyses of the effectiveness and applicability of
	co-training.
	
\end{abstract}

\vspace*{1cm}

\section{Introduction}
\label{sec:intro}

	Traditional supervised learning builds classifiers from labeled training
	examples. As labeled examples can be difficult and expensive to obtain, many
	semi-supervised approaches, such as generative methods, graph-based methods,
	and co-training, utilize unlabeled examples to improve predictive accuracy.
	
	\vspace*{0.3cm}
	Recently, co-training, a paradigm of semi-supervised learning, has drawn
	considerable attention. The standard two-view co-training
	\cite{Blum:98} assumes that the data can be described by two disjoint sets of
	features, or views. It utilizes an initial small set of
	labeled training data and a large set of unlabeled data from the same
	distribution, and works roughly as follows \cite{Blum:98}. Two classifiers
	are first trained on the initial labeled training set, one on each view.
	Each classifier then classifies the unlabeled data, chooses the
	few unlabeled examples whose labels it predicts most confidently, and adds them
	with their predicted labels to the training set. The classifiers are then
	retrained, and the process repeats until some stopping criterion is met. In
	effect, the two classifiers teach each other with the additional examples
	labeled by the other classifier, so as to improve classification
	accuracy. \cite{Blum:98} showed that co-training is guaranteed to work well if
	the sufficiency assumption (each view is sufficient to predict the class
	perfectly) and the independence assumption (the two views are independent given
	the class) are satisfied.

	\vspace*{0.3cm}	
	However, the two assumptions are remarkably powerful and
	easily violated in practice \cite{Nigam:00,Abney:02,Balcan:04}.
	\cite{Abney:02} showed that conditional independence can be relaxed to
	weak dependence. Later, \cite{Balcan:04} showed theoretically that, given
	appropriately strong PAC-learners on each view, an assumption of
	$\epsilon$-expansion on the underlying data distribution, which is weaker than
	the assumption of sufficient and redundant views, is sufficient. It is
	therefore not surprising that co-training may still work well on two-view
	datasets whose views are not independent. However, most real-world datasets have
	only a single view instead of two. To exploit the advantages of co-training,
	effective single-view co-training algorithms are needed
	\cite{Nigam:00,Jun:11,Chen:11}. \cite{Nigam:00} reported an empirical study on
	splitting single views into two views and found that when an independent and
	redundant feature split exists, co-training outperforms many other algorithms
	that use unlabeled data. Even when no natural feature division exists, if
	there is sufficient redundancy among the features and a reasonable division
	of them can be identified, co-training algorithms may show similar
	advantages over other algorithms. Later, \cite{Jun:11,Chen:11} presented different
	methods to split single-view datasets into two-view subsets in order to
	make co-training work reliably.
	
	\vspace*{0.3cm}
	Co-training and its variants have achieved great success in many applications
	such as named entity classification \cite{Collins:99}, word sense
	disambiguation \cite{Yarowsky:95}, noun phrase identification \cite{Pierce:01}
	and others \cite{Ghani:01,Levin:03,Chan:04}. \cite{Collins:99} present an
	algorithm in the co-training setting that classifies named entities into
	categories, where a word can be classified based on either spelling features or
	contextual features. \cite{Yarowsky:95} shows that a word can be disambiguated
	based on the target word itself or on nearby words, exploiting
	two powerful properties of human language: one sense per collocation and one
	sense per discourse. \cite{Blum:98} suggest that \cite{Yarowsky:95} is a
	special case of co-training, but in fact the Yarowsky algorithm is based on a
	different assumption called precision independence. \cite{Pierce:01} argue
	that co-training does not scale well due to mistakes made by the view
	classifiers, and they illustrate this by applying the co-training setting to
	the task of noun phrase identification.
	
	\vspace*{0.3cm}
	The rest of this paper is organized as follows. The next section
	describes the standard two-view co-training. Section \ref{sec:theory} gives
	analyses of the theoretical basis of co-training, and section \ref{sec:practice}
	presents applications of co-training. Finally, we conclude in
	section \ref{sec:conclusion}.
	
\vspace*{0.45cm}	
	
\section{Standard Co-training}
\label{sec:standard}

	The co-training setting applies when a dataset has a natural division of its
	features. For example, web pages can be described by either the text on the web
	page, or the text on the hyperlinks pointing to the web page. Traditional
	algorithms usually ignore this division and pool all features together. An
	algorithm that uses the co-training setting can learn separate classifiers over
	each of the feature sets, and then combine their predictions to decrease
	classification error.
	
	\vspace*{0.3cm}
	\cite{Blum:98} formalize the co-training model as follows. An instance space $X$
	can be split into two different views, $X=X_1 \times X_2$, where $X_1$ and
	$X_2$ correspond to two different views of an example. That is, each example
	$x$ is given as a pair $(x_1, x_2)$. \cite{Blum:98} proposed two assumptions
	for co-training to work well. The first assumes that the views are
	sufficient, i.e., each view is sufficient to predict the class correctly.
	Specifically, let $D$ be a distribution over $X$, and let $C_1$ and $C_2$ be
	concept classes defined over $X_1$ and $X_2$, respectively. The assumption is
	that all labels on examples with non-zero probability under $D$ are consistent
	with some target functions $f_1 \in C_1$ and $f_2 \in C_2$. That is, for any
	example $x = (x_1, x_2)$ observed with label $l$, we have $f(x) = f_1(x_1) =
	f_2(x_2) = l$. In particular, this means that the distribution $D$ assigns
	probability zero to any example $(x_1, x_2)$ such that $f_1(x_1) \neq
	f_2(x_2)$:
	\[
		Pr_D[(x_1, x_2): f_1(x_1) \neq f_2(x_2)] = 0
	\]
	Such a target function $f=(f_1, f_2) \in C_1 \times C_2$ is said to be
	compatible with $D$. In general, the degree of compatibility of a target function
	$f=(f_1, f_2)$ with a distribution $D$ can be defined as a number $0 \leq p
	\leq 1$ where $p = 1 - Pr_D[(x_1, x_2): f_1(x_1) \neq f_2(x_2)]$. The second
	assumption requires that the two views be conditionally independent given the
	class under the distribution $D$. Specifically, the target functions $f_1, f_2$
	and the distribution $D$ together satisfy the conditional independence
	assumption if, for any fixed $(\tilde{x}_1, \tilde{x}_2) \in X$ of non-zero
	probability:
	\[
		Pr_{(x_1, x_2) \in D} \left[ x_1 = \tilde{x}_1 \vert x_2 = \tilde{x}_2 \right] =
		Pr_{(x_1, x_2) \in D} \left[ x_1 = \tilde{x}_1 \vert f_2(x_2) = f_2(\tilde{x}_2) \right]
	\]
	and similarly,
	\[
		Pr_{(x_1, x_2) \in D} \left[ x_2 = \tilde{x}_2 \vert x_1 = \tilde{x}_1 \right] = 
		Pr_{(x_1, x_2) \in D} \left[ x_2 = \tilde{x}_2 \vert f_1(x_1) = f_1(\tilde{x}_1) \right]
	\]
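As a concrete illustration (ours, not from \cite{Blum:98}), the conditional independence condition can be checked empirically on a small synthetic distribution in which two binary views are noisy copies of the class label, drawn independently given it:

```python
import random

random.seed(0)

# Toy distribution: y is the class; each binary view is a noisy copy of y,
# drawn independently given y, so conditional independence holds by design.
def sample(n=20000):
    data = []
    for _ in range(n):
        y = random.random() < 0.5
        x1 = y if random.random() < 0.8 else not y
        x2 = y if random.random() < 0.7 else not y
        data.append((x1, x2, y))
    return data

data = sample()
pos = [(x1, x2) for x1, x2, y in data if y]          # condition on the class
# Estimate Pr[x1 = 1 | x2 = 1, y = 1] versus Pr[x1 = 1 | y = 1]:
p_x1_given_x2 = sum(x1 for x1, x2 in pos if x2) / sum(x2 for _, x2 in pos)
p_x1 = sum(x1 for x1, _ in pos) / len(pos)
# Under conditional independence the two estimates agree up to sampling noise.
assert abs(p_x1_given_x2 - p_x1) < 0.05
```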
	For example, in the problem of web page classification, it is assumed that the
	words on a page $P$ and the words on the hyperlinks pointing to $P$ are
	conditionally independent given the classification of $P$; intuitively, the page
	itself is written by a different user than the one who made the link. However,
	real-world datasets with a feature division will not completely satisfy the
	strict requirements of compatibility and conditional independence. Therefore,
	\cite{Nigam:00} investigated how sensitive co-training algorithms are to the
	correctness of these assumptions, \cite{Balcan:04} gave a theoretical analysis
	that substantially relaxes the independence assumption to just
	a form of expansion of the underlying distribution, and \cite{Jun:11} attempted
	to verify the sufficiency and independence assumptions and proposed methods
	to split a single view into two views in order to make the standard two-view
	co-training work more reliably on single-view datasets (see section
	\ref{sec:theory} and section \ref{sec:practice} for details).
	\cite{Blum:98} showed that if the two assumptions are satisfied and the target
	class is learnable from random classification noise in the standard PAC model,
	then any initial weak predictor can be boosted to arbitrarily high accuracy
	using only unlabeled examples by co-training. They define a weakly useful
	predictor $h$ of a function $f$ to be a function such that:
	\begin{enumerate}
		\item $Pr_D \left[ h(x) = 1 \right] \geq \epsilon $ and
		\item $Pr_D \left[ f(x) = 1 \vert h(x) = 1 \right] \geq Pr_D\left[f(x) = 1 \right] + \epsilon$
	\end{enumerate}
	for some $\epsilon \geq 1/poly(n)$. For example, observing the word
	``publications'' on a web page would be a weakly useful predictor that the page
	is a professional homepage if (1) ``publications'' appears on a non-negligible
	fraction of pages and (2) the probability that a given page is a professional
	homepage given that ``publications'' appears is non-negligibly higher than the
	probability without that word. They proposed the following theorem: \\ [0.3cm]
%		
	\textbf{Theorem 1}: {\em If $C_2$ is learnable in the PAC model with
	classification noise, and if the conditional independence assumption is
	satisfied, then ($C_1, C_2$) is learnable in the co-training model from
	unlabeled data only, given an initial weakly useful predictor $h(x_1)$.} \\
	[0.3cm] 
	{\em Proof of Theorem 1}: Let $f(x)$ be the target concept and $p =
	Pr_D(f(x) = 1)$ be the probability that a random example from $D$ is positive.
	Let $q = Pr_D(f(x) = 1 \vert h(x_1) = 1)$ and let $c = Pr_D(h(x_1) = 1)$. So,
	\[
		Pr_D \left[ h(x_1) = 1 \vert f(x) = 1 \right]
		= \dfrac{Pr_D[f(x)=1 \vert h(x_1)=1]\, Pr_D[h(x_1)=1]}{Pr_D[f(x)=1]}
		= \dfrac{qc}{p}
	\]
	and
	\[
		Pr_D[h(x_1) = 1 \vert f(x)=0] = \dfrac{(1-q)c}{1-p}
	\]
	By the conditional independence assumption, for a random example $x=(x_1,x_2)$,
	$h(x_1)$ is independent of $x_2$ given $f(x)$. Thus, if $h(x_1)$ is used as a
	noisy label of $x_2$, then this is equivalent to $(\alpha,
	\beta)$-classification noise, where $\alpha=1 - qc/p$ and $\beta =
	(1-q)c/(1-p)$. The sum of the two noise rates:
	\[
		\alpha + \beta = 1 - \dfrac{qc}{p} + \dfrac{(1-q)c}{1-p} = 1 - c \left( \dfrac{q-p}{p(1-p)} \right)
	\]
	Since $h$ is a weakly useful predictor, $c \geq \epsilon$
	and $q-p \geq \epsilon$. Therefore, $\alpha + \beta \leq 1 - \epsilon^2 /
	(p(1-p)) \leq 1 - 4 \epsilon^2$. As a result, $(C_1, C_2)$ is learnable in the
	co-training model with $(\alpha,\beta)$ classification noise.
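This algebra is easy to sanity-check numerically; the following sketch (ours, with arbitrary example values for $p$, $q$ and $c$) confirms that the sum of the noise rates matches the closed form and respects the stated bound:

```python
# Noise rates from the proof: alpha = Pr[h=0 | f=1], beta = Pr[h=1 | f=0].
def noise_rate_sum(p, q, c):
    alpha = 1 - q * c / p
    beta = (1 - q) * c / (1 - p)
    return alpha + beta

p, q, c = 0.3, 0.45, 0.2          # arbitrary values with q > p
closed_form = 1 - c * (q - p) / (p * (1 - p))
assert abs(noise_rate_sum(p, q, c) - closed_form) < 1e-12

# For a weakly useful predictor, c >= eps and q - p >= eps, hence:
eps = min(c, q - p)
assert noise_rate_sum(p, q, c) <= 1 - eps ** 2 / (p * (1 - p)) <= 1 - 4 * eps ** 2
```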

	\vspace*{0.3cm}
	In order to illustrate the effect of co-training, \cite{Blum:98} applied it to
	the problem of classifying web pages as course home pages. For each example web
	page $x$, they considered $x_1$ as the bag of words appearing on the web page
	and $x_2$ to be the bag of words underlined in all links pointing into the web
	page from other pages. Two classifiers were trained separately for $x_1$ and
	for $x_2$, using the naive Bayes algorithm. The co-training algorithm they used
	is described in Table \ref{tab:tab1}.
	
	\begin{table}[h]
	\centering
	\begin{tabular}{p{15.5cm}}
	\hline \\
	Given:
	\begin{itemize}
		\item A set $L$ of labeled training examples
		\item A set $U$ of unlabeled examples
	\end{itemize}
	Create a pool $U'$ of examples by choosing $u$ examples at random from $U$.\\
	Loop for $k$ iterations:
	\begin{itemize}
		\item[] Use $L$ to train a classifier $h_1$ that considers only the $x_1$ portion of $x$
		\item[] Use $L$ to train a classifier $h_2$ that considers only the $x_2$ portion of $x$
		\item[] Allow $h_1$ to label $p$ positive and $n$ negative examples from $U'$
		\item[] Allow $h_2$ to label $p$ positive and $n$ negative examples from $U'$		
		\item[] Add these self-labeled examples to $L$
		\item[] Randomly choose $2p + 2n$ examples from $U$ to replenish $U'$
	\end{itemize}
	\end{tabular}
	\line(1,0){453}
	\caption{The standard co-training algorithm.}
	\label{tab:tab1}
	\end{table}
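The loop in Table \ref{tab:tab1} can be sketched in Python as follows; the \texttt{train} function and its classifier interface are illustrative placeholders, not the implementation used by \cite{Blum:98}:

```python
import random

def co_train(L, U, train, k=30, u=75, p=1, n=3):
    """Sketch of the standard co-training loop (Table 1).

    L     -- list of ((x1, x2), label) labeled examples
    U     -- list of (x1, x2) unlabeled examples
    train -- placeholder: maps (labeled list, view index) to a classifier
             with .predict(x) -> (label, confidence)
    """
    U = list(U)
    random.shuffle(U)
    pool, U = U[:u], U[u:]                       # pool U' of u random examples
    for _ in range(k):
        h1 = train(L, view=0)
        h2 = train(L, view=1)
        for h in (h1, h2):
            # Self-label the p positive and n negative most confident examples.
            scored = sorted(pool, key=lambda x: -h.predict(x)[1])
            picked = ([x for x in scored if h.predict(x)[0] == 1][:p] +
                      [x for x in scored if h.predict(x)[0] == 0][:n])
            for x in picked:
                L.append((x, h.predict(x)[0]))
                pool.remove(x)
        # Replenish U' with 2p + 2n fresh unlabeled examples.
        pool, U = pool + U[:2 * p + 2 * n], U[2 * p + 2 * n:]
    return train(L, view=0), train(L, view=1)
```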
	They compared co-training to supervised training using naive Bayes classifiers
	on a collection of 1051 web pages from computer science department web sites,
	with 12 labeled training examples in $L$. The results show that the co-training
	algorithm significantly decreases the classification error, as shown in Table
	\ref{tab:tab2}.
	\begin{table}[h]
	\centering
	\begin{tabular}{ | l | c | c | c | }
		\hline
		& Page-based classifier & Hyperlink-based classifier & Combined classifier \\
		\hline
		Supervised training & 12.9 & 12.4 & 11.1 \\
		\hline
		Co-training & 6.2 & 11.6 & 5.0 \\
		\hline
	\end{tabular}
	\caption{Error rate in percent for classifying web pages as course home pages.}
	\label{tab:tab2}
	\end{table}
		
	The Blum and Mitchell paper has been very influential and has attracted many
	researchers to analyze the underlying theory and assumptions as well as the
	effectiveness and applicability of co-training. However, \cite{Abney:02}
	pointed out that the co-training algorithm does not directly seek
	classifiers that agree on unlabeled data, and that the Yarowsky algorithm is
	actually based on a different independence assumption rather than being a
	special case of co-training as \cite{Blum:98} suggested. Co-training and its
	variants have been applied to many applications across computer science and
	beyond \cite{Yarowsky:95,Collins:99,Nigam:00,Pierce:01,Ghani:01,Levin:03}.
	These approaches are described in more detail in the next two sections.
	
\section{Theoretical Analyses}
\label{sec:theory}
	
	% Abney 2002: Bootstrapping
	In recent work, \cite{Abney:02} studies the co-training assumptions and shows
	that the independence assumption is remarkably powerful, and easily violated in
	the data. He therefore proposes a weaker assumption under which co-training
	succeeds, and also presents a new co-training algorithm that is theoretically
	justified and has good empirical performance. Let $H_1$ be the set of rules
	that are functions of $X_1$ only, and $H_2$ be the set of rules that are
	functions of $X_2$ only. The conditional dependence of $F \in H_1$ and
	$G \in H_2$ given $Y$ = $y$ is defined as $d_y = \frac{1}{2}\displaystyle
	\sum_{u,v} \vert Pr[G=v \vert Y=y, F=u]-Pr[G=v \vert Y=y] \vert$; $d_y = 0$ if
	$F$ and $G$ are conditionally independent. The weaker notion of rule dependence
	is defined as follows:
	\\[0.3cm] 
	\textbf{Definition:} {\em Rules $F$ and $G$ satisfy weak rule dependence just in
	case, for $y \in \{+,-\}$}:
	\[
		d_y \leq p_2 \dfrac{q_1 - p_1}{2 p_1 q_1}
	\]
	where $p_1 = $min$_u Pr[F=u \vert Y=y]$, $p_2 =$min$_u Pr[G=u \vert Y=y]$, and
	$q_1 = 1 - p_1$. By definition, $p_1$ and $p_2$ cannot exceed 0.5; if $p_1 =
	0.5$, then weak rule dependence reduces to independence ($d_y=0$). However,
	when $p_1$ decreases, the permissible amount of conditional dependence
	increases.
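These quantities can be computed directly from an empirical joint distribution; the sketch below (our illustration, not Abney's code) evaluates $d_y$ and the weak-rule-dependence bound for two binary rules:

```python
def weak_rule_dependence(joint):
    """joint[(u, v)] = Pr[F=u, G=v | Y=y] for binary rules F and G.
    Returns (d_y, bound); weak rule dependence holds iff d_y <= bound."""
    vals = (0, 1)
    pF = {u: sum(joint[(u, v)] for v in vals) for u in vals}
    pG = {v: sum(joint[(u, v)] for u in vals) for v in vals}
    # d_y = 1/2 * sum_{u,v} |Pr[G=v | Y=y, F=u] - Pr[G=v | Y=y]|
    d_y = 0.5 * sum(abs(joint[(u, v)] / pF[u] - pG[v])
                    for u in vals for v in vals)
    p1, p2 = min(pF.values()), min(pG.values())
    q1 = 1 - p1
    bound = p2 * (q1 - p1) / (2 * p1 * q1)
    return d_y, bound

# Conditionally independent rules: d_y = 0, so the condition holds trivially.
joint = {(u, v): [0.7, 0.3][u] * [0.6, 0.4][v] for u in (0, 1) for v in (0, 1)}
d_y, bound = weak_rule_dependence(joint)
assert d_y < 1e-12 <= bound
```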
	
	\vspace*{0.3cm}
	\cite{Abney:02} proposes a theorem that assumes non-abstaining binary rules:
	for all $F \in H_1, G \in H_2$ that satisfy weak rule dependence and are
	nontrivial predictors in the sense that min$_{u}$ Pr[$F=u$] $>$ Pr[$F \neq G$],
	one of the following inequalities holds:
	\[
		Pr[F \neq Y] \leq Pr[F \neq G] 
	\]
	\[
		Pr[\bar{F} \neq Y] \leq Pr[F \neq G]
	\]
	The detailed proof of this theorem is beyond the scope of this paper. In short,
	\cite{Abney:02} show that disagreement upper bounds the minority probability
	just in case weak rule dependence is satisfied and the theorem is proved.
	
	\vspace*{0.3cm}
	\cite{Abney:02} also presents a new co-training algorithm, called the greedy
	agreement algorithm, shown in Table \ref{tab:greedy}.
	\begin{table}[h]
		\centering
		\begin{tabular}{l}
			\hline \\				
			Input: Seed rules $F$, $G$ \\
			Loop: \\
			{~~~~~~~~~~~~~} for each atomic rule $H$:\\
			{~~~~~~~~~~~~~} {~~~~~~~~~~~~} $G'$ := $G + H$ \\
			{~~~~~~~~~~~~~} {~~~~~~~~~~~~} evaluate cost of ($F$, $G'$) \\
			{~~~~~~~~~~~~~} {~~~~~~~~~~~~} keep lowest-cost $G'$ \\
			{~~~~~~~~~~~~~} if $G'$ is worse than $G$, quit \\
			{~~~~~~~~~~~~~} swap $F$, $G'$ \\ \\
			\hline
		\end{tabular}
		{~~~~~}
		\begin{tabular}{c}
			\includegraphics[scale=0.7]{abney.jpg}
		\end{tabular}
		\caption{The greedy agreement algorithm}
		\label{tab:greedy}
	\end{table}
	
	The algorithm begins with two seed rules, one for each view. At each iteration,
	each possible extension to one of the rules is considered and scored. The best
	one is kept, and attention shifts to the other rule. The algorithm is run to
	convergence, that is, until no atomic rule can be found that decreases cost.
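In Python, the greedy agreement loop can be sketched as follows; the rule representation and the cost function (Abney uses a disagreement-based cost) are illustrative placeholders:

```python
def greedy_agreement(F, G, atomic_rules, cost):
    """Sketch of the greedy agreement algorithm.

    F, G         -- seed rule sets, one per view
    atomic_rules -- candidate atomic rules
    cost         -- placeholder scoring a pair of rule sets (lower is better)
    """
    while True:
        # Score every atomic extension of G and keep the lowest-cost one.
        candidates = [G | {h} for h in atomic_rules if h not in G]
        if not candidates:
            return F, G
        G_new = min(candidates, key=lambda Gp: cost(F, Gp))
        if cost(F, G_new) >= cost(F, G):     # no improvement: converged
            return F, G
        F, G = G_new, F                      # keep G', then swap attention
```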
	
	\vspace*{0.3cm}
	% Balcan 2004: Co-training and expansion
	The assumption that the two views are class conditionally independent is very
	strong and as \cite{Nigam:00} show, can easily be violated in practice. In
	previous work, \cite{Abney:02} presents a weak rule dependence, however,
	\cite{Balcan:04} suggests that this requirement can be weakened more
	significantly. Intuitively, for the two classifiers to be able to teach each
	other, they must predict confidently on different subsets of the unlabeled
	data. \cite{Balcan:04} formalize this condition as a notion of
	$\epsilon$-$expandability$. Let $h_1,h_2$ be the two classifiers trained on the
	two views $X_1, X_2$, and let $C_1, C_2$ be the concept classes defined over
	$X_1, X_2$, respectively. For $S \subseteq X$, let $S_i$ denote the event that
	an input $x=(x_1, x_2) \in S$ satisfies $x_i \in C_i$. The probability that an
	instance in $S$ is classified confidently by both classifiers can be expressed
	as $Pr(S_1 \wedge S_2)$, by exactly one of the two classifiers as $Pr(S_1
	\oplus S_2)$, and by neither as $Pr(\bar{S_1} \wedge \bar{S_2})$.\\ [0.3cm]
	\textbf{Definition 1}: {\em D is $\epsilon$-\textbf{expanding} with respect to
	the hypothesis class $H$ if for any $S \subseteq X$ and any two classifiers
	$h_1, h_2 \in H$, the following statement holds}:
	\[
		Pr(S_1 \oplus S_2) \geq \epsilon min[Pr(S_1 \wedge S_2), Pr(\bar{S_1} \wedge
		\bar{S_2})]
	\]
	\textbf{Definition 2}: {\em D is $\epsilon$-right-expanding if for any $S_1
	\subseteq X_1, S_2 \subseteq X_2$, if $Pr(S_1) \leq 1/2$ and $Pr(S_2 \vert S_1)
	\geq 1 - \epsilon$ then $Pr(S_2) \geq (1+\epsilon)Pr(S_1)$.}\\	[0.3cm]
%	
	Intuitively, the condition ensures that with high probability there are data
	instances in the unlabeled set for which exactly one of the two classifiers
	is confident. These instances can then be added to the labeled set to teach the
	classifier which was not so sure about them. \cite{Balcan:04} show that if the
	distribution D is $\epsilon$-expanding, and the two classifiers are never
	"confident but wrong", co-training will succeed. In addition, \cite{Balcan:04}
	pointed out that this notion helps clarify how the assumptions are much less
	restrictive than those considered previously. Specifically, let us first
	consider the conditional independence assumption: for any $S_1 \subseteq X_1,
	S_2 \subseteq X_2$, $Pr(S_2 \vert S_1) = Pr(S_2)$. So, if $Pr(S_2 \vert S_1)
	\geq 1 - \epsilon$, then $Pr(S_2) \geq 1 - \epsilon$ as well. That is, not only
	does $S_1$ expand by a $1 + \epsilon$ factor, but it also expands to nearly all
	of $X_2$. Weak dependence \cite{Abney:02} is a relaxation of conditional
	independence that requires only that for all $S_1 \subseteq X_1, S_2 \subseteq
	X_2$, $Pr(S_2 \vert S_1) \geq \alpha Pr(S_2)$ for some $\alpha$. So, if $Pr(S_2
	\vert S_1) \geq 1 - \epsilon$, then $Pr(\bar{S_2} \vert S_1) \leq \epsilon$,
	which implies by the definition of weak dependence that $Pr(\bar{S_2}) \leq
	\epsilon / \alpha$ and therefore $Pr(S_2) \geq 1 - \epsilon / \alpha$. As a
	result, for sufficiently small $\epsilon$, even if $S_1$ is very small, it still
	expands to nearly all of $X_2$. That is, if an algorithm is PAC-learnable from
	positive data only over $X_2$ and can be trained over the conditional
	distribution given by $S_1$, then by driving down its error on this conditional
	distribution, one can perform co-training in just one iteration.
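Definition 1 can also be checked on a finite unlabeled sample; the sketch below (our illustration) estimates the three probabilities from boolean confidence indicators and returns the largest $\epsilon$ for which the expansion inequality holds:

```python
def expansion_epsilon(conf1, conf2):
    """conf1, conf2 -- parallel booleans: is each view's classifier
    confident on that unlabeled example?  Returns the largest eps with
    Pr(S1 xor S2) >= eps * min(Pr(S1 and S2), Pr(not S1 and not S2))."""
    n = len(conf1)
    both = sum(a and b for a, b in zip(conf1, conf2)) / n
    neither = sum(not a and not b for a, b in zip(conf1, conf2)) / n
    xor = sum(a != b for a, b in zip(conf1, conf2)) / n
    denom = min(both, neither)
    return float("inf") if denom == 0 else xor / denom

# The confident regions overlap only partially, leaving room to teach:
c1 = [True, True, True, False, False, False, False, False]
c2 = [True, False, False, True, True, False, False, False]
assert expansion_epsilon(c1, c2) == 4.0
```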

	\vspace*{0.3cm}
	% Jun du 2011: When does co-training work in real data
	The two previous studies give theoretical support for co-training with
	two views by weakening the two assumptions \cite{Abney:02,Balcan:04}. However,
	given a dataset with two views, how can we judge whether two-view co-training
	will work well? How can we verify whether the assumptions are satisfied? Can
	two-view co-training still work if the dataset has only one view, as in most
	real-world situations? In attempting to answer these questions, \cite{Jun:11}
	describe a simple but novel method to empirically verify the sufficiency and
	independence assumptions, and four increasingly sophisticated methods to split
	single views into two views in order to make the standard two-view co-training
	work more reliably on single-view datasets.
	
	\vspace*{0.3cm}	
	Consider a whole labeled dataset $D$ with two views of features ($X_1, X_2$).
	Let $p$ be the accuracy of a classifier on $D$ using $X_1 \times X_2$;
	sufficiency requires that $p$ be close to 1. Let $p_1, p_2$ denote the
	accuracies of classifiers using only the features in $X_1$ and $X_2$,
	respectively. \cite{Jun:11} proposed that the sufficiency assumption of
	co-training is satisfied if there exists a small positive number $\delta_1$
	(sufficiency parameter) such that $p > 1 - \delta_1$, $p_1 > 1 - \delta_1$ and
	$p_2 > 1 - \delta_1$. Let $p_{x_v^i}$ be the accuracy of $X_{3-v}$ predicting
	$x_v^i$ on the whole dataset $D$, $v \in \{1,2\}$, and let $p'_{x_v^i}$ denote
	the accuracy of predicting the majority value. The independence assumption
	holds if there exists a small positive number $\delta_2$ (independence
	parameter) such that $p_{x_v^i} < p'_{x_v^i} + \delta_2$ for all
	$1 \leq i \leq m$. \cite{Jun:11} verified the two assumptions empirically on
	the \textit{WebKB Course} dataset and obtained $\delta_1=0.12$ and
	$\delta_2=0.15$, which are both rather small. This explains why co-training
	works well on this dataset, as observed in many previous works
	\cite{Blum:98,Nigam:00}.
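The verification test can be summarized in a few lines; the accuracy inputs below are placeholders that would in practice come from cross-validated classifiers, and the example numbers are purely illustrative:

```python
def assumptions_hold(acc, acc1, acc2, cross_acc, majority_acc,
                     delta1=0.12, delta2=0.15):
    """Sketch of the verification test described above.

    acc          -- accuracy using both views X1 x X2
    acc1, acc2   -- accuracies using each view alone
    cross_acc    -- accuracies of one view predicting each feature of the other
    majority_acc -- majority-value baselines for those same features
    """
    sufficiency = min(acc, acc1, acc2) > 1 - delta1
    independence = all(p < p_maj + delta2
                       for p, p_maj in zip(cross_acc, majority_acc))
    return sufficiency, independence

# Illustrative numbers only: both assumptions pass the test here.
assert assumptions_hold(0.95, 0.92, 0.90, [0.60, 0.55], [0.58, 0.50]) == (True, True)
```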
	
	\vspace*{0.3cm}	
	\cite{Jun:11} also proposed four splitting methods, random split, entropy
	split, entropy-start hill climbing, and random-restart hill climbing, to split
	single-view datasets into two-view ones. Random split is the simplest and most
	straightforward method, which simply splits the single view randomly into two
	views; it serves as a baseline for the other methods. In experiments on the
	UCI datasets with the random split method, they show that co-training is more
	likely to work with smaller $\delta_1$ and $\delta_2$, even when the
	sufficiency and independence assumptions might not be satisfied. For the
	entropy split method, \cite{Jun:11} first calculate the entropy of each feature
	in the single view over the whole dataset. Intuitively, the larger the entropy,
	the more predictive of the class the feature is likely to be. They then assign
	the features with the odd-ranked highest entropies to the first view, and those
	with the even-ranked highest entropies to the second view. A variant of the
	entropy method, entropy-start hill climbing, works as follows: starting from
	the entropy split, each feature is switched to the other view one at a time,
	the newly generated splits are evaluated in terms of $\delta_1$ and $\delta_2$,
	and the one that yields the minimum $\delta_1 + \delta_2$ is kept. The whole
	process repeats until the split is not altered from the last iteration. The
	last method, random-restart hill climbing, works almost the same as
	entropy-start hill climbing except that it starts from 29 random splits instead
	of the single deterministic entropy split, and can thus achieve better
	optimization performance.
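The entropy split, for instance, is straightforward to implement; the sketch below (ours) ranks features by entropy and deals them alternately between the two views:

```python
import math
from collections import Counter

def entropy(column):
    # Empirical entropy of one feature column over the dataset.
    counts = Counter(column)
    n = len(column)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def entropy_split(X):
    """X is a list of feature vectors.  Rank feature indices by entropy,
    then deal them alternately: 1st, 3rd, ... highest to view 1 and
    2nd, 4th, ... highest to view 2."""
    cols = list(zip(*X))
    order = sorted(range(len(cols)), key=lambda i: -entropy(cols[i]))
    return order[0::2], order[1::2]

# Feature 0 is constant (entropy 0); features 1 and 2 are maximally mixed.
X = [[0, 1, 1], [0, 0, 1], [0, 1, 0], [0, 0, 0]]
assert entropy_split(X) == ([1, 0], [2])
```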

	% Chen 2011: Automatic Feature Decomposition for Single View Co-training
	\vspace*{0.3cm}		
	In general, the splitting methods in \cite{Jun:11} are rather simple and not
	significantly effective. \cite{Chen:11} presented an algorithm called
	\textit{Pseudo Multi-view Co-training} (PMC), which builds on the learning
	theory that significantly weakened the strong assumptions of co-training
	\cite{Balcan:04}, to divide the features of a single-view dataset into two
	mutually exclusive subsets. A classifier $h_u$ is trained on the single-view
	dataset $D$ by minimizing the log-loss function over $D$ with weight vector
	$\textbf{u}$:
	\[
		l(\textbf{u}, D) = \displaystyle \sum_{(x,y) \in D} log(1 + e^{-\textbf{u}^T x
		y})
	\]
	For co-training, two classifiers are trained jointly such that both suffer low
	loss. That is:
	\[
		h_u, h_v = min_{u,v} {~~~~} max[l(\textbf{u},L), l(\textbf{v}, L)]
	\]
	or 
	\[
		h_u, h_v = min_{u,v} {~~~~} log \left( e^{l(\textbf{u},L)} + e^{l(\textbf{v},
		L)} \right)
	\]
	A crucial aspect of co-training is that the two classifiers are trained on
	different views of the dataset. Thus, for each feature $i$, at least one of the
	two classifiers must have a zero weight in the $i^{th}$ dimension: $\forall i,
	1 \leq i \leq d, u_i v_i = 0$, or equivalently $\displaystyle \sum_{i=1}^{d} u_i^2 v_i^2 = 0$.
	In addition, \cite{Chen:11} follows the intuition behind the
	$\epsilon$-expandability of \cite{Balcan:04} that is
	\[
		Pr(S_1 \oplus S_2) \geq \epsilon min[Pr(S_1 \wedge S_2), Pr(\bar{S_1} \wedge
		\bar{S_2})]
	\]
	to get the final condition:
	\[
		\displaystyle \sum_{x \in U}[c_u(x) \bar{c}_v (x) + \bar{c}_u(x) c_v(x)]
		\geq \epsilon min \left[ \displaystyle \sum_{x \in U} c_u(x) c_v(x) , 
								\displaystyle \sum_{x \in U} \bar{c}_u(x) \bar{c}_v(x) \right]
	\]
	wherein $c_u(x)$ is a binary confidence indicator function and $\bar{c}_u(x) =
	1 - c_u(x)$. \cite{Chen:11} then combine these constraints as an optimization
	problem and refer to as \textit{Pseudo Multi-view Decomposition} (PMD):	
	
	\begin{itemize}
	  \item[] {~~~~~~~~~~}$min_{u,v} log \left(e^{l(u,L)} + e^{l(v,L)}\right)$
	  \item[] {~~~~~~~~~~}subject to:
	  \item[] {~~~~~~~~~~}{~~~~~~~~~} (1) $\displaystyle \sum_{i=1}^{d}
	  u_{i}^{2}v_{i}^{2} = 0$
	  \item[] {~~~~~~~~~~}{~~~~~~~~~} (2) $ \displaystyle \sum_{x \in U}[c_u(x)
	  \bar{c}_v (x) + \bar{c}_u(x) c_v(x)] \geq \epsilon min \left[ \displaystyle \sum_{x \in U}
	  c_u(x) c_v(x) , \displaystyle \sum_{x \in U} \bar{c}_u(x) \bar{c}_v(x) \right] $
	\end{itemize}
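Constraint (1) simply states that the two weight vectors touch disjoint feature sets; a minimal check (our illustration) is:

```python
def disjoint_views(u, v, tol=1e-12):
    """Constraint (1): sum_i u_i^2 v_i^2 = 0, i.e. every feature has zero
    weight in at least one of the two classifiers."""
    return sum((ui * vi) ** 2 for ui, vi in zip(u, v)) <= tol

u = [0.5, 0.0, -1.2, 0.0]       # classifier h_u uses features 0 and 2
v = [0.0, 0.3, 0.0, 0.8]        # classifier h_v uses features 1 and 3
assert disjoint_views(u, v)
assert not disjoint_views(u, [0.1, 0.0, 0.0, 0.0])
```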
	
	\cite{Chen:11} then use this feature decomposition method to apply iterative
	co-training on single-view data. The Pseudo Multi-view Co-training algorithm is
	presented in Table \ref{tab:pmc}. \cite{Chen:11} demonstrated the
	capability of the algorithm by effectively utilizing weakly labeled
	image-search results to improve the classification accuracy on the Caltech 256
	object recognition dataset.
	
	\begin{table}[h]
		\centering
		\begin{tabular}{l}
		\hline \\
			Inputs: labeled examples $L$ and unlabeled examples $U$. \\
			Initialize $u$, $v$ and $l$. \\
			\textbf{repeat} \\
			$~~~~$ Find $u^*$, $v^*$ by optimizing PMD on $L$ and $U$. \\
			$~~~~$ Apply $h_{u^*}$ and $h_{v^*}$ on all elements of $U$. \\
			$~~~~$ Move up to $l$ confident inputs from $U$ to $L$. \\
			\textbf{until} No more predictions are confident \\
			Train final classifier $h$ on $L$ with all features $X$. \\
			Return $h$ \\
		\hline
		\end{tabular}
		\caption{PMC algorithm in pseudo code}
		\label{tab:pmc}
	\end{table}
		
	\vspace*{0.3cm}
	To sum up, this section has reviewed the theoretical basis of
	co-training. \cite{Abney:02} showed that weak dependence can guarantee
	successful co-training. After that, a weaker assumption called
	$\epsilon$-expansion was proved sufficient for iterative co-training to
	succeed \cite{Balcan:04}. \cite{Jun:11,Chen:11} presented methods to split
	single-view datasets into two-view subsets in order to make the standard
	two-view co-training work more reliably on single-view datasets.

\section{Practical Analyses}
\label{sec:practice}

	% Yarowsky algorithm 95
	In contrast to the previous investigations of the theoretical basis of
	co-training, this section concerns the application of weakly supervised
	learning to problems in natural language processing
	\cite{Yarowsky:95,Collins:99,Nigam:00,Pierce:01}.
	
	\vspace*{0.3cm}
	The Yarowsky algorithm \cite{Yarowsky:95} was one of the first bootstrapping
	algorithms to become widely known in computational linguistics. The algorithm
	exploits two powerful properties of human language:
	\begin{itemize}
		\item[1.] One sense per collocation: Nearby words provide strong and
		consistent clues to the sense of a target word, conditional on relative
		distance, order and syntactic relationship.
		\item[2.] One sense per discourse: The sense of a target word is highly
		consistent within any given document. 
	\end{itemize}
	In brief, it consists of two loops. The base learner (``inner loop'') is a
	supervised learning algorithm. Specifically, Yarowsky uses a simple decision
	list learner that considers rules of the form ``if instance $x$ contains feature
	$f$, then predict label $j$'' and selects those rules whose precision on the
	training data is highest. The ``outer loop'' is given a seed set of rules to
	start with. In each iteration, it uses the current set of rules to assign
	labels to unlabeled data. It selects those instances on which the base
	learner's predictions are most confident and constructs a labeled training set
	from them. It then calls the inner loop to construct a new classifier (that is,
	a new set of rules), and the cycle repeats. \cite{Blum:98} suggested that the
	Yarowsky algorithm is a special case of co-training, in which Yarowsky performs
	word sense disambiguation by building a sense classifier using the local
	context of the word and a classifier based on the senses of other occurrences of
	that word in the same document. For example, to disambiguate the sense of the
	word ``bank'', which can be either a river bank or a financial bank, Yarowsky
	builds a classifier using the context (``swim near the \_\_'', ``a bridge over
	the \_\_'') and a classifier based on the word ``bank'' itself. However,
	\cite{Abney:02} showed that the Yarowsky algorithm is actually based on
	precision independence rather than view independence, and that these are
	distinct assumptions; neither implies the other. Therefore, the Yarowsky
	algorithm is not a special case of co-training. Table \ref{tab:word-sense}
	presents the accuracy of the algorithm with different seed training options
	compared to a supervised algorithm. It shows that the algorithm obtains good
	results even though only small seed training sets are needed.
	\begin{table}[h]
		\centering
		\begin{tabular}{|l |l |c |c |c |c|}
		\multicolumn{6}{c}{} \\
		\hline
		\multirow{2}{*}{Word} & \multirow{2}{*}{Senses} & \multirow{2}{*}{Supervised} & 
											\multicolumn{3}{|c|}{Semi-supervised training options} \\ \cline{4-6}
			 &		  & & 2 words & Dict. defn. & Top colls. \\
		\hline
		plant & living/factory & 97.7 & 97.1 & 97.3 & 97.6 \\
		space & volume/outer & 93.9 & 89.1 & 92.3 & 93.5 \\
		tank & vehicle/container & 97.1 & 94.2 & 94.6 & 95.8 \\
		motion & legal/physical & 98.0 & 93.5 & 97.4 & 97.4 \\
		bass & fish/music & 97.8 & 96.6 & 97.2 & 97.7 \\
		poach & steal/boil & 97.1 & 96.6 & 97.2 & 97.7 \\
		duty & tax/obligation & 93.7 & 90.4 & 92.1 & 93.2 \\
		\hline
		\end{tabular}
		\caption{Accuracy (in percent) of word sense disambiguation.}
		\label{tab:word-sense}
	\end{table}
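The inner and outer loops described above can be sketched in Python (a minimal illustration; the data representation, the confidence threshold of 0.95, and all helper names are our own assumptions, not Yarowsky's original implementation):

```python
from collections import defaultdict

def train_decision_list(labeled):
    """Inner loop: learn rules "if the instance contains feature f,
    predict label j", scored by precision on the current labeled data."""
    counts = defaultdict(lambda: defaultdict(int))  # feature -> label -> count
    for features, label in labeled:
        for f in features:
            counts[f][label] += 1
    rules = {}
    for f, by_label in counts.items():
        label, hits = max(by_label.items(), key=lambda kv: kv[1])
        rules[f] = (label, hits / sum(by_label.values()))  # (label, precision)
    return rules

def predict(rules, features):
    """Decision list: apply the single highest-precision matching rule."""
    matching = [rules[f] for f in features if f in rules]
    if not matching:
        return None, 0.0
    return max(matching, key=lambda lp: lp[1])

def yarowsky(seed_rules, unlabeled, threshold=0.95, rounds=10):
    """Outer loop: label the pool with the current rules, keep only the
    confident predictions, retrain the decision list, and repeat."""
    rules = dict(seed_rules)
    for _ in range(rounds):
        confident = []
        for features in unlabeled:
            label, prec = predict(rules, features)
            if label is not None and prec >= threshold:
                confident.append((features, label))
        if not confident:
            break
        rules = train_decision_list(confident)
    return rules
```

Starting from a few seed rules, features that co-occur with a seed in one round become rules themselves, letting the next round label further instances; this mirrors the bootstrapping behavior described above.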
	
	\vspace*{0.3cm}
	% Collin 1999: 
	\cite{Collins:99} applied the idea of co-training to named entity
	classification, boosting classifiers that use either the spelling of a named
	entity or the context in which that entity occurs. Specifically, an example
	can be viewed through two sets of features: intrinsic features, which are
	the words making up the name, and contextual features, which describe the
	syntactic context in which the name occurs. For example, consider {\em
	...says Mr. Cooper, a vice president of ...}: ``Mr.'' is an intrinsic
	feature that can be used to build one classifier, and ``president'' is a
	contextual feature that is useful for building another. Each classifier can
	classify named entities individually, and the two help each other to perform
	more precisely. \cite{Collins:99} present two algorithms. The first, called
	DL-CoTrain, uses an algorithm similar to that of \cite{Yarowsky:95} with
	modifications motivated by \cite{Blum:98}. The second, CoBoost, extends
	ideas from boosting algorithms, designed for supervised learning tasks, to
	the framework suggested by \cite{Blum:98}. Table \ref{tab:result of coboost}
	presents the accuracy of these algorithms compared to some well-known
	methods on 88,962 ({\em spelling, context}) training pairs and 1,000 test
	pairs. The results in Table \ref{tab:result of coboost} indicate that the
	algorithms with the co-training setting outperform the other methods on the
	task of named entity classification.
	
	 \begin{table}[h]
		\centering
		\begin{tabular}{l c c}
			\hline
			Learning algorithm & Accuracy (Clean) & Accuracy (Noise) \\
			\hline
			Baseline & 45.8\% & 41.8\% \\
			EM & 83.1\% & 75.8\% \\
			Yarowsky & 81.3\% & 74.1\% \\
			DL-CoTrain & 91.3\% & 83.3\% \\
			CoBoost & 91.1\% & 83.1\% \\
			\hline
		\end{tabular}
			\caption{Accuracy of different learning methods. The baseline method
			tags all entities as the most frequent class type.}
			\label{tab:result of coboost}		
	\end{table}
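The two-view bootstrapping behind DL-CoTrain can be illustrated with a deliberately simplified sketch (the per-feature majority learner and the labeling rule below are stand-ins of our own, not Collins and Singer's actual decision-list learners or selection criteria):

```python
from collections import defaultdict

def train(examples):
    """One-rule-per-feature majority learner: each feature predicts the
    label it co-occurred with most often."""
    counts = defaultdict(lambda: defaultdict(int))
    for feats, label in examples:
        for f in feats:
            counts[f][label] += 1
    return {f: max(by, key=by.get) for f, by in counts.items()}

def classify(rules, feats):
    """Majority vote over matching per-feature rules; None if none fires."""
    votes = defaultdict(int)
    for f in feats:
        if f in rules:
            votes[rules[f]] += 1
    return max(votes, key=votes.get) if votes else None

def co_train(labeled, unlabeled, rounds=5):
    """labeled: ((spelling_feats, context_feats), label) pairs.
    Each round trains a spelling and a context classifier on the current
    pool; examples that either view can label join the pool for the next
    round, so one view's guesses become training data for the other."""
    pool = list(labeled)
    for _ in range(rounds):
        spelling = train([(s, y) for (s, c), y in pool])
        context = train([(c, y) for (s, c), y in pool])
        newly_labeled = []
        for s, c in unlabeled:
            y = classify(spelling, s) or classify(context, c)
            if y is not None:
                newly_labeled.append(((s, c), y))
        pool = list(labeled) + newly_labeled
    return spelling, context
```

In the {\em Mr. Cooper} example above, the spelling view labels the pair via ``Mr.'', which in turn teaches the context view that ``president'' signals a person.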
	
	\vspace*{0.3cm}
	Co-training and its variants have been applied to many other applications
	across computer science and beyond \cite{Ghani:01,Levin:03,Chan:04}.
	Consequently, new questions have arisen about the scalability of the
	co-training paradigm \cite{Pierce:01}. First, can co-training be applied to
	learning problems without natural factorizations into views? \cite{Nigam:00}
	suggest a qualified affirmative answer to this question, for a text
	classification task designed to contain redundant information; however, the
	issue deserves further investigation on large-scale natural language
	processing tasks. Second, when a large number of training examples is
	required to obtain usable performance, how does co-training scale? It is
	plausible to expect that the co-training algorithm will not scale well, due
	to mistakes made by the view classifiers. Specifically, the view classifiers
	may occasionally add incorrectly labeled instances from the unlabeled
	examples to the labeled data. Therefore, if the learning task requires many
	iterations of co-training, degradation in the quality of the labeled data
	becomes a problem and affects the quality of subsequent view classifiers:
	the effectiveness of co-training may be dulled over time. To investigate
	this, \cite{Pierce:01} applied co-training to the task of identifying base
	noun phrases with bracket representations (IOB tags).

	\vspace*{0.3cm}	
	To apply co-training, they specified how to factor the task into views: one
	classifier looks at the focus tag and the tags to its left, while the other
	looks at the focus tag and the tags to its right. They report that
	co-training works well on the base noun phrase classification task, but the
	improvement in accuracy does not continue as co-training progresses; rather,
	performance peaks and then declines somewhat before stabilizing, as shown in
	Figure \ref{fig:iob_1}. They hypothesize that this decline is due to
	degradation in the quality of the labeled data.

	\begin{figure}[h]
		\centering
		\includegraphics[scale=0.7]{iob_1.jpg}
		\caption{Learning curves for co-training. The solid curve indicates the
		accuracy of the left context classifier, while the dotted line shows the
		goal performance of the same classifier trained on a labeled version of
		the complete training data.}
		\label{fig:iob_1}
	\end{figure}
	
	\begin{figure}[h]
		\centering
		\begin{tabular}{l l}
			\includegraphics[scale=0.5]{iob_2.jpg}
			&
			\includegraphics[scale=0.5]{iob_3.jpg}
		\end{tabular}
		\caption{Learning curves for co-training with varying amounts of initial
		labeled data}
		\label{fig:iob_2}
	\end{figure}

	In addition, \cite{Pierce:01} report that co-training for base NP
	identification seems to be quite sensitive to the co-training parameter
	settings. For example, with $L = 200$ initial labeled examples, the
	co-training classifiers appear not to be accurate enough to sustain
	co-training, whereas with $L = 1000$ they are too accurate, in the sense
	that co-training contributes very little accuracy before the labeled data
	deteriorates, as shown in Figure \ref{fig:iob_2}.
		
	\vspace*{0.3cm}
	To address these problems, \cite{Pierce:01} proposed a variant called
	corrected co-training, in which a human annotator intervenes by reviewing
	and correcting the instances labeled by the view classifiers. By arresting
	the deterioration of the labeled data, this is expected to prevent the
	ultimate decline in accuracy of co-training. Figure \ref{fig:iob_3}
	presents the results of corrected co-training, which eliminates degradation
	of the labeled data by correcting labeling errors.
	
	\begin{figure}[h]
		\centering
		\includegraphics[scale=0.7]{iob_4.jpg}
		\caption{Corrected co-training eliminates degradation of the labeled data by
		correcting labeling errors.}
		\label{fig:iob_3}
	\end{figure}
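The correction step can be made explicit in a small sketch, with the co-training round and the annotator abstracted away ({\tt label\_round} and {\tt review} are hypothetical stand-ins of our own, not functions from the original paper):

```python
def corrected_co_train(label_round, review, pool, unlabeled, rounds=3):
    """Corrected co-training sketch: label_round(pool, unlabeled) stands
    in for one round of ordinary co-training and returns machine-labeled
    (example, label) pairs; review simulates the human annotator who
    checks and, if needed, fixes each label before it joins the training
    pool, so labeling errors never accumulate across rounds."""
    for _ in range(rounds):
        machine_labeled = label_round(pool, unlabeled)
        # Human correction step: every machine-assigned label is reviewed.
        corrected = [(x, review(x, y)) for x, y in machine_labeled]
        pool = pool + corrected
        # Remove freshly labeled examples from the unlabeled set.
        done = {x for x, _ in corrected}
        unlabeled = [x for x in unlabeled if x not in done]
    return pool
```

The key difference from plain co-training is that the pool only ever grows with reviewed labels, which is why the labeled-data quality curve in Figure \ref{fig:iob_3} does not degrade.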
	% Nigam 2000: Analyzing effectiveness and applicationality
	% Pierce 2001: Limitations of co-training
	
	\vspace*{0.3cm}
	To sum up, the idea of co-training can be applied to many natural language
	processing tasks, such as word sense disambiguation \cite{Yarowsky:95},
	named entity classification \cite{Collins:99}, and others. These studies
	show that co-training works well on such tasks. Even though the resulting
	classifier does not perform as well as a fully supervised classifier trained
	on hundreds of times as much labeled data, co-training is especially
	attractive when the difference in accuracy matters less than the effort
	required to produce the labeled training data.
	
%\section{Discussions}
%\label{sec:discussion}	
%
%	\begin{itemize}
%		\item How to verify the co-training assumptions? \cite{Jun:11} presented a simple
%		method to empirically verify the sufficiency and independence assumptions?
%		I suggested to do a research on this. 
%	\end{itemize}	
	
\section{Conclusions}
\label{sec:conclusion}

	Co-training is a method for combining unlabeled data and labeled data when
	examples can be partitioned into two views such that each view in itself is
	at least roughly sufficient to achieve good classification, and yet the
	views are not too highly correlated. Some theoretical work shows that the
	required conditions need not be as strong as independence given the label,
	or even a form of weak dependence; the ``right'' condition can be something
	much weaker, namely an expansion property on the underlying distribution.
	Some studies show that co-training significantly outperforms other methods
	not only on two-view datasets but also on single-view datasets when combined
	with feature-splitting methods. Consequently, co-training and its variants
	have been applied to many applications across computer science and beyond.
	However, it can suffer from scalability problems due to mistakes made by the
	view classifiers. Hence, it is suggested to combine weakly supervised
	methods such as co-training or self-training with active learning.

\section*{Acknowledgments}

\begin{thebibliography}{}

\bibitem[\protect\citename{Blum and Mitchell}1998]{Blum:98}
Avrim Blum and Tom Mitchell.
\newblock 1998.
\newblock Combining labeled and unlabeled data with co-training.
\newblock In {\em Proceedings of the 11th Annual Conference on 
Computational Learning Theory}. Madison, WI.

\bibitem[\protect\citename{Levin \bgroup et al.\egroup }2003]{Levin:03}
Anat Levin, Paul Viola and Yoav Freund.
\newblock 2003.
\newblock Unsupervised Improvement of Visual Detectors using Co-Training.
\newblock In {\em Proceedings of the Ninth IEEE International Conference on Computer Vision},
Volume 2.

\bibitem[\protect\citename{Pierce and Cardie}2001]{Pierce:01}
David Pierce and Claire Cardie.
\newblock 2001.
\newblock Limitations of Co-Training for Natural Language Learning
from Large Datasets.
\newblock In {\em Proceedings of the 2001 Conference on Empirical Methods
in Natural Language Processing}.

\bibitem[\protect\citename{Yarowsky}1995]{Yarowsky:95}
David Yarowsky.
\newblock 1995.
\newblock Unsupervised word sense disambiguation rivaling supervised methods.
\newblock In {\em Proceedings of the 33rd Annual Meeting of the
Association for Computational Linguistics}. pages 189--196.

\bibitem[\protect\citename{Ghani}2001]{Ghani:01}
Rayid Ghani.
\newblock 2001.
\newblock Combining labeled and unlabeled data for text classification with a large number of categories.
\newblock In {\em Proceedings of the IEEE International Conference on Data
Mining}. volume 2, 2001.

\bibitem[\protect\citename{Chan \bgroup et al.\egroup }2004]{Chan:04}
Jason Chan,	Irena Koprinska and Josiah Poon.
\newblock 2004.
\newblock Co-training with a Single Natural Feature Set Applied to Email Classification.
\newblock In {\em Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence},
Washington, DC, USA.

\bibitem[\protect\citename{Du \bgroup et al.\egroup }2011]{Jun:11}
Jun Du, Charles X. Ling, and Zhi-Hua Zhou.
\newblock 2011.
\newblock When does co-training work in real data?
\newblock {\em IEEE Transactions on Knowledge and Data Engineering},
Volume 23, Issue 5.

\bibitem[\protect\citename{Nigam and Ghani}2000]{Nigam:00}
Kamal Nigam and Rayid Ghani.
\newblock 2000.
\newblock Analyzing the Effectiveness and Applicability of Co-training.
\newblock In {\em Proceedings of the 9th ACM International Conference in Information
and Knowledge Management}. Washington, DC.

\bibitem[\protect\citename{Nigam \bgroup et al.\egroup }2000]{Nigam:00a}
Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun and 	Tom Mitchell.
\newblock 2000.
\newblock Text Classification from Labeled and Unlabeled Documents using EM.
\newblock {\em Machine Learning - Special issue on information retrieval}.
Volume 39, Issue 2-3.

\bibitem[\protect\citename{Balcan \bgroup et al.\egroup }2004]{Balcan:04}
Maria-Florina Balcan, Avrim Blum, and Ke Yang.
\newblock 2004.
\newblock Co-training and expansion: Towards bridging theory and practice.
\newblock In {\em Proceedings of NIPS}.

\bibitem[\protect\citename{Chen \bgroup et al.\egroup }2011]{Chen:11}
Minmin Chen, Kilian Weinberger and Yixin Chen.
\newblock 2011.
\newblock Automatic Feature Decomposition for Single View Co-training.
\newblock In {\em Proceedings of 28th International Conference on Machine Learning}.
Bellevue, WA, USA.

\bibitem[\protect\citename{Collins and Singer}1999]{Collins:99}
Michael Collins and Yoram Singer.
\newblock 1999.
\newblock Unsupervised models for named entity classification.
\newblock In {\em Empirical Methods in Natural Language Processing}.

\bibitem[\protect\citename{Dasgupta \bgroup et al.\egroup }2002]{Fujita:02}
Sanjoy Dasgupta, Michael L. Littman and David McAllester.
\newblock 2002.
\newblock PAC generalization bounds for co-training.
\newblock {\em Advances in Neural Information Processing Systems 14}. Cambridge, MA: MIT Press.

\bibitem[\protect\citename{Abney}2002]{Abney:02}
Steven Abney.
\newblock 2002.
\newblock Bootstrapping.
\newblock In {\em Proceedings of the 40th Annual Meeting of the
Association for Computational Linguistics}. Philadelphia, PA.

\end{thebibliography}

\end{document}
