\chapter{Learning Methods}

\begin{table}
\centering
\begin{tabular}{l|lllll}
Method & Generative or & Probabilistic or & Exact or  & Local or & Batch or\\
 & Discriminative? & Non-Probabilistic? & Approximate?  & Global? & Online?\\
\hline
MLE           & generative     & prob       & exact  &  local & batch \\
EM             & generative     & prob       &  approx &  local & batch \\
LL/MaxEnt  & discriminative & prob      & exact  & local & batch   \\
CRF            & discriminative & prob      & exact  & global & batch   \\
MIRA          & discriminative & nonprob & exact  & global & online \\
%Structured Perceptron (Collins 2002) - "discriminative" on the 1-0 loss, sort of, in certain conditions - nonprob - exact - local -   online
%Structured Perceptron (Collins and Roark 2004) - "discriminative" on the 1-0 loss, sort of, in certain conditions - nonprob - "exact" - global -  online
\end{tabular}
\caption{Characterization of the learning methods used in this book.}
\end{table}

The learning methods used in this book can be characterized along the following dimensions:
\begin{itemize}
\item {\bf Generative} vs.\ {\bf Discriminative} \\ the former models the joint distribution of \(X\) and \(Y\); the latter models only \(Y\) (typically conditioned on \(X\)).
\item {\bf Probabilistic} vs.\ {\bf Non-Probabilistic} \\ the former assumes the annotated data was generated according to some probability distribution (joint or conditional) and takes the loss to be the negative log-likelihood; the latter makes no such assumption and uses other kinds of loss functions (e.g., margin-based losses).
\item {\bf Exact} vs.\ {\bf Approximate} \\ the former solves the \(L(D,\theta)\) optimization exactly (only possible if \(L\) is well behaved, e.g., convex); the latter climbs as far as it can toward a (possibly local) optimum.
\item {\bf Local} vs.\ {\bf Global} \\ the former scores individual decision steps (as in multiclass classification); the latter scores whole structures.
\item {\bf Batch} vs.\ {\bf Online} \\ batch methods process the entire training set before updating the model; online methods use the result of the prediction at time \(i\) (and its gold label) to do better at time \(i+1\).
\end{itemize}
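To make several of these contrasts concrete, the following sketch (a toy illustration; the data and variable names are invented for this example) contrasts a batch, generative, probabilistic, exact learner (MLE for a categorical label distribution, solved in closed form by counting) with an online, discriminative, non-probabilistic learner (a perceptron-style mistake-driven update):

```python
from collections import Counter

# Hypothetical toy data: (feature, label) pairs, invented for illustration.
data = [("sunny", "play"), ("sunny", "play"), ("rainy", "stay"), ("sunny", "stay")]

# Batch + generative + probabilistic + exact: MLE for a categorical label
# distribution has a closed-form solution -- count and normalize. This
# minimizes the negative log-likelihood of the data exactly.
label_counts = Counter(y for _, y in data)
total = sum(label_counts.values())
mle = {y: c / total for y, c in label_counts.items()}

# Online + discriminative + non-probabilistic: a perceptron-style update
# uses the prediction at time i (and its gold label) to adjust the weights
# before seeing example i+1. No probability distribution is assumed.
weights = {}  # feature -> weight; predict +1 ("play") if score > 0

def score(x):
    return weights.get(x, 0.0)

for x, y in data:
    gold = 1 if y == "play" else -1
    pred = 1 if score(x) > 0 else -1
    if pred != gold:  # mistake-driven: update only on errors
        weights[x] = weights.get(x, 0.0) + gold
```

Note that the MLE pass could inspect the whole dataset in any order with the same result, while the online learner's final weights depend on the order in which examples arrive.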