%==============================================
\chapter{Statistical learning}
\label{CAP-LEARN-GER}
%==============================================
\section{Introduction}
%==============================================

Some relevant concepts of learning theory,
necessary for a general understanding of Support Vector Machines (SVM),
are introduced in this chapter.
%
The discussion of the learning problem will focus on the
supervised learning paradigm and its implications for the
learning and generalization capabilities of a model.
%
A brief introduction to statistical learning theory is given as well,
covering important concepts such as the empirical risk minimization
and structural risk minimization principles.
%
Finally, the Vapnik-Chervonenkis (VC) dimension concept is presented. The VC
dimension can be considered a measure of the classification power of
a family of functions and provides bounds relating the empirical risk
to the actual risk.

%==============================================
\section{Machine learning}
%==============================================

A learning model may be briefly described by the diagram presented 
in Figure \ref{FIG-LEARN-MACHINE}
(see \cite{VAPNIK9801} and \cite{VLADIMIR9801}).
The first component of the model is the data, which can be seen as a set of
vectors $\mathbf{x} \in \Re^m$ sampled from an unknown probability
density function (PDF) $p(\mathbf{x})$. The samples are assumed to be independent and
identically distributed (\emph{iid}).
% ----------------------------------------------------------------
\begin{figure}[htb]
\centering
\psfrag{Data}{Data}
\psfrag{x}{$x$}
\psfrag{Machine}{\shortstack{Learning \\Machine}}
\psfrag{y}{$y$}
\psfrag{ys}{$\hat y$}
\psfrag{System}{System}
\includegraphics[scale=.60]{figs/machine_learning.eps}
\caption{\label{FIG-LEARN-MACHINE}Machine learning diagram.}
\end{figure}
% ----------------------------------------------------------------
For each input vector $\mathbf{x}$ the data set also includes the
corresponding output $y \in \Re^1$ (without loss of generality 
scalar outputs will be used).
The relationship between $\mathbf{x}$ and $y$ is 
given by the conditional PDF $p(y\mid\mathbf{x})$
which is unknown in most cases.

It is expected that the learning machine, after observing many input 
and output data pairs, will perform one of the two following actions:
%-----
\begin{enumerate}
\item Imitate the system's behavior through the prediction 
of $y$ values and generate $\hat y$ outputs as close 
to $y$ as possible.
\item Find out the operator that governs the system. 
Assuming that the actual function %performed by the system 
is $y=g(\mathbf{x})$, the learning machine must be capable of 
estimating $g(\cdot)$.
\end{enumerate}
%-----

The second situation has a higher degree of difficulty,
since it implies knowing or estimating the joint distribution
$p_{\mathbf{x}y}(\mathbf{x},y)$. 
In this case, the learning machine chooses, from
a universe of functions $f(\mathbf{x},\mathbf w)$ with
parameters $\mathbf w \in \mathcal{W}$ (the parameter space),
the one that best fits the system.
However, this problem has an ill-posed nature \cite{VAPNIK9801} 
and its solution is difficult.

\section{Learning process}

Vapnik considers the learning process as ``\emph{a process of
choosing an appropriate function from a given set of 
functions}'' \cite{VAPNIK9801}. 
%In special, when 
%dealing with finite training data, some restrictions need 
%to be imposed to the learning process.
Although there exist many learning methods, some general 
requirements may be cited \cite{VLADIMIR9801}:
%
%If the learning procedure aims at generating a model from 
%the data, the following requisites must be achieved
%\cite{VLADIMIR9801}:
%
\begin{enumerate}
\item A flexible and large enough set of candidate functions 
      $f(\mathbf{x},\mathbf w)$.
\item \emph{A priori} knowledge for controlling model
complexity. 
%Complex models are more flexible but they
%may result on overfitting  and poor generalization
%capability. Simple models are more rigid and may underestimate 
%model complexity. Estimating the appropriate fit of 
%model complexity to data complexity it not an easy task.
% ----------------------------------------------------
% -> complementar com metodos que estimam complexidade
%Although there exists some measures of complexity like 
%Vapnik-Chervonenkis dimension \cite{VAPNIK9801},
% ----------------------------------------------------

\item An inductive principle, that is, an idea of how to
combine the training set and the \emph{a priori} knowledge
in order to estimate the actual function that governs the system.
%establish a relationship between the data set and \apriori{}
%knowledge so that the function that represents the system
%can be estimated. 
Some inductive principles are regularization,
empirical risk minimization,
structural risk minimization and Bayesian inference
\cite{VLADIMIR9801}.

\item A learning algorithm, namely a procedure 
that specifies how to implement the inductive principle and
how to select the best function $f(\mathbf{x},\mathbf w)$.
\end{enumerate}
%==============================================
\section{Risk functional}
%==============================================
%
%In order to choose the better function that fits to the data set it is
%necessary to create a measure of discrepancy or loss.
%Thus, the machine learning would select a specific function  from the universe of
%approximating functions that better reproduce the system. % imitate
%
A measure of discrepancy (or loss) is required when choosing
the best function that fits the training set.
This measure will be used by the learning machine to select a
specific function from the universe of approximating functions.

Let $\mathcal F$ be the set composed of the functions
$f(\mathbf x,\mathbf w)$ with parameters $\mathbf w \in \mathcal W$,
defined on the vector space $\Re^m$.
Without loss of generality, consider the output of $f(\mathbf x,\mathbf w)$
to be a scalar,
%-----------------------------------
\begin{equation}
\begin{array}{rcl}
f:\Re^m & \longmapsto & \Re^1 \\
f(\mathbf x,\mathbf w) & \longmapsto & y. \\
\end{array}
\end{equation}
%-----------------------------------
Consider a set of $p$ training samples
$\mathcal D = \{\mathbf x_i,y_i\}_{i=1}^{p}$. It is assumed that
these samples are \emph{iid} and their probability density function 
$p(\mathbf x)$ is unknown.

Suppose that a measure of loss can be provided by a 
general function $L(y,f(\mathbf x,\mathbf w))$. 
%Loss functions provide large ouput value for large distances
%between 
%It is expected that as far the estimated $\hat y_i$ is from the true
%output ($y_i$) the loss function will provide larger values as result.
A commonly employed loss function is the \emph{quadratic loss function}:
%----------------------------------
\begin{equation}
L(y,f(\mathbf x,\mathbf w)) = \left(y-f(\mathbf x,\mathbf w)\right)^2.
\label{EQ-CAPLG-QUAD-LOSS}
\end{equation}
%----------------------------------
Another possible loss function is the absolute loss function:
%----------------------------------
\begin{equation}
L(y,f(\mathbf x,\mathbf w)) = \left|y-f(\mathbf x,\mathbf w)\right|.
\end{equation}
%----------------------------------
Both loss functions presented above are suitable for regression
problems with continuous outputs. When dealing with classification
problems, a usual loss function is
%----------------------------------
\begin{equation}
L(y,f(\mathbf x,\mathbf w)) = \left\{
\begin{array}{ll}
0,\ \ \ & \mathrm{if}\ \ y = f(\mathbf x,\mathbf w) \\
1,\ \ \ & \mathrm{if}\ \ y \neq f(\mathbf x,\mathbf w),
\end{array}
\right.
\label{EQ-CAPLG-INDICATOR}
\end{equation}
%----------------------------------
where $f(\mathbf x,\mathbf w)$ is known as an ``indicator
function'' \cite{VAPNIK9801}, since its output may represent only two
classes ($f(\mathbf x,\mathbf w) \in \{0, 1\}$).
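The three loss functions above can be sketched numerically. The following Python fragment is an illustration only, not part of the formal development; the function names are ours:

```python
import numpy as np

def quadratic_loss(y, y_hat):
    """Quadratic loss (y - f(x, w))^2, suited to regression."""
    return (y - y_hat) ** 2

def absolute_loss(y, y_hat):
    """Absolute loss |y - f(x, w)|, also suited to regression."""
    return np.abs(y - y_hat)

def zero_one_loss(y, y_hat):
    """0/1 loss for indicator functions: 0 if the labels agree, 1 otherwise."""
    return np.where(y == y_hat, 0, 1)

# Example: true output y = 2.0, prediction f(x, w) = 1.5
print(quadratic_loss(2.0, 1.5))   # 0.25
print(absolute_loss(2.0, 1.5))    # 0.5
print(int(zero_one_loss(1, 0)))   # 1
```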

These loss functions can be used to generate a risk functional
given by \cite{VLADIMIR9801}:
%----------------------------------
\begin{equation}
R(\mathbf w) = \int L(y,f(\mathbf x,\mathbf w)) 
p_{\mathbf{x}y}(\mathbf x,y)d\mathbf x dy,
\label{EQ-CAPLG-RISK-FUNCT}
\end{equation}
%----------------------------------
where the function $p_{\mathbf{x}y}(\mathbf x,y)$ is the joint 
probability density function of $\mathbf x$ and $y$.
The task of the learning machine is to find the minimum of the functional
$R(\mathbf w)$ over a given data set $\mathcal D$.
This is equivalent to finding the function
$f(\mathbf x)$ with a set of optimal parameters represented by $\mathbf w^*$,
namely $f(\mathbf x,\mathbf w^*)$. When the density $p_{\mathbf{x}y}(\mathbf x,y)$
is known, the problem is well posed and can be easily solved. However,
if the density $p_{\mathbf{x}y}(\mathbf x,y)$ is unknown, the problem 
becomes ill-posed
and its solution is not trivial (in fact, Equation
\ref{EQ-CAPLG-RISK-FUNCT}
can be seen as a Fredholm integral equation of the first kind \cite{WING9101},
a classic example of an ill-posed problem).

A way to overcome this problem is to use an \emph{inductive principle}.
The inductive principle is an attempt to find out the law that governs
the system. Moreover, the risk estimate it produces from a finite
number of samples must converge in probability to the risk functional.

%==============================================
\section{Empirical risk minimization principle}
%==============================================
Since the risk functional (Equation \ref{EQ-CAPLG-RISK-FUNCT}) is
normally unknown, an indirect measure of the risk functional
$R(\mathbf w)$ needs to be used. The empirical risk functional
\cite{VAPNIK9801} uses only the training data to create an approximation of
the risk functional, stated as:
%--------------------------
\begin{equation}
R_{\mathrm{emp}}(\mathbf w) = \frac{1}{p}\sum\limits_{i=1}^{p}
L(y_i,f(\mathbf x_i,\mathbf w)).
\end{equation}
%----------------------------
The empirical risk minimization (ERM) inductive principle presents a way
to approximate the minimization of the risk functional in terms of the
loss function.
It overcomes the difficulty of estimating the distribution
$p_{\mathbf{x}y}(\mathbf x,y)$,
since the empirical risk depends only on the data and the
parameter vector $\mathbf w$.
The approximation of the risk functional (for a fixed $\mathbf w$)
is based on the \emph{Law of Large Numbers}:
when the number of samples is large, the arithmetic mean of the
random variable $z$, defined as \cite{HAYKIN9901}
%-----------------------------------
$$
z = L(y,f(\mathbf x,\mathbf w)),
$$
%-----------------------------------
converges to its expected value.

For instance, if a quadratic loss function is used (Equation 
\ref{EQ-CAPLG-QUAD-LOSS}), the empirical risk is:
%--------------------------
\begin{equation}
R_{\mathrm{emp}}(\mathbf w) = \frac{1}{p}\sum\limits_{i=1}^{p}
(y_i-f(\mathbf x_i,\mathbf w))^2.
\end{equation}
%----------------------------
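For a concrete illustration, the following Python sketch evaluates this empirical risk for a simple linear model; the model form $f(\mathbf x,\mathbf w) = \mathbf w^T \mathbf x$ and the toy data are assumptions made for the example:

```python
import numpy as np

def empirical_risk(w, X, y):
    """R_emp(w) = (1/p) * sum_i (y_i - f(x_i, w))^2 for a linear
    model f(x, w) = x . w (an assumed example form)."""
    residuals = y - X @ w
    return np.mean(residuals ** 2)

# Toy training set D = {(x_i, y_i)} generated by y = 2x (no noise)
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])

print(empirical_risk(np.array([2.0]), X, y))  # 0.0 (perfect fit)
print(empirical_risk(np.array([1.0]), X, y))  # 14/3, about 4.67
```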

However, it cannot be ensured that the empirical risk
converges to the risk functional, and further conditions are necessary
to guarantee that the parameter vector $\mathbf w$ that minimizes
$R_{\mathrm{emp}}(\mathbf w)$ will minimize $R(\mathbf w)$ as well.
In other words, the empirical risk minimization (ERM) principle needs
to be consistent, that is, when the number of samples becomes large,
the minimum of the empirical risk must converge to the minimum of the
actual risk.

Assuming that, for a given sample size $p$, $\mathbf w_e$ minimizes
the empirical risk $R_{\mathrm{emp}}(\mathbf w)$ and, correspondingly,
$\mathbf w_o$ minimizes the risk functional $R(\mathbf w)$,
Figure \ref{FIG-LEARN-ERM-CONSISTENCY} illustrates the
consistency expected for $R_{\mathrm{emp}}(\mathbf w)$.
% ----------------------------------------------------------------
\begin{figure}[htb]
\centering
\psfrag{EX}{Risk Functional}
\psfrag{EM}{Empirical Risk}
\psfrag{p}{$p$}
\includegraphics[scale=.45]{figs/erm_consistency.eps}
\caption{Expected convergence of the empirical risk to the risk
functional as the number of vectors $p$ (horizontal axis)
becomes large.}
\label{FIG-LEARN-ERM-CONSISTENCY}
\end{figure}
% ----------------------------------------------------------------

Suppose that $\mathbf w^*$ provides a minimum to the empirical risk.
The ERM principle is consistent if $R_{\mathrm{emp}}(\mathbf w^*)$ converges
(in probability) to $R(\mathbf w^*)$, which is guaranteed when the
empirical risk converges uniformly to the risk functional \cite{VAPNIK8201}:
%--------------------------
\begin{equation}
P\left(
\underset{\mathbf w }{\sup}
\left|R(\mathbf w) - R_{\mathrm{emp}}(\mathbf w)\right|> \epsilon
\right) \rightarrow 0 \ \ \ \mathrm{as} \ \ \ p \to \infty,
\label{EQ-CAP2-CONV}
\end{equation}
%----------------------------
where $\epsilon$ is a small positive constant and $P(\cdot)$
denotes probability.

This consistency must hold for all approximating functions
and does not depend on the specific vectors present in the data set.
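The convergence pictured in Figure \ref{FIG-LEARN-ERM-CONSISTENCY} can be reproduced with a small simulation. In the Python sketch below (an illustration under assumed unit-variance Gaussian noise), the approximating function is fixed at the true parameters, so each loss value is the squared noise and the risk functional equals 1; the empirical risk approaches this value as $p$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_risk_sample(p):
    """With f fixed at the true parameters, each quadratic loss value is
    noise^2, so R(w) equals the noise variance (here 1.0) and
    R_emp(w) is the sample mean of p such values."""
    noise = rng.normal(0.0, 1.0, size=p)
    return np.mean(noise ** 2)

for p in (10, 1000, 100000):
    print(p, empirical_risk_sample(p))   # tends to 1.0 as p grows
```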

%==============================================
\section{VC dimension}
%==============================================

Consider a classification problem with two classes and a
loss function given by the indicator function (Equation
\ref{EQ-CAPLG-INDICATOR}). Vapnik defines the VC dimension as follows
\cite{VAPNIK9801}:
\begin{quote}
``The VC dimension of a set of indicator functions
$f(\mathbf x, \mathbf w)$, $\mathbf{w} \in \mathcal W$,
is equal to the largest
number $h$ of vectors $\mathbf x_1,\ldots,\mathbf x_h$ that can be
separated into  two different classes in all the $2^h$ possible ways
of combining them.''
\end{quote}

In other words, the VC dimension measures the classification power
(capacity) of a set of indicator functions. Note that this definition
does not make any assumption about the distribution $p_{\mathbf{x}y}(\mathbf x,y)$
and considers a single sample of size $p$, not an average over every
possible sample of size $p$.

For example, a set of linear indicator functions
%------------------------------------
\begin{equation}
f(\mathbf x,\mathbf w) = \left\{
\begin{array}{ll}
0,\ \ \ & \mathrm{if}\ \ \sum\limits_{i=1}^{m}w_ix_i + w_0 < 0 \\
1,\ \ \ & \mathrm{if}\ \ \sum\limits_{i=1}^{m}w_ix_i + w_0 \geq 0
\end{array}
\right.
\end{equation}
%------------------------------------
has VC dimension $h=m+1$ (the number 
of free parameters \cite{HAYKIN9901}). 

Figure \ref{FIG-VCDIMENSION} 
illustrates this case for a two-dimensional space \cite{Burges98}. 
Three vectors are represented in eight ($2^3$) possible situations
when only two class labels are considered (``square'' class or 
``circle'' class). The straight line splits the space into two
parts and the small arrow indicates on which part points
are considered as belonging to the ``circle'' class.

The linear indicator 
function can separate all eight cases, hence its VC dimension is 3. If 
four or more vectors were used in this case, linear indicator functions 
could not separate all cases and a set of indicator functions with larger 
VC dimension would have to be used.
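The shattering argument can be verified by brute force. In the sketch below (an illustration; the perceptron is used only as a convenient separability test, since it converges exactly when a separating line exists), all $2^3$ labelings of three non-collinear points are checked, and the XOR labeling of four points is shown not to be linearly separable:

```python
import itertools
import numpy as np

def separable(points, labels, epochs=1000):
    """Heuristic linear-separability test via the perceptron algorithm:
    it converges iff a separating line exists (epochs cap the search)."""
    X = np.hstack([points, np.ones((len(points), 1))])  # append bias w_0
    y = np.where(np.array(labels) == 1, 1, -1)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:       # misclassified (or on the boundary)
                w += yi * xi
                updated = True
        if not updated:
            return True                   # all points correctly separated
    return False

three = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # non-collinear
shattered = all(separable(three, lab)
                for lab in itertools.product([0, 1], repeat=3))
print(shattered)                          # True: all 2^3 labelings separable

four = np.array([[0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
xor_labels = [0, 0, 1, 1]                 # the XOR arrangement
print(separable(four, xor_labels))        # False: no line separates XOR
```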

An important result from VC dimension theory is the possibility of
estimating the difference between the empirical risk and the
risk functional (actual risk).
The VC dimension provides the means to bound
the rate of uniform convergence (Equation \ref{EQ-CAP2-CONV}).
See Vapnik \cite{VAPNIK9801} for a detailed description on bounds 
of convergence.
% ----------------------------------------------------------------
\begin{figure}[htb]
\centering
\includegraphics[scale=.5]{figs/vcdimension2.eps}
\caption[VC dimension for linear indicator functions in a two-dimensional space]
{
VC dimension for linear indicator functions in a two-dimensional space. The 
three vectors can be combined in $2^3=8$ different arrangements 
when two class labels are considered.
}
\label{FIG-VCDIMENSION}
\end{figure}
% ----------------------------------------------------------------
%==============================================
\section{Structural risk minimization principle}
%==============================================

Consider the loss function described in Equation \ref{EQ-CAPLG-INDICATOR},
with 0 and 1 as possible outputs. An important result from statistical
learning theory is that, for any $\eta$ in the range $[0,1]$,
the following bound
%
\begin{equation}
R(\mathbf w) \leq R_{\mathrm{emp}}(\mathbf w) + 
\sqrt{
\frac
{h(\log(2p/h)+1)-\log(\eta/4)}
{p}
}
\label{EQ-RISK-BOUND}
\end{equation}
%
holds with probability ($1-\eta$), 
where $h$ is the VC dimension and the right hand side of Equation 
\ref{EQ-RISK-BOUND} is known as ``risk bound'' \cite{VAPNIK9801,Burges98}.

For large data sets\footnote{A data 
set may be considered large when 
the ratio $p/h$ is large, where $h$ is the VC dimension 
of the approximating function set. Small data sets are associated
with small values for the ratio $p/h$, for instance 
$p/h < 20$ \cite{VAPNIK9801}.},
the process of minimizing the risk functional
may be based on the ERM inductive principle alone, since
the last term of Equation \ref{EQ-RISK-BOUND} becomes
much smaller than the empirical risk.
This term is called the ``VC confidence'' and is related
to the VC dimension, that is, to the structure of the
learning machine.
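The VC confidence term can be evaluated numerically. The Python sketch below (with arbitrary illustrative values for the VC dimension $h$ and the confidence parameter $\eta$) shows how this term shrinks as the ratio $p/h$ grows:

```python
import numpy as np

def vc_confidence(p, h, eta=0.05):
    """VC confidence term of the risk bound:
    sqrt((h * (log(2p/h) + 1) - log(eta/4)) / p)."""
    return np.sqrt((h * (np.log(2 * p / h) + 1) - np.log(eta / 4)) / p)

h = 10                                    # assumed VC dimension
for p in (100, 1000, 10000, 100000):
    print(p, p / h, vc_confidence(p, h))  # shrinks as p/h grows
```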

However, for small samples, the difference between empirical 
risk and functional risk becomes large and
it is necessary to minimize the VC confidence
and the empirical risk simultaneously. In this case, 
a new inductive principle, known as 
``Structural Risk Minimization'' (SRM), is introduced to minimize the 
VC confidence. The SRM principle uses VC dimension
as a control parameter for building learning machines with different 
complexities, as follows.
% ----------------------------------------------------------------
\begin{figure}[htb]
\centering
\psfrag{S1}{$S_1$}
\psfrag{S2}{$S_2$}
\psfrag{S3}{$S_3$}
\psfrag{Sk}{$S_k$}
\includegraphics[scale=.45]{figs/srm.eps}
\caption{Nested structures for different model complexities.}
\label{FIG-LEARN-SRM}
\end{figure}
% ----------------------------------------------------------------

Consider a generic loss function
$L(y,f(\mathbf x,\mathbf w))$
and a family of structures
$S_k = \{ L(y,f(\mathbf x,\mathbf w)):\ \mathbf w \in \mathcal W_k\}$,
where each subset $\mathcal W_k$ of the parameter space defines one
structure. Suppose that the structures
can be organized in a nested way,
$$
S_1 \subset S_2 \subset \cdots \subset S_k \subset \cdots
$$
where each $S_k$ has a finite VC dimension. The structures
$S_k$ are arranged in increasing order of their VC dimension,
as follows:
$$
h_1 < h_2 < \cdots < h_k < \cdots
$$
Therefore, the process of building a learning machine starts with an
\emph{a priori} selection of the structure, given a loss function and an
approximating function. For instance, if a polynomial is chosen as the
approximating function, its degree can be used as a parameter to select
distinct structures.
Furthermore, an optimal trade-off between
error (the ERM principle) and structure (the SRM principle),
over the data set, needs to be established for minimizing
the risk functional.
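The polynomial-degree example can be sketched as a crude SRM procedure: within each structure $S_k$ (polynomials of degree $k$), ERM is performed by least squares, and the structure minimizing the guaranteed risk (empirical risk plus VC confidence) is selected. The following Python fragment is an illustrative sketch only; the data, the noise level, and the use of $h = k+1$ as the VC dimension are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data sampled from a quadratic system with additive noise
# (system, noise level, and sample size are illustrative choices)
x = np.linspace(-1.0, 1.0, 30)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.2, size=x.size)

def vc_confidence(p, h, eta=0.05):
    """VC confidence term of the risk bound."""
    return np.sqrt((h * (np.log(2 * p / h) + 1) - np.log(eta / 4)) / p)

best = None
for degree in range(1, 8):              # nested structures of growing degree
    coeffs = np.polyfit(x, y, degree)   # ERM inside the structure S_degree
    r_emp = np.mean((np.polyval(coeffs, x) - y) ** 2)
    h = degree + 1                      # free parameters of the polynomial
    guaranteed = r_emp + vc_confidence(x.size, h)
    if best is None or guaranteed < best[1]:
        best = (degree, guaranteed)

print("selected degree:", best[0])      # expected to match the system (2)
```

Note that raising the degree always lowers the empirical risk, but beyond the true degree the VC confidence grows faster than the empirical risk falls, so the guaranteed risk selects the intermediate structure.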

\section{Conclusion}

This chapter presented some basic concepts of learning theory,
in particular the ERM and SRM principles. These principles provide
a solid basis for constructing efficient learning machines.
The next chapter presents a kind of learning machine called the Support
Vector Machine, which implements both the SRM and ERM principles in its
formulation.



