\chapter{Linear Models} \label{sec:loss}

\chapterquote{The essence of mathematics is not to make simple things complicated, but to make complicated things simple.}{Stanley~Gudder}

\begin{learningobjectives}
\item Define and plot four surrogate loss functions: squared loss,
  logistic loss, exponential loss and hinge loss.
\item Compare and contrast the optimization of 0/1 loss and surrogate
  loss functions.
\item Solve the optimization problem for squared loss with a
  quadratic regularizer in closed form.
\item Implement and debug gradient descent and subgradient descent.
\end{learningobjectives}

\dependencies{}

\newthought{In \chref{sec:perc}, you learned} about the perceptron
algorithm for linear classification.  This was both a \emph{model}
(linear classifier) and \emph{algorithm} (the perceptron update rule)
in one.  In this section, we will separate these two, and consider
general ways for optimizing linear models.  This will lead us into
some aspects of optimization (aka mathematical programming), but not
very far.  At the end of this chapter, there are pointers to more
literature on optimization for those who are interested.

The basic idea of the perceptron is to run a particular algorithm
until a linear separator is found.  You might ask: are there better
algorithms for finding such a linear separator?  We will follow this
idea and formulate a learning problem as an explicit optimization
problem: find me a linear separator that is not too complicated.  We
will see that finding an ``optimal'' separator is actually
computationally prohibitive, and so will need to ``relax'' the
optimality requirement.  This will lead us to a \concept{convex}
objective that combines a loss function (how well are we doing on the
training data?) and a regularizer (how complicated is our learned
model?).  This learning framework is known as both \concept{Tikhonov
  regularization} and \concept{structural risk minimization}.

\section{The Optimization Framework for Linear Models}

You have already seen the perceptron as a way of finding a weight
vector $\vw$ and bias $b$ that do a good job of separating positive
training examples from negative training examples.  The perceptron is
a \concept{model} and \concept{algorithm} in one.  Here, we are
interested in \emph{separating} these issues.  We will focus on linear
models, like the perceptron.  But we will think about other, more
generic ways of finding good parameters of these models.

The goal of the perceptron was to find a \concept{separating
  hyperplane} for some training data set.  For simplicity, you can
ignore the issue of overfitting (but just for now!).  Not all data
sets are linearly separable.  In the case that your training data
\emph{isn't} linearly separable, you might want to find the hyperplane
that makes the \emph{fewest errors} on the training data.  We can
write this down as a formal mathematical \concept{optimization problem}
as follows:
%
\optimizeuc{loss:zeroone}{\vw,b}{\sum_n \Ind[y_n (\dotp{\vec w}{\vx_n}+b) \leq 0]}
%
In this expression, you are optimizing over two variables, $\vw$ and
$b$.  The \concept{objective function} is the thing you are trying to
minimize.  In this case, the objective function is simply the
\concept{error rate} (or \concept{0/1 loss}) of the linear classifier
parameterized by $\vw,b$.  In this expression, $\Ind[\cdot]$ is the
\concept{indicator function}: it is one when $(\cdot)$ is true and
zero otherwise.

\thinkaboutit{You should remember the $y\dotp{\vw}{\vx}$ trick from
  the perceptron discussion.  If not, re-convince yourself that this
  is doing the right thing.}

We know that the perceptron algorithm is guaranteed to find parameters
for this model if the data is linearly separable.  In other words, if
the optimum of Eq~\eqref{opt:loss:zeroone} is zero, then the
perceptron will efficiently find parameters for this model.  The
notion of ``efficiency'' depends on the margin of the data for the
perceptron.

You might ask: what happens if the data is \emph{not} linearly
separable?  Is there an efficient algorithm for finding an optimal
setting of the parameters?  Unfortunately, the answer is \emph{no.}
There is no polynomial time algorithm for solving
Eq~\eqref{opt:loss:zeroone}, unless P=NP.  In other words, this
problem is NP-hard.  Sadly, the proof of this is quite complicated and
beyond the scope of this book, but it relies on a reduction from a
variant of satisfiability.  The key idea is to turn a satisfiability
problem into an optimization problem where a clause is satisfied
exactly when the hyperplane correctly separates the data.

You might then come back and say: okay, well I don't really need an
\emph{exact} solution.  I'm willing to have a solution that makes one
or two more errors than it has to.  Unfortunately, the situation is
really bad.  Zero/one loss is NP-hard to even \emph{approximately
  minimize}.  In other words, there is no efficient algorithm for even
finding a solution that's a small constant worse than optimal.  (The
best known constant at this time is $418/415 \approx 1.007$.)

However, before getting too disillusioned about this whole enterprise
(remember: there's an entire chapter about this framework, so it must
be going somewhere!), you should remember that optimizing
Eq~\eqref{opt:loss:zeroone} perhaps isn't even what you want to do!
In particular, all it says is that you will get minimal \emph{training
  error.}  It says nothing about what your \emph{test error} will be
like.  In order to try to find a solution that will \emph{generalize}
well to test data, you need to ensure that you do not overfit the
data.  To do this, you can introduce a \concept{regularizer} over the
parameters of the model.  For now, we will be vague about what this
regularizer looks like, and simply call it an arbitrary function
$R(\vw,b)$.  This leads to the following, \concept{regularized
  objective}:
%
\optimizeuc{loss:zeroonereg}{\vw,b}{\sum_n \Ind[y_n (\dotp{\vec
    w}{\vx_n}+b) \leq 0] + \la R(\vw,b)}
%
In Eq~\eqref{opt:loss:zeroonereg}, we are now trying to optimize a
\emph{trade-off} between a solution that gives low training error (the
first term) and a solution that is ``simple'' (the second term).  You
can think of the maximum depth hyperparameter of a decision tree as a
form of regularization for trees.  Here, $R$ is a form of
regularization for hyperplanes.  In this formulation, $\la$ becomes a
\concept{hyperparameter} for the optimization.

\thinkaboutit{Assuming $R$ does the ``right thing,'' what value(s) of
  $\la$ will lead to overfitting?  What value(s) will lead to
  underfitting?}

The key remaining questions, given this formalism, are:
\begin{itemize}
\item How can we adjust the optimization problem so that there
  \emph{are} efficient algorithms for solving it?
\item What are good regularizers $R(\vw,b)$ for hyperplanes?
\item Assuming we can adjust the optimization problem appropriately,
  what algorithms exist for efficiently solving this regularized
  optimization problem?
\end{itemize}
We will address these three questions in the next sections.

\section{Convex Surrogate Loss Functions}

You might ask: why is optimizing zero/one loss so hard?  Intuitively,
one reason is that small changes to $\vw,b$ can have a large impact on
the value of the objective function.  For instance, if there is a
positive training example with $\dotp{\vw}{\vx}+b = -0.0000001$, then
adjusting $b$ upwards by $0.00000011$ will decrease your training error
by one example.  But adjusting it upwards by $0.00000009$ will have no
effect.
This makes it really difficult to figure out good ways to adjust the
parameters.

\Figure{loss:zeroone}{plot of zero/one versus margin}

To see this more clearly, it is useful to look at plots that relate
\emph{margin} to \emph{loss}.  Such a plot for zero/one loss is shown
in Figure~\ref{fig:loss:zeroone}.  In this plot, the horizontal axis
measures the margin of a data point and the vertical axis measures the
loss associated with that margin.  For zero/one loss, the story is
simple.  If you get a positive margin (i.e., $y(\dotp{\vw}{\vx}+b)>0$)
then you get a loss of zero.  Otherwise you get a loss of one.  By
thinking about this plot, you can see how changes to the parameters
that change the margin \emph{just a little bit} can have an enormous
effect on the overall loss.

\Figure{loss:sigmoidzeroone}{plot of zero/one versus margin and an
  S version of it}

You might decide that a reasonable way to address this problem is to
replace the non-smooth zero/one loss with a smooth approximation.
With a bit of effort, you could probably concoct an ``S''-shaped
function like that shown in Figure~\ref{fig:loss:sigmoidzeroone}.  The
benefit of using such an S-function is that it is smooth, and
potentially easier to optimize.  The difficulty is that it is not
\concept{convex}.

If you remember from calculus, a convex function is one that looks
like a happy face ($\smile$).  (On the other hand, a \concept{concave}
function is one that looks like a sad face ($\frown$); an easy
mnemonic is that you can hide under a con{\bf cave} function.)  There
are two equivalent definitions of a convex function.  The first is
that its second derivative is always non-negative (when the function
is twice differentiable).  The second, more geometric, definition is
that any \concept{chord} of the function lies
above it.  This is shown in Figure~\ref{fig:loss:convex}.  There you
can see a convex function and a non-convex function, both with two
chords drawn in.  In the case of the convex function, the chords lie
above the function.  In the case of the non-convex function, there are
parts of the chord that lie below the function.

\Figure{loss:convex}{plot of convex and non-convex functions
  with two chords each}

Convex functions are nice because they are \emph{easy to minimize}.
Intuitively, if you drop a ball anywhere in a convex function, it will
eventually get to the minimum.  This is not true for non-convex
functions.  For example, if you drop a ball on the very left end of
the S-function from Figure~\ref{fig:loss:sigmoidzeroone}, it will not
go anywhere.

This leads to the idea of {\bf convex surrogate loss functions}.
Since zero/one loss is hard to optimize, you want to optimize
something else, instead.  Since convex functions are easy to optimize,
we want to approximate zero/one loss with a convex function.  This
approximating function will be called a \concept{surrogate loss}.  The
surrogate losses we construct will always be \emph{upper bounds} on
the true loss function: this guarantees that if you minimize the
surrogate loss, you are also pushing down the real loss.

\Figure{loss:surrogate}{surrogate loss fns}

There are four common surrogate loss functions, each with their own
properties: \concept{hinge loss}, \concept{logistic loss},
\concept{exponential loss} and \concept{squared loss}.  These are
shown in Figure~\ref{fig:loss:surrogate} and defined below.  These are
defined in terms of the true label $y$ (which is just $\{-1,+1\}$) and
the predicted value $\hat y = \dotp{\vw}{\vx}+b$.
%
\begin{align}
\text{Zero/one:}    && \ell\xth{0/1}(y,\hat y) &= \Ind[y \hat y \leq 0] \\
\text{Hinge:}       && \ell\xth{hin}(y,\hat y) &= \max \{ 0, 1-y\hat y \}\\
\text{Logistic:}    && \ell\xth{log}(y,\hat y) &= \frac 1 {\log2} \log\left( 1 + \exp[-y\hat y]\right)\\
\text{Exponential:} && \ell\xth{exp}(y,\hat y) &= \exp[-y\hat y]\\
\text{Squared:}     && \ell\xth{sqr}(y,\hat y) &= (y-\hat y)^2
\end{align}
%
In the definition of logistic loss, the $\frac 1 {\log2}$ term out
front is there simply to ensure that $\ell\xth{log}(y,0) = 1$.  This
ensures, like all the other surrogate loss functions, that logistic
loss upper bounds the zero/one loss.  (In practice, people typically
omit this constant since it does not affect the optimization.)
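These definitions translate directly into code.  Below is a minimal
Python sketch (the function names are our own), which also checks the
upper-bound claim at a few margins:

```python
import math

# The zero/one loss and the four surrogate losses defined above;
# yhat denotes the raw prediction w.x + b, and y is in {-1, +1}.
def zero_one(y, yhat):
    return 1.0 if y * yhat <= 0 else 0.0

def hinge(y, yhat):
    return max(0.0, 1.0 - y * yhat)

def logistic(y, yhat):
    # the 1/log(2) factor ensures logistic(y, 0) == 1
    return math.log(1.0 + math.exp(-y * yhat)) / math.log(2.0)

def exponential(y, yhat):
    return math.exp(-y * yhat)

def squared(y, yhat):
    return (y - yhat) ** 2

# Each surrogate upper bounds zero/one loss at every margin tested:
for yhat in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    for y in (-1, +1):
        for ell in (hinge, logistic, exponential, squared):
            assert ell(y, yhat) >= zero_one(y, yhat)
```

Note how the check exercises both labels: for $y=+1$ and $\hat y = 0$
every surrogate equals exactly $1$, matching the zero/one loss at the
decision boundary.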

There are two big differences in these loss functions.  The first
difference is how ``upset'' they get by erroneous predictions.  In the
case of hinge loss and logistic loss, the growth of the function as
$\hat y$ goes negative is linear.  For squared loss and exponential
loss, it is super-linear.  This means that exponential loss would
rather get a few examples a little wrong than one example really
wrong.  The other difference is how they deal with very confident
correct predictions.  Once $y\hat y>1$, hinge loss does not care any
more, but logistic and exponential still think you can do better.  On
the other hand, squared loss thinks it's just as bad to predict $+3$
on a positive example as it is to predict $-1$ on a positive example.

\section{Weight Regularization}

In our learning objective, Eq~\eqref{opt:loss:zeroonereg}, we had a term
correspond to the zero/one loss on the training data, plus a
\concept{regularizer} whose goal was to ensure that the learned
function didn't get too ``crazy.''  (Or, more formally, to ensure that
the function did not overfit.)  If you replace the zero/one loss with a
surrogate loss, you obtain the following objective:
%
\optimizeuc{loss:reg}{\vw,b}{%
  \sum_n \ell(y_n, \dotp{\vec w}{\vx_n}+b) + \la R(\vw,b)}
%
The question is: what should $R(\vw,b)$ look like?

From the discussion of surrogate loss functions, we would like to
ensure that $R$ is convex.  Otherwise, we will be back to the point
where optimization becomes difficult.  Beyond that, a common desire is
that the components of the weight vector (i.e., the $w_d$s) should be
small (close to zero).  This is a form of \concept{inductive bias}.

Why are small values of $w_d$ good?  Or, more precisely, why do small
values of $w_d$ correspond to \emph{simple functions}?  Suppose that
we have an example $\vx$ with label $+1$.  We might believe that other
examples $\vx'$ that are near $\vx$ should also have label $+1$.
For example, if I obtain $\vx'$ by taking $\vx$ and changing the first
component by some small value $\ep$ and leaving the rest the same, you
might think that the classification would be the same.  If you do
this, the difference between $\hat y$ and $\hat y'$ will be exactly
$\ep w_1$.  So if $w_1$ is reasonably small, this is unlikely to have
much of an effect on the classification decision.  On the other hand,
if $w_1$ is large, this could have a large effect.

Another way of saying the same thing is to look at the derivative of
the predictions as a function of $w_1$.  The derivative of
$\dotp{\vw}{\vx}+b$ with respect to $w_1$ is:
\begin{equation}
\frac {\partial \left[\dotp{\vw}{\vx}+b\right]} {\partial w_1}
= \frac {\partial \left[\sum_d w_d x_d+b\right]} {\partial w_1}
= x_1
\end{equation}
Interpreting the derivative as the rate of change, we can see that the
rate of change of the prediction function is proportional to the
individual weights.  So if you want the function to change slowly, you
want to ensure that the weights stay small.

One way to accomplish this is to simply use the norm of the weight
vector.  Namely $R\xth{norm}(\vw,b) = \norm{\vw} = \sqrt{\sum_d
  w_d^2}$.  This function is convex and smooth, which makes it easy to
minimize.  In practice, it's often easier to use the squared norm,
namely $R\xth{sqr}(\vw,b) = \norm{\vw}^2 = \sum_d w_d^2$ because it
removes the ugly square root term and remains convex.  An alternative
to using the sum of squared weights is to use the sum of absolute
weights: $R\xth{abs}(\vw,b) = \sum_d \ab{w_d}$.  Both of these norms
are convex.

\thinkaboutit{Why do we not regularize the bias term $b$?}

In addition to small weights being good, you could argue that
\emph{zero} weights are better.  If a weight $w_d$ goes to zero, then
this means that feature $d$ is not used at all in the classification
decision.  If there are a large number of irrelevant features, you
might want as many weights to go to zero as possible.  This suggests
an alternative regularizer: $R\xth{cnt}(\vw,b) = \sum_d \Ind[w_d \neq
0]$.

\thinkaboutit{Why might you not want to use $R\xth{cnt}$ as a regularizer?}

This line of thinking leads to the general concept of
\concept{$p$-norms}.  (Technically these are called $\ell_p$ (or ``ell
$p$'') norms, but this notation clashes with the use of $\ell$ for
``loss.'')  This is a family of norms that all have the same general
flavor.  We write $\norm{\vw}_p$ to denote the $p$-norm of $\vw$.
%
\begin{equation} \label{eq:loss:l}
  \norm{\vw}_p = \left( \sum_d \ab{w_d}^p \right)^{\frac 1 p}
\end{equation}
%
You can check that the $2$-norm exactly corresponds to the usual
Euclidean norm, and that the $1$-norm corresponds to the ``absolute''
regularizer described above.
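As a quick sanity check, Eq~\eqref{eq:loss:l} can be implemented in a
few lines of Python (a sketch; \texttt{p\_norm} is our own name):

```python
# A direct implementation of the p-norm, with checks that the 2-norm
# matches the Euclidean norm and the 1-norm the "absolute" regularizer.
def p_norm(w, p):
    return sum(abs(wd) ** p for wd in w) ** (1.0 / p)

w = [3.0, -4.0]
assert abs(p_norm(w, 2) - 5.0) < 1e-12   # Euclidean: sqrt(9 + 16) = 5
assert abs(p_norm(w, 1) - 7.0) < 1e-12   # sum of absolute values
# As p grows, the p-norm approaches max_d |w_d| (the "max norm"):
assert abs(p_norm(w, 100) - 4.0) < 0.1
```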

\thinkaboutit{You can actually identify the $R\xth{cnt}$ regularizer
  with a $p$-norm as well.  Which value of $p$ gives it to you?
  (Hint: you may have to take a limit.)}

\TODOFigure{loss:norms2d}{level sets of the same $p$-norms}

When $p$-norms are used to regularize weight vectors, the interesting
aspect is how they trade-off multiple features.  To see the behavior
of $p$-norms in two dimensions, we can plot their \concept{contour}
(or \concept{level-set}).  Figure~\ref{fig:loss:norms2d} shows the
contours of several $p$-norms in two dimensions.  Each line denotes
the set of two-dimensional vectors to which the norm assigns a value
of $1$.  By changing the value of $p$, you can interpolate between a
square (the so-called ``max norm''), down to a circle ($2$-norm),
diamond ($1$-norm) and pointy-star-shaped-thing ($p<1$ norm).

\thinkaboutit{The max norm corresponds to $\lim_{p \rightarrow
    \infty}$.  Why is this called the max norm?}

In general, smaller values of $p$ ``prefer'' sparser vectors.  You can
see this by noticing that the contours of small $p$-norms ``stretch''
out along the axes.  It is for this reason that small $p$-norms tend
to yield weight vectors with many zero entries (aka \concept{sparse}
weight vectors).  Unfortunately, for $p<1$ the norm becomes
non-convex.  This makes the $1$-norm the smallest $p$-norm that is
still convex, and hence a popular choice for sparsity-seeking
applications.

\section{Optimization with Gradient Descent}

\begin{mathreview}{Gradients}
  A gradient is a multidimensional generalization of a derivative.
  Suppose you have a function $f : \R^D \fto \R$ that takes a vector $\vx = \langle x_1, x_2, \dots, x_D \rangle$ as input and produces a scalar value as output.
  You can differentiate this function with respect to any one of its inputs; for instance, you can compute $\frac {\partial f} {\partial x_5}$ to get the derivative with respect to the fifth input.
  The \concept{gradient} of $f$ is just the vector consisting of the derivative of $f$ with respect to each of its input coordinates independently, and is denoted $\grad f$, or, when the input to $f$ is ambiguous, $\grad_{\vec x} f$.
  This is defined as:
  ~
  \begin{align}
    \grad_{\vec x} f &= \left\langle \frac {\partial f} {\partial x_1}~~,~~ 
                                    \frac {\partial f} {\partial x_2}~~,~~ 
                                    \dots~~,~~
                                    \frac {\partial f} {\partial x_D} \right\rangle
  \end{align}
  ~
  For example, consider the function $f(x_1, x_2, x_3) = x_1^3 + 5 x_1 x_2 - 3 x_2 x_3^2$.
  The gradient is:
  ~
  \begin{align}
    \grad_{\vec x} f &= \left\langle
                       3 x_1^2 + 5 x_2~~,~~
                       5 x_1 - 3 x_3^2~~,~~
                       -6 x_2 x_3 \right\rangle
  \end{align}
  Note that if $f : \R^D \fto \R$, then $\grad f : \R^D \fto \R^D$.
  If you evaluate $\grad f(\vec x)$, this will give you the gradient \emph{at} $\vec x$, a vector in $\R^D$.
  This vector can be interpreted as the direction of \concept{steepest ascent}: namely, if you were to travel an infinitesimal amount in the direction of the gradient, you would go uphill (i.e., increase $f$) the most.
\end{mathreview}
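To guard against algebra mistakes, a gradient like the one in the
example above can be checked numerically with finite differences.  A
small Python sketch (the helper names are our own):

```python
# Finite-difference check of the worked gradient example:
# f(x1, x2, x3) = x1^3 + 5*x1*x2 - 3*x2*x3^2.
def f(x):
    x1, x2, x3 = x
    return x1 ** 3 + 5 * x1 * x2 - 3 * x2 * x3 ** 2

def grad_f(x):
    # the hand-derived gradient from the math review
    x1, x2, x3 = x
    return [3 * x1 ** 2 + 5 * x2, 5 * x1 - 3 * x3 ** 2, -6 * x2 * x3]

def numeric_grad(f, x, eps=1e-6):
    # central differences: (f(x + eps*e_d) - f(x - eps*e_d)) / (2 eps)
    g = []
    for d in range(len(x)):
        xp = list(x); xp[d] += eps
        xm = list(x); xm[d] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

x = [1.0, 2.0, -1.0]
for analytic, numeric in zip(grad_f(x), numeric_grad(f, x)):
    assert abs(analytic - numeric) < 1e-4
```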

Envision the following problem.  You're taking up a new hobby:
blindfolded mountain climbing.  Someone blindfolds you and drops you
on the side of a mountain.  Your goal is to get to the peak of the
mountain as quickly as possible.  All you can do is feel the mountain
where you are standing, and take steps.  How would you get to the top
of the mountain?  Perhaps you would feel to find out what direction
feels the most ``upward'' and take a step in that direction.  If you
do this repeatedly, you might hope to get to the top of the mountain.
(Actually, if your friend promises always to drop you on purely
concave mountains, you \emph{will} eventually get to the peak!)

The idea of gradient-based methods of optimization is exactly the
same.  Suppose you are trying to find the maximum of a function
$f(\vx)$.  The optimizer maintains a current estimate of the parameter
of interest, $\vx$.  At each step, it measures the \concept{gradient}
of the function it is trying to optimize.  This measurement occurs
\emph{at} the current location, $\vx$.  Call the gradient $\vg$.  It
then takes a step in the direction of the gradient, where the size of
the step is controlled by a parameter $\eta$ (eta).  The complete step
is $\vx \leftarrow \vx + \eta \vg$.  This is the basic idea of
\concept{gradient ascent}.

The opposite of gradient ascent is \concept{gradient descent}.  All of
our learning problems will be framed as \emph{minimization} problems
(trying to reach the bottom of a ditch, rather than the top of a
hill).  Therefore, descent is the primary approach you will use.
One of the major conditions for gradient descent being able to find
the true \concept{global minimum} of its objective function is
convexity.  Without convexity, all is lost.


\newalgorithm{loss:gd}%
  {\FUN{GradientDescent}($\cF$, \VAR{K}, \VAR{$\eta_1$}, \dots)}
  {
\SETST{$\vz\zth$}{$\langle \CON0,\CON0, \dots, \CON0\rangle$}
  \COMMENT{initialize variable we are optimizing}
\FOR{\VAR{k} = \CON{1} \dots \VAR{K}}
\SETST{$\vg\kth$}{$\left. \grad_\vz \cF \right\vert_{\VARm{\vz\kpth}}$}
  \COMMENT{compute gradient at current location}
\SETST{$\vz\kth$}{$\VARm{\vz\kpth} - \VARm{\eta}\VARm{\kth} \VARm{\vg\kth}$}
  \COMMENT{take a step down the gradient}
\ENDFOR
\RETURN \VARm{$\vz\Kth$}
}


The gradient descent algorithm is sketched in
Algorithm~\ref{alg:loss:gd}.  The function takes as arguments the
function $\cF$ to be minimized, the number of iterations $K$ to run
and a sequence of learning rates $\eta_1, \dots, \eta_K$.  (This is to
address the case that you might want to start your mountain climbing
taking large steps, but only take small steps when you are close to
the peak.)
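In Python, this algorithm might be sketched as follows (the quadratic
example objective is our own illustrative choice):

```python
# A sketch of gradient descent: minimize F given its gradient grad_F,
# a starting point z0, and one step size eta per iteration.
def gradient_descent(grad_F, z0, etas):
    z = list(z0)                                     # z^(0)
    for eta in etas:                                 # k = 1 .. K
        g = grad_F(z)                                # gradient at z^(k-1)
        z = [zd - eta * gd for zd, gd in zip(z, g)]  # step downhill
    return z

# Example: minimize F(z) = (z1 - 1)^2 + (z2 + 2)^2, minimum at (1, -2).
grad = lambda z: [2 * (z[0] - 1), 2 * (z[1] + 2)]
z = gradient_descent(grad, [0.0, 0.0], [0.1] * 200)
assert abs(z[0] - 1) < 1e-6 and abs(z[1] + 2) < 1e-6
```

Passing a whole sequence of step sizes mirrors the pseudocode's
$\eta_1, \dots, \eta_K$: you can supply a decaying schedule simply by
changing the list.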

The only real work you need to do to apply a gradient descent method
is be able to compute derivatives.  For concreteness, suppose that you
choose exponential loss as a loss function and the $2$-norm as a
regularizer.  Then, the regularized objective function is:
%
\begin{equation}
\cL(\vw,b) =
\sum_n 
  \exp\big[-y_n (\dotp{\vec w}{\vx_n}+b)\big] +
 \frac \la 2 \norm{\vw}^2
\end{equation}
%
The only ``strange'' thing in this objective is that we have replaced
$\la$ with $\frac \la 2$.  The reason for this change is just to make
the gradients cleaner.  We can first compute derivatives with respect
to $b$:
%
\begin{align}
\frac {\partial\cL} {\partial b}
&= \partialby{b}\sum_n \exp\big[-y_n (\dotp{\vec w}{\vx_n}+b)\big] + \partialby{b}\frac \la 2 \norm{\vw}^2\\
&= \sum_n \partialby{b} \exp\big[ -y_n (\dotp{\vec w}{\vx_n}+b)\big] + 0\\
&= \sum_n \left( \partialby{b} -y_n (\dotp{\vec w}{\vx_n}+b) \right) \exp\big[ -y_n (\dotp{\vec w}{\vx_n}+b)\big]\\
&= - \sum_n y_n \exp\big[ -y_n (\dotp{\vec w}{\vx_n}+b)\big]
\end{align}
%
Before proceeding, it is worth thinking about what this says.  From a
practical perspective, the optimization will operate by updating $b
\leftarrow b - \eta \partialof{\cL}{b}$.  Consider positive examples:
examples with $y_n=+1$.  We would hope for these examples that the
current prediction, $\dotp{\vw}{\vx_n}+b$, is as large as possible.
As this value tends toward $\infty$, the term in the $\exp[]$ goes to
zero.  Thus, such points will not contribute to the step.  However, if
the current prediction is small, then the $\exp[]$ term will be
positive and non-zero.  This means that the bias term $b$ will be
\emph{increased}, which is exactly what you would want.  Moreover,
once all points are very well classified, the derivative goes to zero.

\thinkaboutit{This considered the case of positive examples.  What
  happens with negative examples?}

Now that we have done the easy case, let's do the gradient with
respect to $\vw$.
%
\begin{align}
\grad_\vw \cL
&= \grad_\vw \sum_n \exp\big[-y_n (\dotp{\vec w}{\vx_n}+b)\big] 
 + \grad_\vw\frac \la 2 \norm{\vw}^2 \\
&= \sum_n \left( \grad_\vw -y_n (\dotp{\vec w}{\vx_n}+b) \right) \exp\big[-y_n (\dotp{\vec w}{\vx_n}+b)\big] 
 + \la \vw \\
&= -\sum_n y_n \vx_n \exp\big[-y_n (\dotp{\vec w}{\vx_n}+b)\big] 
 + \la \vw
\end{align}
%
Now you can repeat the previous exercise.  The update is of the form
$\vw \leftarrow \vw - \eta \grad_\vw \cL$.  For well classified points
(ones for which $y_n (\dotp{\vw}{\vx_n}+b)$ is large and positive),
the gradient is near zero.
For poorly classified points, the gradient points in the direction
$-y_n\vx_n$, so the update is of the form $\vw \leftarrow \vw + c
y_n\vx_n$, where $c$ is some constant.  This is just like the
perceptron update!  Note that $c$ is large for very poorly classified
points and small for relatively well classified points.

By looking at the part of the gradient related to the regularizer, the
update says: $\vw \leftarrow \vw - \eta\la \vw = (1-\eta\la) \vw$.
This has the effect of \emph{shrinking} the weights toward zero.  This
is exactly what we expect the regularizer to be doing!
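Putting the two gradients together gives a complete, if naive, trainer
for exponential loss with a $2$-norm regularizer.  This Python sketch
uses a tiny, linearly separable data set and hyperparameter values of
our own choosing:

```python
import math

# Gradient descent for exponential loss + (lambda/2)||w||^2, using the
# derivatives computed above; data is a list of (x, y) with y in {-1,+1}.
def train(data, lam=0.1, eta=0.1, iters=500):
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(iters):
        gw = [lam * wd for wd in w]    # regularizer part: lambda * w
        gb = 0.0
        for x, y in data:
            margin = y * (sum(wd * xd for wd, xd in zip(w, x)) + b)
            e = math.exp(-margin)      # exp[-y (w.x + b)]
            gb -= y * e                # dL/db = -sum_n y_n exp[...]
            for d in range(dim):
                gw[d] -= y * x[d] * e  # grad_w loss part: -y_n x_n exp[...]
        w = [wd - eta * gd for wd, gd in zip(w, gw)]
        b -= eta * gb
    return w, b

data = [([1.0, 1.0], +1), ([2.0, 0.5], +1),
        ([-1.0, -1.0], -1), ([-0.5, -2.0], -1)]
w, b = train(data)
# every training point ends up on the correct side of the hyperplane
assert all(y * (sum(wd * xd for wd, xd in zip(w, x)) + b) > 0
           for x, y in data)
```

As the text predicts, well classified points contribute almost nothing
to the step (their $\exp[\cdot]$ terms vanish), so later iterations
mostly just shrink the weights.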

\Figure{loss:step}{good and bad step sizes}

The success of gradient descent hinges on appropriate choices for the
step size.  Figure~\ref{fig:loss:step} shows what can happen with
gradient descent with poorly chosen step sizes.  If the step size is
too big, you can accidentally step over the optimum and end up
oscillating.  If the step size is too small, it will take way too long
to get to the optimum.  For a well-chosen step size, you can show that
gradient descent will approach the optimal value at a fast
\emph{rate}.  The notion of convergence here is that the
\emph{objective value} converges to the true minimum.
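You can see both failure modes on the one-dimensional quadratic
$F(z) = z^2$, whose gradient is $2z$: each step multiplies $z$ by
$(1 - 2\eta)$, so convergence depends entirely on the step size (the
numeric thresholds below are our own illustrative choices):

```python
# Step-size behavior on F(z) = z^2 (gradient 2z): each gradient step
# multiplies z by (1 - 2*eta).
def run(eta, steps=50, z=1.0):
    for _ in range(steps):
        z = z - eta * 2 * z
    return abs(z)

assert run(0.4) < 1e-10    # well-chosen step: rapid convergence
assert run(0.009) > 0.3    # too small: barely moves in 50 steps
assert run(1.2) > 1e6      # too big: oscillates and diverges
```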

\begin{theorem}[Gradient Descent Convergence] \label{thm:loss:gd}
  Under suitable conditions\sidenote{Specifically the function to be
    optimized needs to be \concept{strongly convex}.  This is true for
    all our problems, provided $\la>0$.  For $\la=0$ the rate could be
    as bad as $\cO(1/\sqrt{k})$.}, for an appropriately chosen
  constant step size (i.e., $\eta_1 = \eta_2, \dots = \eta$), the
  \concept{convergence rate} of gradient descent is $\cO(1/k)$.  More
  specifically, letting $\vz^*$ be the global minimum of $\cF$, we
  have: $\cF(\vz\kth) - \cF(\vz^*) \leq \frac {2
    \norm{\vz\zth-\vz^*}^2} {\eta k}$.
\end{theorem}

\thinkaboutit{A naive reading of this theorem seems to say that you
  should choose huge values of $\eta$.  It should be obvious that this
  cannot be right.  What is missing?}

The proof of this theorem is a bit complicated because it makes heavy
use of some linear algebra.  The key is to set the learning rate to
$1/L$, where $L$ is the maximum \concept{curvature} of the function
that is being optimized.  The curvature is simply the ``size'' of the
second derivative.  Functions with high curvature have gradients that
change quickly, which means that you need to take small steps to avoid
overstepping the optimum.
% mention 1/k^2 speed limit?

\begin{comment}
Fortunately, we can prove a simpler version of this theorem for
one-dimensional functions.  We do this just to give a sense of how
convergence proofs go.  In the single dimensional case, we will call
the algorithm ``derivative descent.''  To set up some notation, let
$f(z)$ be the function to be minimized by derivative descent.  Let
$z_0$ be the initial value and $z^*$ be the optimum.  Let $f'(z)$ be
the first derivative of $f$ and let $f''$ be the second derivative.
Let $L$ be large enough that $f''(z) < L$ for all $z$.

\begin{theorem}[Derivative Descent Convergence] \label{thm:loss:dd}
  For constant $\eta = 1/L$, derivative descent will converge at a
  rate of $f(z\kth) - f(z^*) \leq \frac {2 (z\zth-z^*)^2} {\eta(k+1)}$.
\end{theorem}

\begin{myproof}{\ref{thm:loss:dd}}
  We use Taylor's theorem to expand $f(z)$:
  \begin{align}
    f(z)
      &= f(a)
       + f'(a) (z-a)
       + \frac 1 2 f''(a) (z-a)^2
       + \dots \\
      &\leq f(a)
       + f'(a) (z-a)
       + \frac 1 2 f''(a) (z-a)^2
  \end{align}
  The ``\dots'' is guaranteed to be non-negative because $f$ is
  convex.  
\end{myproof}
\end{comment}

This convergence result suggests a simple approach to deciding when to
stop optimizing: wait until the objective function stops changing by
much.  An alternative is to wait until the \emph{parameters} stop
changing by much.  A final example is to do what you did for
perceptron: early stopping.  Every iteration, you can check the
performance of the current model on some held-out data, and stop
optimizing when performance plateaus.

\section{From Gradients to Subgradients}

As a good exercise, you should try deriving gradient descent update
rules for the different loss functions and different regularizers
you've learned about.  However, if you do this, you might notice that
\emph{hinge loss} and the $1$-norm regularizer are not differentiable
everywhere!  In particular, the $1$-norm is not differentiable around
$w_d=0$, and the hinge loss is not differentiable around $y\hat y=1$.

The solution to this is to use \concept{subgradient} optimization.
One way to think about subgradients is to not think about them too
hard: you essentially ignore the fact that your function is not
differentiable everywhere, and apply gradient descent anyway.

To be more concrete, consider the hinge function $f(z) =
\max\{0,1-z\}$.  This
function is differentiable for $z>1$ and differentiable for $z<1$, but
not differentiable at $z=1$.  You can derive this by differentiating
each piece separately:
%
\begin{align}
  \partialby{z} f(z)
  &= \partialby{z} \brack{ 0 & \txtif z > 1 \\ 1-z & \txtif z < 1} \\
  &= \brack{ \partialby{z} 0 & \txtif z > 1 \\ \partialby{z} (1-z) & \txtif z < 1}\\
  &= \brack{ 0 & \txtif z > 1 \\ -1 & \txtif z < 1}
\end{align}
%
\Figure{loss:sub}{hinge loss with sub}
%
Thus, the derivative is zero for $z>1$ and $-1$ for $z<1$, matching
intuition from the figure.  At the non-differentiable point, $z=1$, we
can use a \concept{subderivative}: a generalization of derivatives to
non-differentiable functions.  Intuitively, you can think of the
derivative of $f$ at $z$ as the tangent line.  Namely, it is the line
that touches $f$ at $z$ that is always below $f$ (for convex
functions).  The subderivative, denoted $\subgrad f$, is the
\emph{set} of all such lines.  At differentiable positions, this set
consists just of the actual derivative.  At non-differentiable
positions, this contains all slopes that define lines that always lie
under the function and make contact at the operating point.  This is
shown pictorally in Figure~\ref{fig:loss:sub}, where example
subderivatives are shown for the hinge loss function.  In the
particular case of hinge loss, any value between $-1$ and $0$ is a
valid subderivative at $z=1$.  In fact, the subderivative is always a
closed set of the form $[a,b]$, where $a$ and $b$ can be derived by
looking at limits from the left and right.

This gives you a way of computing derivative-like things for
non-differentiable functions.  Take hinge loss as an example.  For a
given example $n$, the subgradient of hinge loss can be computed as:
%
\begin{align}
&\subgrad_\vw \max \{ 0, 1 - y_n ( \dotp{\vw}{\vx_n}+b ) \} \\
&= \subgrad_\vw \brack{ 0 & \txtif y_n ( \dotp{\vw}{\vx_n}+b ) > 1 \\
                        1 - y_n ( \dotp{\vw}{\vx_n}+b )  & \text{otherwise}}\\
&= \brack{ \subgrad_\vw 0 & \txtif y_n ( \dotp{\vw}{\vx_n}+b ) > 1 \\
           \subgrad_\vw 1 - y_n ( \dotp{\vw}{\vx_n}+b )  & \text{otherwise}}\\
&= \brack{ \vec 0 & \txtif y_n ( \dotp{\vw}{\vx_n}+b ) > 1 \\
           - y_n \vx_n  & \text{otherwise}}
\end{align}
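The case analysis above translates directly into code; here is a minimal NumPy sketch (the function name is mine, not from the text):

```python
import numpy as np

def hinge_subgradient(w, b, x, y):
    """One valid subgradient of max(0, 1 - y(w.x + b)) w.r.t. (w, b).

    Mirrors the case analysis above; at the kink we return the
    (-y x, -y) branch, which is one of many valid choices."""
    if y * (np.dot(w, x) + b) > 1:    # beyond the margin: loss is flat zero
        return np.zeros_like(w), 0.0
    return -y * x, -y                 # margin violated (or exactly met)

# a point safely beyond the margin contributes nothing
gw, gb = hinge_subgradient(np.array([2.0, 0.0]), 0.0, np.array([1.0, 1.0]), +1)
```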

\newalgorithm%
  {loss:gdhinge}%
  {\FUN{HingeRegularizedGD}(\VAR{$\mat D$}, \VAR{$\la$}, \VAR{MaxIter})}
  {
\SETST{$\vw$}{$\langle \CON{0}, \CON{0}, \dots \CON{0} \rangle$
  \quad,\quad
  \VAR{$b$} $\leftarrow$ \CON{0}}
  \COMMENT{initialize weights and bias}
\FOR{\VAR{iter} = \CON{1} \dots \VAR{MaxIter}}
\SETST{$\vg$}{$\langle \CON{0}, \CON{0}, \dots \CON{0} \rangle$
  \quad,\quad
  \VAR{$g$} $\leftarrow$ \CON{0}}
  \COMMENT{initialize gradient of weights and bias}
\FORALL{(\VAR{$\vx$},\VAR{$y$}) $\in$ \VAR{$\mat D$}}
\IF{\VAR{$y$}$\left( \dotp{\VARm{\vw}}{\VARm{\vx}} + \VARm{b} \right) \leq \CON{1}$}
\SETST{$\vg$}{\VAR{$\vg$} + \VAR{$y$} \VAR{$\vx$}}
  \COMMENT{update weight gradient}
\SETST{$g$}{\VAR{$g$} + \VAR{$y$}}
  \COMMENT{update bias derivative}
\ENDIF
\ENDFOR
\SETST{$\vg$}{$\VARm{\vg} - \VARm{\la} \VARm{\vw}$}
  \COMMENT{add in regularization term}
\SETST{$\vw$}{$\VARm{\vw} + \VARm{\eta} \VARm{\vg}$}
  \COMMENT{update weights}
\SETST{$b$}{$\VARm{b} + \VARm{\eta} \VARm{g}$}
  \COMMENT{update bias}
\ENDFOR
\RETURN \VAR{$\vw$}, \VAR{$b$}
}

If you plug this subgradient form into Algorithm~\ref{alg:loss:gd},
you obtain Algorithm~\ref{alg:loss:gdhinge}.  This is the
\concept{subgradient descent} for regularized hinge loss (with a
$2$-norm regularizer).
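A direct NumPy transcription of Algorithm~\ref{alg:loss:gdhinge} might look as follows; the fixed step size $\eta$ is my own choice, since the pseudocode leaves the schedule unspecified:

```python
import numpy as np

def hinge_regularized_gd(X, Y, lam, max_iter, eta=0.1):
    """Subgradient descent for hinge loss with a 2-norm regularizer.
    X is N x D; Y holds labels in {-1, +1}."""
    N, D = X.shape
    w, b = np.zeros(D), 0.0                  # initialize weights and bias
    for _ in range(max_iter):
        g, gb = np.zeros(D), 0.0             # negative (sub)gradient accumulators
        for x, y in zip(X, Y):
            if y * (np.dot(w, x) + b) <= 1:  # margin violated
                g += y * x                   # update weight gradient
                gb += y                      # update bias derivative
        g -= lam * w                         # add in regularization term
        w += eta * g                         # update weights
        b += eta * gb                        # update bias
    return w, b
```

On linearly separable data this quickly finds a separator; in practice one would decay $\eta$ over iterations rather than hold it fixed.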


\section{Closed-form Optimization for Squared Loss}
\label{sec:loss:reg}

\begin{mathreview}{Matrix multiplication and inversion}
  If $\mat A$ and $\mat B$ are matrices, and $\mat A$ is $N \times K$ and $\mat B$ is $K \times M$ (the inner dimensions must match), then the matrix product $\mat A \mat B$ is a matrix $\mat C$ that is $N \times M$, with $\mat C_{n,m} = \sum_k \mat A_{n,k} \mat B_{k,m}$.
If $\vec v$ is a vector in $\R^D$, we will treat it as a \emph{column vector}, or a matrix of size $D \times 1$.
Thus, $\mat A \vec v$ is well defined if $\mat A$ is $M \times D$, and the resulting product is a vector $\vec u$ with $u_m = \sum_d \mat A_{m,d} v_d$.
  ~\\~\\
  Aside from the matrix product, a fundamental matrix operation is inversion. We will often encounter a form like $\mat A \vec x = \vec y$, where $\mat A$ and $\vec y$ are known and we want to solve for $\vec x$.
  If $\mat A$ is square of size $N \times N$, then the inverse of $\mat A$, denoted $\mat A\inv$, is also a square matrix of size $N \times N$, such that $\mat A \mat A\inv = \eye_N = \mat A\inv \mat A$. I.e., multiplying a matrix by its inverse (on either side) gives back the identity matrix.
  Using this, we can solve $\mat A \vec x = \vec y$ by multiplying both sides by $\mat A\inv$ on the left (recall that order matters in matrix multiplication), yielding $\mat A\inv \mat A \vec x = \mat A\inv \vec y$ from which we can conclude $\vec x = \mat A\inv\vec y$.
  Note that not all square matrices are invertible. For instance, the all-zeros matrix does not have an inverse (in the same way that $1/0$ is not defined for scalars), and it is not alone: square matrices that do not have inverses are called \concept{singular}.
\end{mathreview}
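As a small numeric illustration of the review above (in NumPy; note that in code one typically calls a linear-system solver rather than forming $\mat A\inv$ explicitly):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # an invertible 2x2 matrix
y = np.array([3.0, 5.0])

# solve A x = y; equivalent to x = np.linalg.inv(A) @ y but more stable
x = np.linalg.solve(A, y)
```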

Although gradient descent is a good, generic optimization algorithm,
there are cases when you can do better.  An example is the case of a
$2$-norm regularizer and squared error loss function.  For this, you
can actually obtain a \emph{closed form} solution for the optimal
weights.  However, to obtain this, you need to rewrite the
optimization problem in terms of matrix operations.  For simplicity,
we will only consider the \emph{unbiased} version; the extension to
include a bias term is Exercise~\ref{}.  This is precisely the
\concept{linear regression} setting.

You can think of the training data as a large matrix $\mat X$ of size
$N \times D$, where $X_{n,d}$ is the value of the $d$th feature on the
$n$th example.  You can think of the labels as a column (``tall'')
vector $\vec Y$ of dimension $N$.  Finally, you can think of the
weights as a column vector $\vec w$ of size $D$.  Thus, the
matrix-vector product $\vec a = \mat X \vec w$ has dimension $N$.  In
particular:
%
\begin{equation}
a_n
= \left[ \mat X \vec w \right]_n 
= \sum_d \mat X_{n,d} w_d
\end{equation}
%
This means, in particular, that $\vec a$ is exactly the vector of
predictions of the model.  Instead of calling this $\vec a$, we will call it
$\hat{\vec Y}$.  The squared error says that we should minimize $\frac 1
2 \sum_n (\hat Y_n - Y_n)^2$, which can be written in vector form as a
minimization of $\frac 1 2 \norm{\hat{\vec Y} - \vec Y}^2$.

\thinkaboutit{Verify that the squared error can actually be written as
  this vector norm.}

This can be expanded visually as:
%
\begin{equation}
\underbrace{
\textcolor{darkblue}{
\left[
\begin{array}{cccc}
  x_{1,1} & x_{1,2} & \dots & x_{1,D} \\
  x_{2,1} & x_{2,2} & \dots & x_{2,D} \\
  \vdots & \vdots & \ddots & \vdots \\
  x_{N,1} & x_{N,2} & \dots & x_{N,D}
\end{array}
\right]
}}_{\textcolor{darkblue}{\mat X}}
\underbrace{
\textcolor{darkergreen}{
\left[
\begin{array}{c}
  w_{1} \\
  w_{2} \\ 
  \vdots \\
  w_{D}
\end{array}
\right]
}}_{\textcolor{darkergreen}{\vec w}}
=
\underbrace{
\left[
\begin{array}{c}
  \sum_d \textcolor{darkblue}{x_{1,d}} \textcolor{darkergreen}{w_d} \\
  \sum_d \textcolor{darkblue}{x_{2,d}} \textcolor{darkergreen}{w_d} \\
  \vdots \\
  \sum_d \textcolor{darkblue}{x_{N,d}} \textcolor{darkergreen}{w_d}
\end{array}
\right]
}_{\vec{\hat Y}}
\approx
\underbrace{
\textcolor{darkred}{
\left[
\begin{array}{c}
  y_{1} \\
  y_{2} \\ 
  \vdots \\
  y_{N}
\end{array}
\right]
}}_{\textcolor{darkred}{\vec{Y}}}
\end{equation}
%
So, compactly, our optimization problem can be written as:
%
\optimizeuc{loss:squarederror}{\vw}{%
  \cL(\vw) =
  \frac 1 2 \norm{\mat X \vw - \vec Y}^2 + \frac \la 2 \norm{\vw}^2}
%
If you recall from calculus, you can minimize a function by setting
its derivative to zero.  We start with the weights $\vw$ and take
gradients:
%
\begin{align}
\grad_\vw \cL(\vw)
&= \mat X \T \left( \mat X \vw - \vec Y \right) + \la \vw \\
&= \mat X \T \mat X \vw - \mat X \T \vec Y + \la \vw \\
&= \left( \mat X \T \mat X + \la \eye \right) \vw - \mat X \T \vec Y
\end{align}
%
We can equate this to zero and solve, yielding:
%
\begin{align}
     & \textcolor{darkblue}{\left( \mat X \T \mat X + \la \eye \right)} \textcolor{darkred}{\vw} - \textcolor{darkergreen}{\mat X \T \vec Y} = 0\\
\myiff~~ & \textcolor{darkblue}{\left( \mat X \T \mat X + \la \eye_D \right)} \textcolor{darkred}{\vw} = \textcolor{darkergreen}{\mat X \T \vec Y} \\
\myiff~~ & \textcolor{darkred}{\vw} = \textcolor{darkblue}{\left( \mat X \T \mat X + \la \eye_D \right)}\inv \textcolor{darkergreen}{\mat X \T \vec Y}
\end{align}
%
Thus, the \emph{optimal} solution of the weights can be computed by a
few matrix multiplications and a matrix inversion.
%
As a sanity check, you can make sure that the dimensions match.  The
matrix $\mat X\T\mat X$ has dimension $D\times D$, and therefore so
does the inverse term.  The inverse is $D\times D$ and $\mat X\T$ is
$D\times N$, so that product is $D\times N$.  Multiplying through by
the $N\times 1$ vector $\vec Y$ yields a $D \times 1$ vector, which is
precisely what we want for the weights.
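This solution is easy to check numerically; a sketch (using np.linalg.solve on the linear system rather than an explicit inverse) verifying that the gradient vanishes at the returned weights:

```python
import numpy as np

def ridge_closed_form(X, Y, lam):
    """w = (X^T X + lam I)^{-1} X^T Y, the minimizer derived above."""
    D = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
Y = rng.standard_normal(50)
w = ridge_closed_form(X, Y, lam=0.5)

# gradient of (1/2)||Xw - Y||^2 + (lam/2)||w||^2 at the solution
grad = X.T @ (X @ w - Y) + 0.5 * w
```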

\thinkaboutit{For those who are keen on linear algebra, you might be
  worried that the matrix you must invert might not be invertible.  Is
  this actually a problem?}

Note that this gives an \emph{exact solution}, modulo numerical
inaccuracies with computing matrix inverses.  In contrast, gradient
descent will give you progressively better solutions and will
``eventually'' converge to the optimum at a rate of $1/k$, where $k$
is the number of iterations.  This means that if you want an answer
that's within an accuracy of $\ep = 10^{-4}$, you will need something
on the order of ten thousand steps.

The question is whether getting this exact solution is always more
efficient.  Running gradient descent for one step takes $\cO(ND)$
time, with a relatively small constant.  You will have to run $K$
iterations, yielding an overall runtime of $\cO(KND)$.  On the other
hand, the closed form solution requires constructing $\mat X\T\mat X$,
which takes $\cO(D^2 N)$ time.  The inversion takes $\cO(D^3)$ time
using standard matrix inversion routines.  The final multiplications
take $\cO(ND)$ time.  Thus, the overall runtime is on the order
$\cO(D^3 + D^2 N)$.  In most standard cases (though this is becoming
less true over time), $N > D$, so this is dominated by $\cO(D^2 N)$.

Thus, the overall question is whether you will need to run more than
$D$-many iterations of gradient descent.  If so, then the matrix
inversion will be (roughly) faster.  Otherwise, gradient descent will
be (roughly) faster.  For low- and medium-dimensional problems (say,
$D \leq 100$), it is probably faster to do the closed form solution
via matrix inversion.  For high dimensional problems ($D \geq
10,000$), it is probably faster to do gradient descent.  For things in
the middle, it's hard to say for sure.

\section{Support Vector Machines} \label{sec:loss:svm}

At the beginning of this chapter, you may have looked at the convex
surrogate loss functions and asked yourself: where did these come
from?!  They are all derived from different underlying principles,
which essentially correspond to different inductive biases.  

\Figure{loss:geom}{picture of data points with three hyperplanes,
  RGB with G the best}

Let's start by thinking back to the original goal of linear
classifiers: to find a hyperplane that separates the positive training
examples from the negative ones.  Figure~\ref{fig:loss:geom} shows
some data and three potential hyperplanes: red, green and blue.  Which
one do you like best?

Most likely you chose the green hyperplane.  And most likely you chose
it because it was furthest away from the closest training points.  In
other words, it had a large \concept{margin}.  The desire for
hyperplanes with large margins is a perfect example of an inductive
bias.  The data does not tell us which of the three hyperplanes is
best: we have to choose one using some other source of information.

Following this line of thinking leads us to the \concept{support
  vector machine} (SVM).  This is simply a way of setting up an
optimization problem that attempts to find a separating hyperplane
with as large a margin as possible.  It is written as a
\concept{constrained optimization problem}:
%
\optimize{loss:svmhard}{\vw,b}{\frac 1 {\ga(\vw,b)}}{%
  y_n \left( \dotp{\vw}{\vx_n} + b \right) \geq 1& (\forall n)
%\nonumber &
%  \norm{\vw} \leq 1
}
%
In this optimization, you are trying to find parameters that maximize
the margin, denoted $\ga$, (i.e., minimize the reciprocal of the
margin) subject to the constraint that \emph{all} training examples
are correctly classified.

\begin{comment}
There are two aspects of this optimization problem that are not
immediately obvious.  The first is the constraint that the norm of the
weight vector is at most $1$.  This constraint is necessary in order
to avoid degenerate solutions.  In particular, suppose you solved the
optimization problem \emph{with} this constraint in place.  You found
some optimal solution $\vw^*,b^*$.  Now, consider what happens if you
were to remove the norm constraint.  You could then look at
$2\vw^*,2b$.  The constraints would still be satisfied, but you would
have \emph{doubled} the size of the margin!  You could do this again,
eventually getting an infinitely large margin with $\infty\vw^*,\infty
b$ as parameters.  The constraint on the norm of $\vw$ is to ensure
that this doesn't happen.

\thinkaboutit{You can also think about the constraint on $\vw$ in
  terms of the fact that it's not $\vw$ we care about, but rather the
  hyperplane it defines.  Why does this suggest that constraining the
  norm of $\vw$ is reasonable?}
\end{comment}

\Figure{loss:margin}{hyperplane with margins on sides}

The ``odd'' thing about this optimization problem is that we require
the classification of each point to be greater than \emph{one} rather
than simply greater than \emph{zero}.  However, the problem doesn't
fundamentally change if you replace the ``1'' with any other positive
constant (see Exercise~\ref{}).  As shown in
Figure~\ref{fig:loss:margin}, the constant one can be interpreted
visually as ensuring that there is a non-trivial margin between the
positive points and negative points.

The difficulty with the optimization problem in
Eq~\eqref{opt:loss:svmhard} is what happens with data that is not
linearly separable.  In that case, there \emph{is no} set of
parameters $\vw,b$ that can simultaneously satisfy all the
constraints.  In optimization terms, you would say that the
\concept{feasible region} is \emph{empty}.  (The feasible region is
simply the set of all parameters that satisfy the constraints.)  For
this reason, this is referred to as the \concept{hard-margin SVM},
because enforcing the margin is a hard constraint.  The question is:
how can we modify this optimization problem so that it can handle
data that is not linearly separable?

\Figure{loss:slack}{one bad point with slack}

The key idea is the use of \concept{slack parameters}.  The intuition
behind slack parameters is the following.  Suppose we find a set of
parameters $\vw,b$ that do a really good job on $9999$ data points.
The points are perfectly classified and you achieve a large margin.
But there's one pesky data point left that cannot be put on the proper
side of the margin: perhaps it is noisy.  (See
Figure~\ref{fig:loss:slack}.)  You want to be able to pretend that you
can ``move'' that point across the hyperplane on to the proper side.
You will have to pay a little bit to do so, but as long as you aren't
moving a \emph{lot} of points around, it should be a good idea to do
this.  In this picture, the amount that you move the point is denoted
$\xi$ (xi).

By introducing one slack parameter for each training example, and
penalizing yourself for having to use slack, you can create an
objective function like the following, \concept{soft-margin SVM}:
%
\optimize{loss:svm}{\vw,b,\vec\xi}{%
  \underbrace{\frac 1 {\ga(\vw,b)}}_{\text{large margin}}
+ \underbrace{C \sum_n \xi_n}_{\text{small slack}}
}{%
  y_n \left( \dotp{\vw}{\vx_n} + b \right) \geq 1 - \xi_n & (\forall n) \\
\nonumber &
  \xi_n \geq 0 & (\forall n)
%\nonumber &
%  \norm{\vw} \leq 1
}
%
The goal of this objective function is to ensure that all points are
correctly classified (the first constraint).  But if a point $n$
cannot be correctly classified, then you can set the slack $\xi_n$ to
something greater than zero to ``move'' it in the correct direction.
However, for all non-zero slacks, you have to pay in the objective
function proportional to the amount of slack.  The hyperparameter
$C>0$ controls overfitting versus underfitting.  The second constraint
simply says that you must not have negative slack.

\thinkaboutit{What values of $C$ will lead to overfitting?  What
  values will lead to underfitting?}

One major advantage of the soft-margin SVM over the original
hard-margin SVM is that the feasible region is \emph{never empty}.
That is, there is always going to be some solution, regardless of
whether your training data is linearly separable or not.

\thinkaboutit{Suppose I give you a data set.  Without even looking at
  the data, construct for me a feasible solution to the soft-margin
  SVM.  What is the value of the objective for this solution?}

It's one thing to write down an optimization problem.  It's another
thing to try to solve it.  There are a very large number of ways to
optimize SVMs, essentially because they are such a popular learning
model.  Here, we will talk just about one, very simple way.  More
complex methods will be discussed later in this book once you have a
bit more background.

To make progress, you need to be able to measure the size of the
margin.  Suppose someone gives you parameters $\vw,b$ that optimize
the hard-margin SVM.  We wish to measure the size of the margin.  The
first observation is that the hyperplane will lie \emph{exactly}
halfway between the nearest positive point and nearest negative point.
If not, the margin could be made bigger by simply sliding it one way
or the other by adjusting the bias $b$.

\Figure{loss:marginsize}{copy of figure from p5 of cs544 svm tutorial}

By this observation, there is some positive example that lies
exactly $1$ unit from the hyperplane.  Call it $\vx^+$, so that
$\dotp{\vw}{\vx^+}+b = 1$.  Similarly, there is some negative example,
$\vx^-$, that lies exactly on the other side of the margin: for which
$\dotp{\vw}{\vx^-}+b = -1$.  These two points, $\vx^+$ and $\vx^-$ give
us a way to measure the size of the margin.  As shown in
Figure~\ref{fig:loss:marginsize}, we can measure the size of the margin by
looking at the difference between the lengths of projections of
$\vx^+$ and $\vx^-$ onto the hyperplane.  Since projection requires a
normalized vector, we can measure the distances as:
%
\begin{align}
d^+ &= \frac 1 {\norm{\vw}} \left( \dotp{\vw}{\vx^+} + b \right) \\
d^- &= - \frac 1 {\norm{\vw}} \left( \dotp{\vw}{\vx^-} + b \right)
\end{align}
%
Both of these distances are positive: the first is the distance from
$\vx^+$ to the hyperplane, the second the distance from $\vx^-$.  The
margin is half the width of the band between them, which we can
compute by algebra:
%
\begin{align}
\ga
&= \frac 1 2 \left[ \textcolor{darkblue}{d^+} 
                  + \textcolor{darkred}{d^-} \right] \\
&= \frac 1 2
   \left[
   \textcolor{darkblue}{\frac 1 {\norm{\vw}} \left( \dotp{\vw}{\vx^+} + b \right)}
 - \textcolor{darkred}{\frac 1 {\norm{\vw}} \left( \dotp{\vw}{\vx^-} + b \right)}
   \right]
\\
&= \frac 1 2 \left[
   \textcolor{darkblue}{\frac 1 {\norm{\vw}} (+1) }
 - \textcolor{darkred}{\frac 1 {\norm{\vw}} (-1) }
 \right]
\\
&= \frac 1 {\norm{\vw}}
\end{align}
%
This is a remarkable conclusion: the size of the margin is inversely
proportional to the norm of the weight vector.  Thus, {\bf maximizing
  the margin is equivalent to minimizing $\norm{\vw}$!}  This serves
as an additional justification of the $2$-norm regularizer: having
small weights means having large margins!
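A quick numeric sanity check of this conclusion, with hand-picked values: for a point sitting exactly on the positive margin ($\dotp{\vw}{\vx}+b = 1$), its distance to the hyperplane comes out to exactly $1/\norm{\vw}$:

```python
import numpy as np

w = np.array([3.0, 4.0])      # ||w|| = 5
b = -1.0
x_plus = (2.0 / 25.0) * w     # chosen so that w.x + b = 2 - 1 = 1

# distance from x_plus to the hyperplane {x : w.x + b = 0}
dist = abs(np.dot(w, x_plus) + b) / np.linalg.norm(w)
```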

However, our goal wasn't to justify the regularizer: it was to
understand hinge loss.  So let us go back to the soft-margin SVM and
plug in our new knowledge about margins:
%
\optimize{loss:svms}{\vw,b,\vec\xi}{%
  \underbrace{\frac 1 2 \norm{\vw}^2}_{\text{large margin}}
+ \underbrace{C \sum_n \xi_n}_{\text{small slack}}
}{%
  y_n \left( \dotp{\vw}{\vx_n} + b \right) \geq 1 - \xi_n & (\forall n) \\
\nonumber &
  \xi_n \geq 0 & (\forall n)
}
%
Now, let's play a thought experiment.  Suppose someone handed you a
solution to this optimization problem that consisted of weights
($\vw$) and a bias ($b$), but they forgot to give you the slacks.
Could you recover the slacks from the information you have?

In fact, the answer is yes!  For simplicity, let's consider positive
examples.  Suppose that you look at some positive example $\vx_n$.
You need to figure out what the slack, $\xi_n$, would have been.
There are two cases.  Either $\dotp{\vw}{\vx_n}+b$ is at least $1$ or
it is not.  If it's large enough, then you want to set $\xi_n = 0$.
Why?  It cannot be less than zero by the second constraint.  Moreover,
if you set it greater than zero, you will ``pay'' unnecessarily in the
objective.  So in this case, $\xi_n=0$.  Next, suppose that
$\dotp{\vw}{\vx_n}+b = 0.2$, so it is not big enough.  In order to
satisfy the first constraint, you'll need to set $\xi_n \geq 0.8$.
But because of the objective, you'll not want to set it any larger
than necessary, so you'll set $\xi_n = 0.8$ exactly.

Following this argument through for both positive and negative points,
if someone gives you solutions for $\vw,b$, you can automatically
compute the optimal $\xi$ variables as:
%
\begin{equation}
  \xi_n = \brack{
    0 & \txtif y_n(\dotp{\vw}{\vx_n}+b) \geq 1 \\
    1 - y_n(\dotp{\vw}{\vx_n}+b) & \text{otherwise}}
\end{equation}
%
In other words, the optimal value for a slack variable is
\emph{exactly} the hinge loss on the corresponding example!  Thus, we
can write the SVM objective as an \emph{unconstrained} optimization
problem:
%
\optimizeuc{loss:svmuc}{\vw,b}{%
  \underbrace{\frac 1 2 \norm{\vw}^2}_{\text{large margin}}
+ \underbrace{C \sum_n \ell\xth{hin}(y_n,
  \dotp{\vw}{\vx_n}+b)}_{\text{small slack}}}
%
Scaling this objective through by $1/C$ (which does not change the
minimizer), we obtain exactly the regularized objective from
Eq~\eqref{opt:loss:reg} with hinge loss as the loss function, the
$2$-norm as the regularizer, and $\la = 1/C$.
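The slack-recovery rule above is a one-liner in code; a small sketch (the function name is mine) reproducing the $\xi_n = 0.8$ example from the text:

```python
import numpy as np

def optimal_slack(w, b, x, y):
    """Optimal slack for one example given (w, b): exactly the hinge loss."""
    return max(0.0, 1.0 - y * (np.dot(w, x) + b))

w, b = np.array([1.0, -1.0]), 0.5
# functional margin 0.2 (as in the text) forces slack 0.8
xi = optimal_slack(w, b, np.array([0.2, 0.5]), +1)
# a point beyond the margin needs no slack at all
xi_zero = optimal_slack(w, b, np.array([2.0, 0.0]), +1)
```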

%TODO: justify in term of one dimensional projections!

\section{Further Reading}

TODO further reading



\begin{comment}
   - Squared error for regression
   - Closed form
   - Gradient descent on error function
   - 0/1 loss and convex upper bounds for classification
   - Logistic loss, exponential loss, gradient descent
   - Hinge loss and subgradient descent
\end{comment}


%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "courseml"
%%% End: 
