\chapter{Background}

\section*{Gaussian processes}

A Gaussian process is a stochastic process in which every finite subset of its values is jointly distributed as a (multivariate) Gaussian. This is a very general family of distributions over functions; for that reason, and because it has amenable mathematical properties, we use it in this project to model real-life data.
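The defining property can be sketched numerically: evaluate a covariance function on a finite grid of inputs and draw from the resulting multivariate Gaussian. The squared-exponential covariance below is an illustrative assumption, not the only possible choice.

```python
import numpy as np

# Illustrative covariance between two scalar inputs; the particular
# form (squared exponential) is an assumption for this sketch.
def cov(x1, x2, signal_var=1.0, inv_lengthscale=1.0):
    return signal_var * np.exp(-inv_lengthscale * (x1 - x2) ** 2)

# The GP's values at any finite set of inputs are jointly Gaussian,
# with this matrix as their covariance.
xs = np.linspace(0.0, 1.0, 5)
K = np.array([[cov(a, b) for b in xs] for a in xs])

rng = np.random.default_rng(0)
# One draw from the zero-mean prior; the small jitter keeps K
# numerically positive semi-definite.
sample = rng.multivariate_normal(np.zeros(len(xs)), K + 1e-9 * np.eye(len(xs)))
```

Repeating the draw gives different functions, all consistent with the same covariance structure.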

\section*{Regression}

Suppose we have a dataset $D$ consisting of $(x, y)$ pairs. Call $x \in X$ a datum's $x$-coordinate. In our work we let $X = \mathbb{R}^d$ for some $d$, so $x$ is $d$-dimensional, but $X$ may be much more general: for the method presented, all we need is a covariance function definable on $X$. Call $y \in Y$ the datum's $y$-coordinate; for standard regression techniques, we need $Y = \mathbb{R}^n$ for some $n$.

We then receive another dataset $D'$ which has only $x$-coordinates. The task is to predict the $y$-coordinates of these unlabelled data. A simple solution would predict a single mode or mean value of $y$ for each datum; a better one would return a probability distribution over all the values $y$ might take. Prediction is usually carried out for each $x \in D'$ independently.
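The distinction between point prediction and distribution-valued prediction can be sketched with Gaussian-process regression on a made-up one-dimensional dataset; the kernel and noise level here are illustrative assumptions, not the project's settings.

```python
import numpy as np

# Squared-exponential covariance between two vectors of inputs
# (an illustrative choice for this sketch).
def se_cov(a, b):
    return np.exp(-(a[:, None] - b[None, :]) ** 2)

xs = np.array([0.0, 1.0, 2.0])   # labelled x-coordinates (from D)
ys = np.sin(xs)                  # their y-coordinates
xt = np.array([0.5, 1.5])        # unlabelled x-coordinates (from D')

K = se_cov(xs, xs) + 1e-2 * np.eye(len(xs))   # training covariance + noise
Ks = se_cov(xt, xs)                            # test/train covariance
Kss = se_cov(xt, xt)                           # test covariance

# For each test point we obtain a full Gaussian, not just a value:
mean = Ks @ np.linalg.solve(K, ys)                   # predictive mean
var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))   # predictive variance
```

The predictive variance quantifies how much the prediction should be trusted, which a bare point estimate cannot do.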

\section*{Classification}

This is like regression, except that $Y$ is now a discrete, unstructured space, which we rename $C$. We suppose there are $|C|$ classes, and that each datum belongs to exactly one of these. The task is now to predict the class of unlabelled data, and this is the object of this thesis.

In regression, the predictive distribution may be approximated by a Gaussian, as in Gaussian process regression, which leads to some elegant mathematical results. Of course, a distribution over a discrete set $C$ cannot be Gaussian, so different methods must be used.

Our method is, for each class $c$, to suppose there is a latent function $f_c: X \rightarrow \mathbb{R}$, perform regression on that function, and compose the result with a sigmoid; this gives the probability of each class. This is easier said than done, since we do not in fact know the latent function at any point. We only have fuzzy notions, such as that $f_c$ should be relatively high near points belonging to $c$.
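The squashing step can be sketched as follows; the softmax used here generalises the sigmoid to several classes, and the latent values are made up for illustration.

```python
import numpy as np

def softmax(f):
    # Numerically stable softmax: subtracting the max leaves the
    # result unchanged but avoids overflow in exp.
    z = np.exp(f - f.max())
    return z / z.sum()

# Hypothetical latent values f_c(x) at a single test point x, one per
# class; in the real method these come from the fitted regressors.
latent = np.array([2.0, -1.0, 0.5])
probs = softmax(latent)   # probability of each class at x
```

The probabilities are positive and sum to one, and the class with the highest latent value receives the highest probability.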

Obviously $|C| \geq 2$. When $|C| = 2$ the problem is known as binary classification; otherwise we call it multiclass. The binary case is treated separately because specialised methods exist that solve it very efficiently. One is tempted, in multiclass problems, to cascade such a method, e.g. by nesting one-against-all classifiers. This can work to an extent, but it has problems: the results generally differ depending on which class we check first, and there is no natural order; the decision boundaries (see under Prediction) are often non-differentiable and hence difficult to model; and with data-subset methods (see below) we lose any notion of cross-class covariance.

\section*{Loss functions}

A loss function $L$ maps from $(X, C, C)$ to $\mathbb{R}^+ \cup \{0\}$. $L(x, c, c')$ denotes the loss of predicting the label $c$ for a point at $x$ when its actual label is $c'$; it measures how much worse our machine is than a hypothetical omniscient oracle, so $L(x, c, c') = 0$ if and only if $c = c'$. $L$ is said to be symmetric if $\forall x, c, c'\colon L(x,c,c') = L(x,c',c)$, and asymmetric otherwise. Symmetric loss functions are generally easier to deal with: the problem then simplifies to choosing which class has the highest probability of being correct, whereas with asymmetric loss functions we must also know by what margin. The simplest useful loss function is the indicator loss, where $L(x, c, c') = 0$ if $c = c'$ and $1$ otherwise.
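As a minimal sketch, the indicator loss can be written directly from its definition; classes are arbitrary comparable values, and the location $x$ happens to be unused by this particular loss.

```python
# Indicator loss: zero for a correct prediction, one otherwise.
# The argument x is part of the general signature L(x, c, c') but is
# ignored by this loss; an asymmetric loss could depend on it.
def indicator_loss(x, c, c_true):
    return 0.0 if c == c_true else 1.0
```

Note that it is symmetric: swapping the predicted and actual labels leaves the loss unchanged.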

\section*{Prediction}

Prediction is the objective of classification problems. The theory is nice, but ultimately we want to build a machine which can estimate something we don't already know.

In this project, a two-phase approach is used. In the first phase, we fit a Gaussian process to the latent functions of the data: specifically, we build an algorithm which can estimate the mean and variance of $f_c(x)$ for each $c$ at any test point $x$. In the second phase, we repeatedly sample from each of these Gaussians; the probability that $x$'s class is $c$ is then approximately the average of the softmax of these samples.
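The second phase can be sketched as a small Monte Carlo loop; the per-class latent means and variances here are hypothetical stand-ins for the output of the first phase.

```python
import numpy as np

def softmax(f):
    z = np.exp(f - f.max())   # stable softmax over one sample's latents
    return z / z.sum()

# Hypothetical first-phase output at one test point x: a latent mean
# and variance per class.
means = np.array([1.0, 0.0, -0.5])
variances = np.array([0.3, 0.2, 0.4])

rng = np.random.default_rng(1)
n_samples = 2000
# Draw from each latent Gaussian independently; each row is one joint
# sample of (f_1(x), ..., f_c(x)).
draws = rng.normal(means, np.sqrt(variances), size=(n_samples, len(means)))
# Average the softmax over draws to approximate the class probabilities.
probs = np.mean([softmax(f) for f in draws], axis=0)
```

More samples give a better approximation, at proportionally higher cost.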

With the classwise probabilities, it is easy to decide which class to predict, depending on our criteria. A common default choice is the prediction minimising the expected loss. If we have the indicator loss function, this simplifies to choosing the likeliest class.

A decision boundary is a subset of $X$ on which there are two or more optimal decisions; for example, with a symmetric loss function, one where the likeliest classes are equally probable. Note that this depends on the loss function used. If we know a dataset's decision boundary\footnote{The presented method does not explicitly compute a decision boundary, leaving it as a theoretical notion, though a useful one.}, the task reduces to finding which side of the boundary a given point lies on. Prior work has found efficient solutions for problems which may be accurately modelled as having linear decision boundaries; these solutions do extremely badly on datasets for which that model is inaccurate.

\section*{Hyperparameter Learning}

Assuming the latent function $f$ is reasonably well-behaved [NTS: check exactly what is required for this], we may describe it with a covariance function, a measure of how similar the function's values at two points are: if $cov(x, x')$ is large (close to the signal variance), we expect $f(x)$ and $f(x')$ to be similar; if $cov(x, x') \approx 0$, $f(x)$ and $f(x')$ may be very different. Given an arbitrary dataset, one cannot uniquely determine a covariance function describing it, as infinitely many are consistent with it; yet one must be known for all of these computations.

Instead, we assume that the covariance function belongs to a smaller, reasonable-looking family of functions, identified by a finite set of hyperparameters $\theta$. The task of finding $cov$ is then reduced to a search in a finite-dimensional space.

We want to find the most plausible hyperparameters for the dataset, i.e. maximise $p(\theta | D)$. By Bayes' Rule, if $D$ is fixed and we use a flat prior over $\theta$\footnote{Which is reasonable if we have no expert knowledge}, this is equivalent to maximising $p(D | \theta)$, the likelihood.

The log likelihood is differentiable with respect to $\theta$ and so can be maximised by off-the-shelf gradient-based optimisers; note, however, that it is not in general convex, so such optimisers may only find a local maximum. Since the logarithm is monotonic, we usually work with the log likelihood rather than the likelihood itself.
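The search can be sketched on a toy one-dimensional regression dataset: compute the Gaussian-process log marginal likelihood for each candidate $\theta$ and keep the best. A coarse grid search stands in here for an off-the-shelf optimiser, and the dataset, noise level, and grid are illustrative assumptions.

```python
import numpy as np

# Squared-exponential covariance matrix over a vector of inputs.
def se_cov(xs, signal_var, inv_lengthscale):
    d2 = (xs[:, None] - xs[None, :]) ** 2
    return signal_var * np.exp(-inv_lengthscale * d2)

# log p(y | theta) = -1/2 y^T K^-1 y - 1/2 log|K| - (n/2) log 2*pi,
# computed via a Cholesky factorisation for numerical stability.
def log_likelihood(y, K, noise_var=1e-2):
    Kn = K + noise_var * np.eye(len(y))
    L = np.linalg.cholesky(Kn)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()          # = 1/2 log|Kn|, negated
            - 0.5 * len(y) * np.log(2 * np.pi))

rng = np.random.default_rng(2)
xs = np.linspace(0.0, 3.0, 20)
y = np.sin(xs) + 0.1 * rng.standard_normal(len(xs))

# Candidate theta = (signal variance, inverse lengthscale).
grid = [(sv, il) for sv in (0.1, 1.0, 10.0) for il in (0.1, 1.0, 10.0)]
best = max(grid, key=lambda th: log_likelihood(y, se_cov(xs, *th)))
```

A gradient-based optimiser would refine this further, but, as noted above, may stop at a local maximum depending on its starting point.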

A common default covariance function (and the one used in this project) is the squared exponential function. In its isotropic form, it is:
\begin{align*}
cov(x, x') &= \theta _1 e^{-\theta _2 \left\lVert x - x' \right\rVert ^2}
\end{align*}
Thus $\theta \in \mathbb{R}^2$. Its first entry is the signal variance, which might be thought of as the magnitude of points' similarities; its second entry determines the length scale, which indicates how rapidly $f$ changes over distance. A large $\theta _2$ corresponds to a short length scale.
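The roles of the two hyperparameters can be seen directly by evaluating the covariance at a few distances; the specific values below are illustrative.

```python
import numpy as np

# Squared-exponential covariance: theta1 scales the covariance overall,
# while a larger theta2 makes it decay faster with distance (i.e. a
# shorter length scale).
def se_cov(x, x_prime, theta1, theta2):
    d2 = np.sum((np.asarray(x) - np.asarray(x_prime)) ** 2)
    return theta1 * np.exp(-theta2 * d2)

near = se_cov(0.0, 0.1, theta1=1.0, theta2=10.0)  # nearby points: high cov
far = se_cov(0.0, 1.0, theta1=1.0, theta2=10.0)   # distant points: low cov
```

At zero distance the covariance equals the signal variance $\theta_1$, and it falls towards zero as points move apart.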

\section*{Large dataset approximations}

Na\"ive Gaussian Process Classification can get good results, but with the drawback of requiring $O(cn^3)$ processing time (we assume $n >> c$). Unsurprisingly this can become intractable, and we are forced to use further approximations to keep processing time reasonable.

\section*{This thesis}

The basic problem of classification --- learning from $D$ a rule which can generalise --- has effectively been solved, in the sense that accurate (if slow) methods exist.

This thesis aspires to show how to do the above quickly, for multiclass problems with asymmetric loss functions and nontrivial decision boundaries, without any expert knowledge, and (as much as possible) without compromising accuracy.

