\documentclass[10pt,twocolumn]{article}
\usepackage{fullpage}
\usepackage{subfig}
\usepackage[top=1in,bottom=1in,left=1in,right=1in]{geometry}
\usepackage{graphicx}

\title{15-740 Project (Milestone 1):\\ Branch Prediction }
\author{Bernardo Toninho, Ligia Nistor, Filipe Milit\~{a}o}
\date{}
\begin{document}

\maketitle


This document consists of a progress report on the status of our
course project on branch prediction.

\section{Introduction}

As detailed in the project proposal, a great variety of
techniques are employed in modern branch predictors. Up to this
point, we have begun to explore this design space by considering a
two-level branch predictor \cite{Evers:1998:ACP:279358.279368},
the O-GEHL predictor \cite{Seznec:2005:AOH:1069807.1070003},
the perceptron predictor \cite{Jimenez:2001:DBP:580550.876441} and
the piecewise linear branch predictor~\cite{Jimenez:2005:PLB:1069807.1070002}.
We now briefly summarize the key concepts behind these predictors.

\paragraph{Two-level Branch Predictor}

The idea behind the two-level branch predictor is to keep track of the
history of the last $n$ branches and maintain a saturating counter for
each of the possible $2^n$ history patterns. A saturating counter is a
four-state state machine, in which two states predict not taking the
branch and two states predict taking it (the counter is initialized at
$0$, incremented when the branch is taken and decremented when it is
not -- values $0$ and $1$ yield a not-taken prediction, while $2$ and
$3$ yield a taken prediction). The idea is to exploit correlation
between branches that are close to one another in the control flow of
a program, and use that correlation as a means of prediction.
The implementation of this predictor hashes the branch history vector
with the low-order bits of the program counter of the branch currently
being predicted to index into the prediction table containing the
saturating counters.
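To make the scheme concrete, the following sketch implements a simplified gshare-style two-level predictor. The class and parameter names are our own and the sizes are illustrative; this is not the simulator's implementation.

```python
# Sketch of a two-level (gshare-style) predictor with 2-bit saturating
# counters. Sizes and names are illustrative, not the simulator's.
class TwoLevelPredictor:
    def __init__(self, history_bits=12):
        self.mask = (1 << history_bits) - 1
        self.history = 0                         # global history register
        self.table = [0] * (1 << history_bits)   # 2-bit counters, start at 0

    def _index(self, pc):
        # XOR-hash the history with the low-order PC bits
        return (self.history ^ pc) & self.mask

    def predict(self, pc):
        # counters 0,1 -> predict not taken; 2,3 -> predict taken
        return self.table[self._index(pc)] >= 2

    def update(self, pc, taken):
        i = self._index(pc)
        # saturating increment on taken, decrement on not taken
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)
        # shift the outcome into the history register
        self.history = ((self.history << 1) | int(taken)) & self.mask
```

Repeatedly updating the predictor with the same outcome drives the indexed counter to one of its saturated states, after which a single opposite outcome no longer flips the prediction.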

\paragraph{O-GEHL Branch Predictor}

The O-GEHL branch predictor is a variant of the perceptron predictor
that is specially designed to exploit very long global histories
(100-200 history bit vector length), and hence capture correlations
between larger portions of the control flow graph of a program. The
predictor maintains a collection of tables, the first indexed by the
branch address, the remaining indexed using a varying-length prefix
of the global history (following a geometric series). Each table
stores predictions as signed counters. Computing a prediction consists
of indexing into each table to obtain the value of a counter per
table, and then computing the sign of the sum of all counters (taken
if positive, not taken if negative). Counters are updated upon wrong
predictions or when the magnitude of the prediction sum is below a
given threshold (this is the same update rule used in the perceptron
predictor), incrementing each counter when a branch is taken and
decrementing it otherwise.
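A heavily simplified sketch of this prediction and update logic follows. The table sizes, hash function and history lengths are our own choices for illustration, not Seznec's actual configuration.

```python
# Heavily simplified O-GEHL-style sketch: table 0 is indexed by the branch
# address, the others by hashes of geometrically growing history prefixes.
# Table sizes, the hash and the lengths below are our own choices.
class OGEHL:
    def __init__(self, num_tables=4, log_size=8, lengths=(0, 2, 4, 8), theta=4):
        self.size = 1 << log_size
        self.tables = [[0] * self.size for _ in range(num_tables)]
        self.lengths = lengths      # history prefix length per table
        self.theta = theta          # perceptron-style update threshold
        self.history = []           # global history as a list of 0/1 bits

    def _indices(self, pc):
        idx = [pc % self.size]
        for length in self.lengths[1:]:
            prefix = self.history[-length:] if length else []
            h = pc
            for bit in prefix:
                h = ((h << 1) ^ bit) & 0xFFFFFFFF   # fold the prefix into the hash
            idx.append(h % self.size)
        return idx

    def predict(self, pc):
        s = sum(t[i] for t, i in zip(self.tables, self._indices(pc)))
        return s >= 0   # taken iff the sum of signed counters is non-negative

    def update(self, pc, taken):
        idx = self._indices(pc)
        s = sum(t[i] for t, i in zip(self.tables, idx))
        # update on a misprediction, or while |sum| is within the threshold
        if (s >= 0) != taken or abs(s) <= self.theta:
            for t, i in zip(self.tables, idx):
                t[i] += 1 if taken else -1
        self.history.append(1 if taken else 0)
```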

\paragraph{Perceptron Branch Predictor}

The perceptron branch predictor learns correlations between branch
outcomes in the global branch history and the behavior of the
considered branch, by varying weights in a simple neural network (the
larger the weight, the stronger the correlation). The predictor maintains
a table of $n$ perceptrons. Indexing into this table is determined by
a hash of the branch address, and fetching a perceptron entails
fetching a vector of weights. Predictions are established by computing
the dot product of the vector of weights and the global history
register -- a not-taken prediction is produced when the outcome is
negative, and taken otherwise. The weights are updated similarly to
the O-GEHL predictor mentioned above. Perceptron predictors can use
larger history lengths than two-level predictors, since the indexing
is not made using the history vector. The potential disadvantage is
that a perceptron can only learn so-called linearly separable
functions, which does not cover the full correlation space of branches.
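The prediction and training rules can be sketched as follows; the table size and indexing are our own simplifications, while the training threshold $\lfloor 1.93h + 14 \rfloor$ is the one suggested by Jim\'enez and Lin.

```python
# Sketch of a perceptron predictor (after Jimenez & Lin); table size and
# indexing are our own simplifications.
class PerceptronPredictor:
    def __init__(self, num_perceptrons=64, history_len=8):
        self.h = history_len
        self.n = num_perceptrons
        # weights[i][0] is the bias weight of perceptron i
        self.weights = [[0] * (history_len + 1) for _ in range(num_perceptrons)]
        self.history = [1] * history_len    # +1 = taken, -1 = not taken
        # training threshold suggested in the paper: floor(1.93 h + 14)
        self.theta = int(1.93 * history_len + 14)

    def _output(self, pc):
        w = self.weights[pc % self.n]
        # dot product of the weights and the global history (bias input = 1)
        return w[0] + sum(wi * xi for wi, xi in zip(w[1:], self.history))

    def predict(self, pc):
        return self._output(pc) >= 0

    def update(self, pc, taken):
        y = self._output(pc)
        t = 1 if taken else -1
        # train on a misprediction, or while |output| is within the threshold
        if (y >= 0) != taken or abs(y) <= self.theta:
            w = self.weights[pc % self.n]
            w[0] += t
            for j in range(self.h):
                w[j + 1] += t * self.history[j]
        self.history = self.history[1:] + [t]
```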

\paragraph{Piecewise Linear Branch Predictor}

The piecewise linear branch predictor is an extension of the
perceptron predictor that can learn linearly inseparable
functions. Informally, this predictor generalizes the previous one by
learning using correlation data from ``all paths'' in a program to a
particular branch (instead of maintaining just a table of weights, it
maintains a three-dimensional array). The prediction engine is
somewhat complex, and idealized in the sense that it is perhaps too
sophisticated to be amenable to an actual hardware implementation.
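A toy sketch of the idealized formulation follows. The (tiny) table sizes are ours; the predictor selects a weight by the current branch address, the address of each past branch, and its history position, and reuses the perceptron-style training rule.

```python
# Idealized piecewise linear sketch following Jimenez's formulation,
# with our own (tiny) table sizes.
class PiecewiseLinear:
    def __init__(self, n=16, m=16, h=8):
        self.n, self.m, self.h = n, m, h
        # W[current branch][past branch address][history position]
        self.W = [[[0] * (h + 1) for _ in range(m)] for _ in range(n)]
        self.ghist = [1] * h       # +1 = taken, -1 = not taken
        self.ga = [0] * h          # addresses of the last h branches
        self.theta = int(2.14 * (h + 1) + 20.58)   # threshold from the paper

    def _output(self, pc):
        b = pc % self.n
        out = self.W[b][pc % self.m][0]            # bias weight
        for i in range(self.h):
            out += self.W[b][self.ga[i] % self.m][i + 1] * self.ghist[i]
        return out

    def predict(self, pc):
        return self._output(pc) >= 0

    def update(self, pc, taken):
        y = self._output(pc)
        t = 1 if taken else -1
        if (y >= 0) != taken or abs(y) <= self.theta:
            b = pc % self.n
            self.W[b][pc % self.m][0] += t
            for i in range(self.h):
                self.W[b][self.ga[i] % self.m][i + 1] += t * self.ghist[i]
        self.ghist = self.ghist[1:] + [t]
        self.ga = self.ga[1:] + [pc]
```

Because the weights are also keyed by the addresses of past branches, different paths into the same branch train different weights, which is what lets this predictor handle linearly inseparable behaviors.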


\section{Project Status}

We have implemented (in most cases the code is publicly available from
the authors' homepages) the branch predictors mentioned in the previous
section, as well as a few simple predictors as baseline cases (``always
taken'', ``always not taken'', ``last taken'' and random predictors),
in the simulation infrastructure provided by the Branch Prediction
Challenge workshop \cite{bpc}.

The simulator provides quantitative measurements on
the accuracy of each predictor by recording correct and incorrect
predictions and assigning a misprediction penalty for each branch,
which is measured by the number of cycles that the fetch engine was
following the wrong path. The simulator returns two final scores (one
for conditional branches and another for indirect branches), using
what is called a Misprediction Penalty per Kilo Instructions
(MPPKI) metric, which we mostly ignore for our purposes. 
To be able to produce deeper analyses of each predictor, we
have modified the simulation infrastructure to extract more detailed
performance metrics about the execution and overall progress of each
predictor throughout each of the benchmark runs. The implemented
extensions are as follows:
\begin{itemize}
\item Evolution of branch misses per number of re-executions\\
For each branch, we track how the percentage of branch
misses evolves as a function of the number of times the branch was
executed. Ideally, given enough executions, the quality of the
prediction for a branch should improve. Naturally, the prediction
accuracy for the first execution of a branch is expected to be
no better than a random prediction. The goal is to analyze and develop
prediction techniques that stabilize on the correct answer after a
relatively low number of re-executions.
\item Number of re-executions for each branch\\
This metric is related to the previous one. Branches that are not
re-executed often will generally have worse results than those for
which predictors can track more history, and so this metric can act as
a sort of filter for those branches. Furthermore, our aim is that by
determining the average number of re-executions for each branch we can
determine the ``minimum threshold'' that each predictor requires to
learn the behavior of branches.
\item Prediction vs.\ correct result per branch\\
The idea for this metric is that it may reveal interesting patterns in
terms of the bias of each predictor vs.\ the correct result. For
instance, certain branches are almost never taken (or almost always
taken), and it might be interesting to determine the behavior of
predictors in these extremal cases.
\end{itemize}
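The bookkeeping behind these three metrics can be sketched as follows. The names here are illustrative, since the actual counters live in our modified simulation infrastructure.

```python
from collections import defaultdict

# Sketch of the per-branch bookkeeping behind the three metrics above.
# Names are illustrative; the real counters live in the modified
# simulation infrastructure.
class BranchMetrics:
    def __init__(self):
        self.exec_count = defaultdict(int)              # re-executions per branch
        self.miss_by_nth = defaultdict(lambda: [0, 0])  # nth execution -> [misses, total]
        self.outcome = defaultdict(lambda: [0, 0])      # branch -> [taken, total]

    def record(self, pc, predicted, actual):
        n = self.exec_count[pc]        # times this branch was seen before
        self.exec_count[pc] += 1
        stats = self.miss_by_nth[n]
        stats[1] += 1
        if predicted != actual:
            stats[0] += 1              # metric 1: misses vs. re-execution count
        bias = self.outcome[pc]
        bias[1] += 1
        if actual:
            bias[0] += 1               # metric 3: per-branch taken bias

    def miss_rate_at(self, n):
        miss, total = self.miss_by_nth[n]
        return miss / total if total else 0.0
```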

\section{Preliminary Results}

In this section we detail some of the preliminary simulation results
we have obtained regarding the following metrics:
\begin{enumerate}
\item Re-executions of branches per benchmark
\item Misprediction rate
\item Evolution of branch misses over time
\end{enumerate}
The first allows us to profile our benchmarks, given that they were
provided with the simulation infrastructure and we cannot effectively
inspect them. The misprediction rate gives us an absolute metric for
the quality of each predictor. The last allows us to determine how fast
a predictor converges on a good misprediction rate.

We collected data for our five benchmarks: a server
process, a client process, an integer arithmetic process, a multimedia
process and a web-server process. The benchmark infrastructure supplied
by the simulation framework splits each benchmark into several trace
files. Due to size constraints and for the sake of clarity of presentation, we will
present here data regarding only one file for the client, multimedia
and web-server benchmarks.

\subsection{Benchmark Profiling}

Our benchmark profile graphs show the distribution of branch
re-executions (number of executions on the x-axis, number of branches
with that execution count on the y-axis). As we can observe in
Fig.~\ref{fig:prof_mm}, the multimedia benchmark has a larger number of
branches that are executed a small number of times, as opposed to the
web-server (Fig.~\ref{fig:prof_ws}), which has a more even distribution
of branch re-executions. This is to be expected given the cyclic nature
of a web-server, although there is still a substantial number of
branches that are executed only a few times. The client benchmark
exhibits a similar distribution (Fig.~\ref{fig:prof_cl}).

\begin{figure*}[p]
\begin{center}
\includegraphics[height=0.3\textheight,width=0.79\textwidth]{../graphs/MM04_branch_distribution.png}
\caption{Multimedia Benchmark Profile}\label{fig:prof_mm}
\end{center}
\end{figure*}
\begin{figure*}[p]
\begin{center}
\includegraphics[height=0.3\textheight,width=0.79\textwidth]{../graphs/WS01_branch_distribution.png}
\caption{Webserver Benchmark Profile}\label{fig:prof_ws}
\end{center}
\end{figure*}
\begin{figure*}[p]
\begin{center}
\includegraphics[height=0.3\textheight,width=0.79\textwidth]{../graphs/client01_total.jpg}
\caption{Client Benchmark Profile}\label{fig:prof_cl}
\end{center}
\end{figure*}

\subsection{Misprediction Rates}

The misprediction rates that we obtained for the multimedia benchmark
(Fig.~\ref{fig:pred_mm}) reveal that the gshare predictor obtains the
best results, which is somewhat inconsistent with the results presented
in the O-GEHL, perceptron and piecewise linear predictor papers. We
believe this is due to faults in the implementation of these three
predictors (we obtained the code for these from the authors and adapted
it to the simulation infrastructure). The web-server benchmark
misprediction rates (Fig.~\ref{fig:pred_ws}) are identical for the
gshare predictor, but substantially worse for the other predictors,
despite the fact that the web-server benchmark re-executes the same
branches substantially more often. This result further strengthens our
belief that there must be a fault in the implementation of these
predictors, given that they are designed to exploit much longer
histories than the gshare predictor. For the client benchmark
(Fig.~\ref{fig:pred_cl}), gshare again does very well. As the
improvement in the last-taken predictor suggests, this must be because,
while the client and web-server benchmarks have similar re-execution
counts, branch directions repeat to a much higher degree in the client
benchmark. The improved prediction rates in this situation reveal that,
in general, the predictors perform better for branches that commonly
follow the same direction throughout their execution.

\begin{figure*}[p]
\begin{center}
\includegraphics[height=0.3\textheight,width=0.79\textwidth]{../graphs/MM04_bar.png}
\caption{Misprediction Rates - Multimedia}\label{fig:pred_mm}
\end{center}
\end{figure*}
\begin{figure*}[p]
\begin{center}
\includegraphics[height=0.3\textheight,width=0.79\textwidth]{../graphs/WS01_bar.png}
\caption{Misprediction Rates - Webserver}\label{fig:pred_ws}
\end{center}
\end{figure*}
\begin{figure*}[p]
\begin{center}
\includegraphics[height=0.3\textheight,width=0.79\textwidth]{../graphs/client01_bar.jpg}
\caption{Misprediction Rates - Client}\label{fig:pred_cl}
\end{center}
\end{figure*}

\subsection{Evolution of Prediction Rates}

The evolution of the misprediction rates is relatively similar across
the presented benchmarks (Fig.~\ref{fig:evol_mm}, \ref{fig:evol_ws}
and \ref{fig:evol_cl}). The results reveal that gshare is the
predictor whose misprediction rate fluctuates the least, with the
``last taken'' predictor exhibiting similar behavior. The piecewise
linear predictor appears to be the least fluctuating of the neural
network based predictors (again, we believe our implementations may be
faulty, but this result is expected given the higher sophistication of
the piecewise linear predictor compared to the other predictors).

The gshare predictor has a faster ``warm up'' period than the other
considered predictors (disregarding the baseline predictors). This is
to be expected, given that the learning of neural network based
predictors is much more sophisticated and requires more data to be
collected.


\begin{figure*}[p]
\begin{center}
\includegraphics[height=0.3\textheight,width=0.79\textwidth]{../graphs/MM04_evol.png}
\caption{Evolution of Misprediction Rates - Multimedia}\label{fig:evol_mm}
\end{center}
\end{figure*}
\begin{figure*}[p]
\begin{center}
\includegraphics[height=0.3\textheight,width=0.79\textwidth]{../graphs/WS01_prediction_evol.png}
\caption{Evolution of Misprediction Rates - Webserver}\label{fig:evol_ws}
\end{center}
\end{figure*}
\begin{figure*}[p]
\begin{center}
\includegraphics[height=0.3\textheight,width=0.79\textwidth]{../graphs/client01_squiggly.jpg}
\caption{Evolution of Misprediction Rates - Client}\label{fig:evol_cl}
\end{center}
\end{figure*}

\section{Future Directions}

We will proceed with the plan that we detailed in our project
proposal. Our next direction is to explore hybrid predictors, in
particular combining local and global predictors. We will also
consider more global predictors that, in general, outperform those
presented in this report. One of the challenges of this project is
devising metrics that enable us to extract relevant information for
the development of a new predictor. We believe we still need to
reconsider some of our choices, but that the data we collect at the
moment gives us some reasonable insights into the behavior of the
predictors we will consider.

\bibliography{milestone1}
\bibliographystyle{abbrv}

\end{document}
