\documentclass[10pt,twocolumn]{article}
\usepackage{fullpage}
\usepackage{subfig}
\usepackage[top=1in,bottom=1in,left=1in,right=1in]{geometry}
\usepackage{graphicx}

\title{15-740 Project (Milestone 2):\\ Branch Prediction }
\author{Bernardo Toninho, Ligia Nistor, Filipe Milit\~{a}o}
\date{}
\begin{document}

\maketitle


This document consists of a progress report on the status of our
course project on branch prediction.

\section*{Report Context}

In our previous work on this project, we implemented several
global branch predictors using the CBP~\cite{bpc} simulation
infrastructure, and extended that infrastructure to
produce additional performance metrics for the branch
predictors under consideration.


For this component of the project, we focus on \textit{hybridizing
predictors} as a means to improve overall prediction rates. Typically,
a hybrid branch predictor combines two (or more)
branch predictors with different, complementary qualities
(e.g., a global and a local branch predictor). Besides implementing
the \textit{component predictors}, a hybrid predictor must also
implement a mechanism that decides, for any given branch, which
component predictor to use. This mechanism is frequently called the
\emph{meta-predictor}. While several hybrid predictors have been
proposed [cites], there does not seem to be much emphasis on the
mechanism used to choose between different predictors.

While empirical results show that hybrid predictors typically
perform better than their components, it is unclear whether one can do
better by modifying the meta-prediction policy to one more in
tune with the particulars of the component predictors. This is the
idea we begin to explore in this portion of the project, by measuring how close our results come to those of an \textit{optimal hybrid} predictor, in which the meta-predictor has the foresight to always pick the correct component predictor whenever it must choose between two different answers.
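This optimal bound can be computed offline from a per-branch correctness trace: with perfect foresight, the meta-predictor mispredicts only when \emph{both} components are wrong. A minimal C++ sketch (the names here are illustrative, not the simulator's API):

```cpp
#include <cstddef>
#include <vector>

// For each dynamic branch, record whether each component predicted it
// correctly (hypothetical trace format, for illustration only).
struct Outcome {
    bool comp0_correct;
    bool comp1_correct;
};

// An oracle meta-predictor always picks a correct component when one
// exists, so it only misses when both components are wrong.
std::size_t optimal_hybrid_misses(const std::vector<Outcome> &trace) {
    std::size_t misses = 0;
    for (const Outcome &o : trace)
        if (!o.comp0_correct && !o.comp1_correct)
            ++misses;
    return misses;
}
```

Comparing a real hybrid's misprediction count against this bound indicates how much room is left for a better meta-prediction policy.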

\section{Hybrid Predictors}


Typically, the
meta-predictor consists of a table, indexed by the program counter,
that stores 2-bit saturating counters, similar to those
used by the \emph{gshare} predictor. These counters decide which predictor to pick using a simple scheme: \verb=00= and
\verb=01= entail choosing the first component predictor, while \verb=10= and \verb=11=
result in selecting the other\footnote{This can be easily generalized to $N$
component predictors by using a collection of $N$ such counters.}. The update
stage of the meta-predictor simply keeps track of the accuracy of
each component's prediction and biases the counters
accordingly.
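This chooser can be sketched in a few lines of C++ (the table size, initial counter value, and tie-handling policy are our illustrative assumptions, not the CBP reference code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// PC-indexed meta-predictor (chooser) built from 2-bit saturating
// counters. Counter values 00/01 select component 0; 10/11 select
// component 1.
class Chooser {
public:
    explicit Chooser(std::size_t index_bits)
        : mask_((1u << index_bits) - 1),
          table_(1u << index_bits, 1) {}  // start weakly at component 0

    int select(uint32_t pc) const {
        return table_[pc & mask_] >= 2 ? 1 : 0;
    }

    // Bias the counter toward whichever component was correct; when
    // both were right or both wrong, the counter is left unchanged.
    void update(uint32_t pc, bool comp0_correct, bool comp1_correct) {
        uint8_t &c = table_[pc & mask_];
        if (comp1_correct && !comp0_correct && c < 3) ++c;
        if (comp0_correct && !comp1_correct && c > 0) --c;
    }

private:
    uint32_t mask_;
    std::vector<uint8_t> table_;
};
```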

The approach to meta-prediction described above is inherently \textit{local}:
for a given branch, it chooses the predictor that appears to have
predicted more accurately in the recent past. However, hybrid
predictors typically combine global-history-based predictors and local
predictors, so it is unclear whether this meta-prediction policy is
optimal. For instance, the prediction of a global history predictor
is, by construction, a function of past history. Typically, for
any given branch, there are different control-flow paths that reach
the branch, and the accuracy of a history-based
predictor relies on having previously observed a given control-flow
path. This key idea is not captured by the typical meta-predictor [cite], which
biases towards the history-based predictor by considering its most recent
decisions, independently of whether or not the
predictor produces good results for the particular control-flow
path.

\paragraph{A global meta-predictor}

Given the previous insight, we developed (what we believe to be) a new hybridization technique that, instead of simply mapping program
counters to 2-bit saturating counters, maps a hash (XOR) of the global branch
history vector and the program counter to a table of 2-bit saturating
counters (in a sense, we extend the indexing principle of \emph{gshare} to
the meta-level). The idea is that instead of performing
meta-predictions by associating
the prediction with just the program counter (which inherently exploits
only the correlation between multiple re-executions of the same
branch), we choose the component predictor as a function of both past
history and the program counter, therefore exploiting correlations
between multiple different branches.

Since hybrid predictors typically
use a global history predictor as (at least) one of their components,
the global branch history vector is already maintained, so our approach
incurs no additional memory overhead. In terms of
computational complexity, XOR-ing the history vector with the
program counter is the only newly required operation (unless one of the component predictors already computes it, in which case it can be reused).
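Concretely, the proposed meta-index and the accompanying history update can be sketched as follows (the 14-bit table width is an illustrative assumption; the history update itself is exactly the gshare rule):

```cpp
#include <cstdint>

// Illustrative meta-table width; any power-of-two size works.
constexpr unsigned kMetaBits = 14;
constexpr uint32_t kMetaMask = (1u << kMetaBits) - 1;

// Index the table of 2-bit chooser counters by XOR-ing the global
// branch history vector with the program counter (the gshare indexing
// principle, lifted to the meta-level).
uint32_t meta_index(uint32_t pc, uint32_t global_history) {
    return (pc ^ global_history) & kMetaMask;
}

// The global history vector is maintained as usual: shift in each
// resolved branch outcome (1 = taken, 0 = not taken).
uint32_t update_history(uint32_t history, bool taken) {
    return ((history << 1) | (taken ? 1u : 0u)) & kMetaMask;
}
```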

\subsection{Newly Implemented Predictors}


To evaluate our proposal we implemented our new hybridization
technique by selecting XXX and YYY as the component
predictors.\footnote{potentially more than one (X,Y) combination}
To perform comparisons with other existing hybrid predictors, we
implemented hybrids of XX and YY using the typical meta-prediction
table indexed by the program counter of the considered branch.


List of new (hybrid) predictors and what they combine:
\begin{enumerate}
\item hybrid...
\end{enumerate}

\subsection{Baseline Predictors}

To analyze the performance of our hybrid predictors we use, as a baseline, some of the most relevant predictors from the previous milestone (although some had to be fixed or changed, as will be described in the next subsection).

The next list briefly introduces each of the predictors:

\begin{enumerate}
\item gshare ...
\item perceptron
\item local

\end{enumerate}

\section{Evaluation}

To obtain a more precise analysis, we extended the simulation
infrastructure with branch classification techniques that allow us to
more accurately profile the considered benchmarks: one of the
classification techniques classifies the branches into groups
depending on their taken/not-taken rates; another classifies the
branches into groups depending on their change frequency. The idea for
these metrics is to allow us to identify the key characteristics of
traces in which our proposal does better than the traditional
meta-prediction technique.
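A minimal sketch of how the simulator accumulates these per-branch statistics and assigns the first (taken-rate) classification; the bucket boundaries are illustrative assumptions, not the thresholds used in our implementation:

```cpp
#include <cstdint>
#include <string>

// Per-static-branch counters accumulated during simulation.
struct BranchStats {
    uint64_t executed = 0;
    uint64_t taken = 0;
    uint64_t changes = 0;     // outcome differed from the previous execution
    bool last_taken = false;

    void record(bool outcome) {
        if (executed > 0 && outcome != last_taken) ++changes;
        ++executed;
        if (outcome) ++taken;
        last_taken = outcome;
    }
};

// Classify a branch by its taken rate (illustrative buckets).
std::string taken_class(const BranchStats &s) {
    if (s.executed == 0) return "never-executed";
    double rate = double(s.taken) / double(s.executed);
    if (rate < 0.05) return "almost-never-taken";
    if (rate > 0.95) return "almost-always-taken";
    return "sometimes-taken";
}
```

The change-frequency classification works analogously, bucketing branches by \verb=changes/executed= instead of the taken rate.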

...

Extensions to the simulator output:

\begin{itemize}
\item branch classes, obtained by tracking the number of times each branch is executed, how many times it was taken, and how often its outcome changes;
\item branch 
\end{itemize}

...

Given the size of our simulator output, we switched our plotting method from Microsoft Excel to Matlab in order to cope with the very large data sets we extract from the execution of our predictors and their manipulation. Unfortunately, since plotting is now an automated process, the resulting graphics are sometimes not the easiest to read...

\subsection{Problems, Bugs and Unexpected Tricks}

As described in the previous milestone report, we found that the direct conversion of the code for the \textit{perceptron}, \textit{O-GEHL} and \textit{piecewise} predictors does not yield the performance advantages that the literature describes.

\begin{itemize}
\item the perceptron (and related) predictors lose accuracy when multiple branches are fetched before their updates occur;
\item a possibly ``cheating'' gshare? it always updates the history with the correct branch outcome at the fetch stage!
\end{itemize}

Based on the ... we decided not to add more test sets and instead to focus on filtering the benchmarks we already have.


\subsection{Preliminary Results}

blablabla awesome sauce

classification problem, matlab analysis data sizes, complexity...

simulator changes, scripts / batchers / filters etc.

\section{Progress Discussion}

what was done right / wrong ; needs to change ; how we plan to fix; talk to TA about the ``cheats'' ; papers to read and plan ahead...

\section{Update of Milestone 3 Goals}

Based on our current results, for the next milestone we plan to focus on the following somewhat open-ended goals:

\begin{enumerate}

\item \textbf{improve meta-predictor (hybrids)}: we have found that our hybrids, on certain benchmarks, consistently make the wrong choice. Our goal will be to devise a more adaptive meta-predictor that detects such cases and changes its behavior accordingly;

\item \textbf{filter results based on relevant branch classes}: although our results on \textit{wrong choices} for hybrids already filter out irrelevant choices in the hybrid's behavior, we also aim to categorize more comprehensively the behavior of all predictors on each class of branch, so that we can understand how to improve performance for the relevant classes (such as ``sometimes taken'').

\end{enumerate}

Similarly, we also have some specific goals:

\begin{itemize}
\item implement bi-mode predictor? and others?
\item fix sizes / costs;
\item better insight into hybrid decisions?
\end{itemize}

\bibliography{milestone2}
\bibliographystyle{abbrv}

\end{document}
