\documentclass[10pt,twocolumn]{article}
\usepackage{fullpage}
\usepackage[top=1in,bottom=1in,left=1in,right=1in]{geometry}

\title{15-740 Project Proposal:\\ Branch Prediction }
\author{Bernardo Toninho, Ligia Nistor, Filipe Milit\~{a}o}
\date{}
\begin{document}

\maketitle

\section{Introduction}

%Earlier this year, the Journal of Instruction-Level Parallelism hosted a competition to 
%compare the performance of different branch prediction algorithms using a 
%common framework. Constrained only by a fixed storage budget, contestants were 
%invited to submit novel branch prediction algorithms which were then pitted 
%against each other to determine the champion. For our purposes, this competition 
%will be reframed as course project, using the same infrastructure that the 
%competition provides. However, the goal is not to determine a winner, but to 
%implement a wide variety of existing branch prediction algorithms, evaluate their 
%performance, provide analysis, identify strengths and weaknesses, and even 
%propose new algorithms. 
%Branch Prediction Championship: http://www.jilp.org/jwac-2/ 


%1. how many branch predictor
%2. testing framework? more? simulator
%3. constraints? only 65KB?
%4. papers? references?
%5. design considerations on branch predictors

%\textit{The Problem: What is the problem you are trying to solve? Define clearly.}


With the ever-increasing performance demands that applications place
on microprocessors, today's micro-architects must devise
increasingly sophisticated techniques to improve effective
throughput. One of these techniques is \emph{pipelining}, in
which the processing of an instruction is divided into a sequence of
independent stages. The essential stages can roughly be distinguished
as Fetch, Decode, Execute and Write-back (although today's processors
implement much finer-grained, or \emph{deeper}, pipelines). The
advantage of pipelining is that it allows the CPU to begin
processing the next instruction of a program before the current one
has finished (i.e. after an instruction has entered the decode stage,
the next instruction can enter the fetch stage). 

To fully exploit the
benefits of pipelining, a steady supply of instructions must therefore
be available. However, when a branch instruction enters the pipeline,
the next instruction in the program is no longer necessarily
determined by incrementing the program counter, and so the CPU would
need to \emph{stall} until the correct next instruction could be
determined. To address this issue, modern microprocessors
speculatively fetch and execute instructions by attempting to predict
the correct path through the program. This technique is commonly called
\emph{branch prediction}. A branch predictor attempts to swiftly predict the
next instruction in the control flow of a program and instructs the
fetch engine accordingly. When the branch is executed and the actual
next instruction can be determined, the prediction is verified as
correct or incorrect. Upon a branch misprediction, the pipeline must
flush the incorrect instructions so that the correct path can be executed,
which incurs overhead due to the work wasted on following the
wrong target.
As modern pipelines grow increasingly
deep, the penalty for a branch misprediction becomes critical, and
highly accurate predictors become paramount for overall system
performance and efficiency.

The branch prediction problem is further complicated by the different kinds of branches that exist: conditional, indirect, unconditional, call and return branches. We focus only on the first two kinds: conditional branches (which have one of two possible outcomes and are used to implement {\tt if} statements) and indirect branches (which have one of a fixed set of possible targets and can be used to implement {\tt switch} statements). The remaining types of branches have already been extensively studied and are generally easy to predict with modern techniques.

In this project we will explore the design space of modern branch
prediction algorithms, providing analyses of the several different
classes of branch predictors in use today (as well as several promising
proposals from the architecture community that have not yet been
implemented in commercial processors) and evaluating their performance
comparatively on several different benchmarks. The purpose is to
determine the strengths and
weaknesses of each class of algorithm (and of the many existing minor
variations) and the tradeoffs involved in their design, in a way that
can potentially inform the design of a better branch predictor 
(where ``better'' encompasses not only prediction
accuracy but also complexity and speed).

%\begin{itemize}
%\item list tradeoffs more clearly? (bullets, with what is gained/lost)
%\item needs to mention different kinds of branches and that we will only focus on conditional (maybe also indirect, since it is in the championship)
%\end{itemize}

\subsection{State of the art}

% Needs citations

Branch prediction technology, like all microprocessor technology,
has come a long way since its inception. Modern branch
predictors employ sophisticated techniques that aim to exploit the
predictability of branches in a program. Invariably, modern branch predictors
maintain history information about the branch behavior of a program and
extrapolate from this data the expected behavior of a given
branch. Since there are many ways of obtaining and maintaining this
history, and several ways of making guesses based on this
information, there is a wide range of proposed solutions. Some
predictors opt to keep \emph{global} history information \cite{yehpatt91,Yeh:1998:AIT:285930.286004}
(i.e. a history of branch outcomes over a large portion of a program),
while others maintain \emph{local} histories (the outcome history
of each individual branch instruction) \cite{Predictors93combiningbranch}. 
The former class of predictors exploits 
correlation between different branches in a program, 
while the latter exploits the internal correlation of each branch
\cite{Evers:1998:ACP:279361.279368}. 
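To make these two styles of history concrete, the following is a minimal Python sketch of a gshare-style global-history predictor, in the spirit of the combining-predictors work cited above; the history length, table size, and initial counter values are our own illustrative assumptions:

```python
HIST_BITS = 12          # length of the global history register (assumed)
TABLE_SIZE = 1 << HIST_BITS

# Global history register: recent branch outcomes packed into an integer.
ghr = 0
# Pattern history table of 2-bit saturating counters, initialized weakly taken.
pht = [2] * TABLE_SIZE

def predict(pc):
    # gshare: XOR the branch address with the global history to index the PHT.
    idx = (pc ^ ghr) & (TABLE_SIZE - 1)
    return pht[idx] >= 2  # counter value 2 or 3 means "predict taken"

def update(pc, taken):
    global ghr
    idx = (pc ^ ghr) & (TABLE_SIZE - 1)
    # Saturating 2-bit counter update.
    if taken:
        pht[idx] = min(pht[idx] + 1, 3)
    else:
        pht[idx] = max(pht[idx] - 1, 0)
    # Shift the new outcome into the global history.
    ghr = ((ghr << 1) | int(taken)) & (TABLE_SIZE - 1)
```

A purely local predictor would instead keep one small history register per branch (indexed by {\tt pc}) and use it, rather than the shared {\tt ghr}, to index the counter table.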

Given the branch history, the predictor must
use this information effectively. Again, the possibilities
abound. Many modern predictors maintain saturating counters \cite{yehpatt91}
(small state machines that determine whether a branch should be taken or
not), while others use more sophisticated techniques such as
(relatively simple) neural networks
\cite{Jimenez:2001:DBP:580550.876441}. 
Each choice has inherent
tradeoffs w.r.t. accuracy, efficiency and complexity, and poses
substantial questions: How much history should the predictor keep? How
many saturating counters (or neural networks)? How can counters be
indexed effectively without incurring too many conflicts and too much
interference? Many answers
to these questions exist (and the answer to one affects the others). 
Some predictors maintain relatively small histories; others, such as
the O-GEHL predictor, are
designed to take much longer histories into account \cite{Seznec:2005:AOH:1069807.1070003} while
maintaining a small memory footprint. A class of predictors called
\emph{hybrid predictors} combines techniques from different predictors
and uses a \emph{meta-predictor} to choose the best prediction
available. Others combine local and global predictors to
maximize accuracy. The design choices are many, and so are the
tradeoffs that these choices entail. There is no clear consensus on
what the ``best'' predictor is: performance varies between
different program profiles, and some predictors are exceedingly complex to
implement in a fast, efficient way. For this reason, most modern processors do not implement the best known predictors and usually rely on simpler versions that fit more easily into their design constraints.
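As an illustration of the neural approach mentioned above, here is a simplified Python sketch of a perceptron predictor; the table size and history length are our own assumptions, while the training threshold follows the heuristic commonly used for this predictor:

```python
HIST_LEN = 16            # history bits fed to each perceptron (assumed)
NUM_PERCEPTRONS = 256    # size of the perceptron table (assumed)
THRESHOLD = int(1.93 * HIST_LEN + 14)  # standard training threshold

# One weight vector (bias at index 0) per table entry.
weights = [[0] * (HIST_LEN + 1) for _ in range(NUM_PERCEPTRONS)]
# Global history as +1 (taken) / -1 (not taken) values.
history = [1] * HIST_LEN

def output(pc):
    w = weights[pc % NUM_PERCEPTRONS]
    # Bias weight plus the dot product of weights and history.
    return w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))

def predict(pc):
    return output(pc) >= 0  # non-negative output predicts "taken"

def train(pc, taken):
    y = output(pc)
    t = 1 if taken else -1
    w = weights[pc % NUM_PERCEPTRONS]
    # Train only on a misprediction or when the output magnitude is small.
    if (y >= 0) != taken or abs(y) <= THRESHOLD:
        w[0] += t
        for i in range(HIST_LEN):
            w[i + 1] += t * history[i]
    # Shift the new outcome into the history.
    history.pop()
    history.insert(0, t)
```

The key property is that training cost grows linearly with history length, which is what makes long histories affordable for this class of predictor.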

%\begin{itemize}
%\item final paragraph with the current ``best'' predictors?
%\item state of the art on implementation (what is used in modern processors), an idea of real implementation requirements and constraints
%\end{itemize}

\section{Project Statement}

Our project consists of an implementation-oriented approach to branch
prediction. We plan to implement a variety of representatives of
different classes of branch prediction algorithms and perform
comparative analyses of each, in order to better identify the
advantages and disadvantages of each approach (i.e. when and how
each predictor works best) and its inherent
tradeoffs (e.g. accuracy vs. logic complexity, or higher accuracy on some
special cases). We will perform our implementations and analyses
through simulation in a common framework.

We will cover simple branch predictors (as a baseline of
sorts), such as ``always taken''/``never taken''-style predictors and
saturating counter predictors in their global and local variants (with
different techniques to handle \emph{aliasing} conflicts), as
well as more sophisticated predictors based on perceptrons and
piecewise-linear learning (including variants for very long histories,
such as O-GEHL). Furthermore, we plan to consider hybrid predictors
that combine different global and/or local predictors (for instance,
combining a global perceptron predictor with a local loop predictor,
among other possibilities). The goal is to identify the key insights
that each predictor exploits successfully and to develop a prediction
technique that combines the strengths of different
predictors in an attempt to address their underlying weaknesses.
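The meta-predictor idea behind such hybrid designs can be sketched as a simple tournament scheme: a table of saturating ``chooser'' counters learns, per branch, which of two component predictors to trust. The sketch below is a generic illustration of the idea (the component interface and table size are our own assumptions, not a specific published design):

```python
class Tournament:
    """Chooses between two component predictors using a table of 2-bit
    saturating 'chooser' counters indexed by branch address."""

    def __init__(self, pred_a, pred_b, size=1024):
        self.a, self.b = pred_a, pred_b
        self.chooser = [2] * size  # counter >= 2 means "trust predictor a"
        self.size = size

    def predict(self, pc):
        use_a = self.chooser[pc % self.size] >= 2
        return self.a.predict(pc) if use_a else self.b.predict(pc)

    def update(self, pc, taken):
        a_correct = self.a.predict(pc) == taken
        b_correct = self.b.predict(pc) == taken
        i = pc % self.size
        # Move the chooser toward whichever component alone was right.
        if a_correct and not b_correct:
            self.chooser[i] = min(self.chooser[i] + 1, 3)
        elif b_correct and not a_correct:
            self.chooser[i] = max(self.chooser[i] - 1, 0)
        self.a.update(pc, taken)
        self.b.update(pc, taken)


class Static:
    """Trivial component predictor: always predicts a fixed outcome."""
    def __init__(self, outcome):
        self.outcome = outcome
    def predict(self, pc):
        return self.outcome
    def update(self, pc, taken):
        pass
```

Any pair of predictors exposing this {\tt predict}/{\tt update} interface (e.g. a global perceptron predictor and a local loop predictor) could be plugged into the same scheme.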



\section{Methodology}

%\textit{How will you test the hypothesis/ideas? Describe what simulator or model 
%you will use and what initial experiments you will do. 
%}
%\begin{itemize}
%\item BPC Simulator
%\item Potentially extend to account for different metrics
%\item Implement several modern predictors and compare using supplied traces (maybe also try the SPEC benchmarks?)
%\item Potentially implement customizations and variations and analyze.
%\end{itemize}

Our main procedure for collecting results will be to use the testing framework provided by the \textit{Championship Branch Prediction}~\cite{bpc}, which includes both a simulator and several different kinds of traces for specific workloads (multimedia, server, etc.).

However, we also intend to extend the metrics provided by the simulator to account for additional meaningful statistics (for instance, how quickly the branch predictor ``learns'' to predict a given branch). Thus, we plan to monitor the progress of prediction quality at a finer granularity, in order to test the impact of several different design variations and customizations (table sizes, initial values, etc.).

We also hope to identify and categorize several different kinds of branches (fixed-length loops, etc.) and gather data on how effective each branch predictor is at predicting each category, both to better justify a decision to combine several predictors (i.e. into a hybrid predictor) and to identify the core corner cases that lead to suboptimal prediction.
Ideally, we would also like to create new workloads that target specific branch categories, so that testing can be more accurately directed at exercising variations of a specific kind of branch. However, this depends on how easy it will be to create these workloads and adapt them to the format the simulator expects.
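The per-branch statistics we have in mind could be gathered with a trace-driven loop along the following lines (a sketch of our intended instrumentation, not the actual interface of the championship simulator):

```python
from collections import defaultdict

def evaluate(predictor, trace):
    """Run a predictor over a trace of (pc, taken) pairs and collect
    the overall misprediction rate plus per-branch counts."""
    mispredictions = 0
    per_branch = defaultdict(lambda: [0, 0])  # pc -> [mispredicts, executions]
    for pc, taken in trace:
        if predictor.predict(pc) != taken:
            mispredictions += 1
            per_branch[pc][0] += 1
        per_branch[pc][1] += 1
        predictor.update(pc, taken)
    rate = mispredictions / len(trace) if trace else 0.0
    return rate, dict(per_branch)
```

The per-branch counts are what would let us group branches into categories (loop branches, data-dependent branches, etc.) and compare predictors per category rather than only in aggregate.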

\section{Plan}

\subsection{Milestone I}
By milestone one, we plan to familiarize ourselves with the simulation
framework, so that we can potentially modify it as mentioned above (to include more performance metrics). 
%
We will implement, or adapt existing code for, several simple branch predictors, namely:
basic non-adaptive predictors (such as ``always taken''/``always not taken'');
a two-level branch predictor \cite{Yeh:1998:AIT:285930.286004};
O-GEHL \cite{Seznec:2005:AOH:1069807.1070003};
the perceptron predictor \cite{Jimenez:2001:DBP:580550.876441}; and
the piecewise-linear branch predictor \cite{Jimenez:2005:PLB:1069807.1070002}.
%
Finally, we intend to compare and analyze the performance results of running some preliminary benchmarks on these predictors.

\subsection{Milestone II }

For this milestone, we will implement several more sophisticated branch predictors, namely hybrid predictors (in several variations), and detect cases where the previously implemented branch predictors perform poorly. Another predictor that we will likely implement is one based on combining multiple partial matches \cite{Gao}.  

However, our main goal in this milestone is to try to find potentially new or relevant patterns in the behavior of conditional branches that can be exploited by a new or simpler branch prediction algorithm. This implies a substantial effort to categorize and analyze the dynamic behavior of branches in our benchmarks, as well as to create new test workloads that specifically target corner cases that might not be properly handled by existing predictors.
 
\subsection{Milestone III }

This milestone will focus on combining all of the preceding analysis work into, hopefully, a new branch prediction algorithm.
%
We then plan to perform an extensive analysis of how this new algorithm performs, making recommendations on how to improve it further and showing when it is an adequate choice for providing good predictions.

%The idea is to conceive a hybrid predictor that complements the worst
%cases of the previously considered ones, that we will implement and analyze.  
%We will try to shape up the idea of an entirely novel branch predictor.

\subsection{Final Report }
The final report will collect all our results from the previous
milestones, as well as some additional results on different hybrid
predictors and our potentially new predictor.
\begin{description}
\item[75\% Goal] Implement and analyze existing branch predictors, potentially combining them into a new \textit{hybrid} predictor.
\item[100\% Goal] Implement and analyze an entirely novel branch predictor.
\item[125\% Goal] Optimize the new branch predictor (based on an extensive effort to categorize branch behavior), and combine its several versions or different parameter values into a hybrid predictor.
\item[Moonshot Goal] To show that the performance of our novel branch predictor is better than that of existing branch predictors, and to write a publishable paper based on our findings.
\end{description}








\bibliography{proposal}
\bibliographystyle{abbrv}

\end{document}
