
\input{template}
\usepackage{graphicx}
\usepackage{epstopdf}

\usepackage{rotating}
\usepackage{multirow}
\usepackage[table]{xcolor}
\usepackage{color}
\newcommand{\TODO}[1] {\textbf{\textcolor{red}{#1}}}

\title{Analyzing Branch Mispredictions
\\\small{15-740 Computer Architecture (Fall 2011) - Project Report}}
\author{\normalsize
\begin{tabular}[t]{ccc}
Bernardo Toninho                      & Ligia Nistor & Filipe Milit\~{a}o        \\
\texttt{btoninho@cs.cmu.edu}    & \texttt{lnistor@cs.cmu.edu} & \texttt{foliveir@cs.cmu.edu}        \\
\multicolumn{3}{c}{Carnegie Mellon University}
\end{tabular}
}
\date{}
\begin{document}

\maketitle

\begin{abstract}
Given the critical impact of branch prediction accuracy on performance
in modern superscalar and deeply pipelined architectures, it is of the
utmost importance to develop a more complete understanding of when and why
branch mispredictions occur. By providing a more detailed window into a predictor's behavior and evaluating how its different components interact, we aim to enable new solutions that improve upon the current state of the practice without prohibitively increasing costs. To this end, we
develop a novel analysis of existing state-of-the-practice
branch predictors that enables us to study the quality of the
correlations captured by these predictors, to determine whether these correlations are in fact interfering
with each other, and to understand why that may or may not be the
case (using branch classification and other techniques).
We further extend our analysis to determine how adequately
the employed prediction mechanisms model the program's runtime branch behavior, determining whether mispredictions could have been avoided by simply
changing the resolution of the prediction mechanism.

We deploy our extensive analysis on local and global variants of the gshare
predictor, and perform a brief branch classification analysis of
mispredictions in the TAGE predictor.
Finally, we also consider
the issue of misprediction cluster tracking, in which an inexpensive hybrid predictor
attempts to prevent the host predictor from mispredicting multiple times in succession.
\end{abstract}


\section{Introduction}

Branch prediction is a fundamental component in modern pipelined
micro-architectures. Instead of forcing a processor to stall when a
branch occurs in a program, the processor makes use of \textit{branch
prediction} to speculatively fetch instructions along the predicted
control flow path in the program. If the prediction is accurate, the
processor has produced useful work and successfully avoided stalling
the pipeline until the actual outcome of the branch can be
determined. If the prediction is not accurate, all instructions that
were introduced to the pipeline after the misprediction need to be
discarded and the correct architectural state of the machine needs to
be restored (this overhead is typically called the \emph{misprediction
penalty}). As pipeline depths increase, the
criticality of branch prediction increases: not only does it become
harder to provide a steady flow of useful instructions to the
pipeline, but also the misprediction penalty increases since the
number of cycles required to determine the outcome of a branch
increases substantially. 

Given the importance of branch prediction techniques, the area has
been the subject of extensive research over the last decades, focusing
on a wide range of techniques that aim to increase the accuracy and
effectiveness of branch prediction schemes. Many solutions have been proposed
throughout the years, exploring software-based approaches,
hardware-based approaches and combinations of both. The prediction
mechanisms themselves have evolved substantially: state-of-the-practice
predictors reach average prediction accuracies of around 95\%, while
(the more complex) state-of-the-art predictors typically achieve even higher ones. The difficulties of branch prediction
are manifold: predictors are typically implemented in hardware, so the
algorithmic complexity of the mechanism needs to be
simple, in order not to make the cost prohibitive.
Furthermore, predictions need to be determined in a very limited
number of cycles, which requires predictors to be very fast. Finally,
given that a reasonable percentage of all instructions in a program
are branches, minimizing useless work requires a considerably high prediction accuracy. To summarize: a
predictor needs to produce very accurate and fast predictions, while
using and storing substantially limited information. Thus, a key
challenge for improving branch prediction is to correctly understand what information and branch behavior a predictor can best
exploit, as well as the causes of
mispredictions, in order to develop insights into how to circumvent predictors' limitations and better utilize their strengths, facilitating the
development of better, cheaper and more accurate branch predictors.

Our project focuses on the analysis and study of conditional
branch prediction (conditional branches are typically the most common type of branch),
with the goal of adding to the existing body of understanding of the behavior
of branch predictors in general (as well as of
specific predictors). Specifically, we analyze mispredictions
and study the reasons why they actually occur (potentially identifying
solutions to some mispredictions). We
approach this problem on several fronts, which we summarize here.
We begin by considering the distribution of mispredictions over time
and identifying \emph{misprediction clusters} (sets of consecutive
mispredictions). The idea is to take advantage of the potential
predictability of these clusters and minimize their effects. 

In a more general setting, using branch classification techniques, 
we determine the behavior of
predictors on several classes of typical branch behavior (e.g. always
taken and never taken branches, loops, etc). The goal of this approach is to
diagnose types of branches in which certain predictors behave
poorly (or at least less accurately), potentially providing insight
into how to improve upon a specific predictor. 

As is typical in computer architecture research, the state of the art
is substantially more advanced and complex than the current state of
the practice. We wish to explore a reasonable middle ground by considering
state-of-the-practice branch predictors, which typically consist of
variants and combinations of two-level adaptive branch predictors
(such as gshare), and developing further insights into the behavior of
these specific predictors. Our approach on this subject consists of
decomposing the predictors into progressively more idealized forms
that minimize aliasing in the predictor tables, keeping track of
each predictor's \emph{correlation sets} (i.e. the sets of predictions
a predictor \textit{assumes} to be correlated). The goal of this analysis is
to determine how good the correlation sets of the baseline predictor
are when compared to the idealized versions (which allows us to
determine how much of the additional information used in the idealized versions
can potentially be exploited for meaningful gains), as well as to identify
key characteristics of accurate and inaccurate correlation sets
(e.g. whether they avoid mixing conflicting branch classes). Finally, using the correlation sets from the
previous analysis, we inspect the \emph{training stream} of a given
set, that is, the stream of data used to train the
prediction mechanism that makes predictions for that set. Although the impact of using a specific correlation set and the adequacy of its resulting training stream are unavoidably interdependent, the goal of this analysis is to
evaluate the ability of the prediction mechanism to model that
stream, as well as to determine the sensitivity of the prediction accuracy to
the warm-up stage of the prediction mechanism (i.e. how relevant the
initial values are to the overall accuracy of a set).

\paragraph{Document Structure}

In the next section we elaborate on relevant existing work, detailing
the predictors we consider in this project and some analysis
techniques that have been developed in the literature. We then
describe our analyses in detail,
presenting and discussing the results we have obtained with respect to
our misprediction cluster analysis (Section~\ref{sect:clustering}),
branch classification (Section~\ref{sect:classification}), correlation
set analysis (Section~\ref{sect:correlation}) and training stream analysis
(Section~\ref{sect:stream}). We conclude in Section~\ref{sect:conc}
with some final remarks.

\section{Related Work}\label{sect:related}



Branch prediction is a substantially rich area of research, not only
in the development of novel prediction techniques but also in
presenting analyses and insights into the behavior of existing
predictors. For the sake of brevity, we do not present an exhaustive
history of branch predictors, focusing mainly on the predictors that
(with variations) are implemented in modern processors, which are
those we analyze in greater detail, as well as the ``state of the
art'' branch predictors.

\begin{figure}[]
\centering
\subfloat[]{
\label{fig:sub_gshare}
\includegraphics[width=1.5in]{gshare}
}
\subfloat[]{
\label{fig:sub_local}
\includegraphics[width=1.5in]{local}
}
\caption{GShare and Local Branch Predictors}
\label{fig:gshare}
\end{figure}


Most modern microprocessors typically implement some form of two-level
branch predictor \cite{Yeh:1991:TAT:123465.123475,
  Yeh:1992:AIT:139669.139709}. This class of predictors typically
uses some form of branch direction history information as an index
into a prediction table, containing saturating counters (typically
$2$-bit). When a prediction is required, the current branch history is
used to index into the table, obtain the $2$-bit counter, and the most
significant bit is used as the prediction. The history information can
be maintained globally (which is typically the case), that is, a bit
vector with the outcome of the last $N$ executed branches in the processor's
instruction stream, or in a local manner, where a table of such history
vectors is maintained, one for each branch (a local branch prediction
scheme is given in Fig.~\ref{fig:sub_local}). The key idea of two-level branch
predictors is that they \emph{filter} the branch execution
information in a program, streaming into the training of a prediction
mechanism only the execution data the predictor deems
\emph{correlated} (global predictors exploit correlations between
branches, while local predictors exploit correlations between previous
executions of the same branch). The training itself, for the counter
mechanism, consists of incrementing the counter (using saturating
arithmetic) when a branch is \emph{determined}
taken and decrementing it when not taken. The counter acts as a state
machine, with half the states predicting the branch as taken and half as
not taken. The idea is that the predictor should follow the most
frequent recent branch direction -- the size of the counter
determines the sensitivity of the prediction to past outcomes.
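As an illustration, the counter-based prediction and training described above can be sketched as follows. This is a minimal model of a two-level global scheme; the table size, history length and initial counter values are our own assumptions, not the simulator's actual parameters.

```python
# Minimal sketch of a two-level global predictor with 2-bit counters
# (sizes and initial values are illustrative assumptions).

HIST_BITS = 12
TABLE_SIZE = 1 << HIST_BITS

table = [1] * TABLE_SIZE  # 2-bit saturating counters, "weakly not taken"
history = 0               # global history of the last HIST_BITS outcomes

def predict():
    # The most significant bit of the 2-bit counter gives the direction.
    return table[history] >= 2  # True = predict taken

def update(taken):
    # Train with saturating arithmetic, then shift the outcome into history.
    global history
    ctr = table[history]
    table[history] = min(3, ctr + 1) if taken else max(0, ctr - 1)
    history = ((history << 1) | int(taken)) & (TABLE_SIZE - 1)
```

After a warm-up period on, say, an always-taken stream, the indexed counters saturate at the "strongly taken" state and the predictor follows the branch.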

One of the key issues of two-level branch predictors is the phenomenon
known as \emph{aliasing} \cite{Sechrest:1996:CAD:232974.232978}, 
in which two distinct indices map to the
same entry in the prediction table. A substantial amount of research
has been conducted to study and minimize the aliasing problem,
although it has mostly been concerned with overall misprediction
rates and less with how \emph{different} kinds of aliasing affect
performance, and by how much (historically, some research on aliasing
arose because the SPECint89 and SPECint92 benchmarks used to evaluate
existing predictors contained a relatively small amount of code and
thus did not fully ``stress test'' the prediction mechanisms
\cite{Sechrest:1996:CAD:232974.232978}). With the
introduction of the \emph{gshare} (Fig.~\ref{fig:sub_gshare}) and \emph{gselect} branch
predictors \cite{Predictors93combiningbranch}, where the index into the prediction table consists of the
global history XOR'd (resp. concatenated) with the branch address,
aliasing is reasonably diminished, and in fact variants of these
predictors are implemented in modern processors. Given the importance
of this class of predictors, much work has been dedicated to the study of
their behavior
\cite{Yeh:1993:CDB:165123.165161,Chang:1996:IBP:882471.883306} and how
they can be improved, commonly using more sophisticated techniques
such as static profiling of branches. We highlight the work of Evers et
al. \cite{Evers:1998:ACP:279361.279368}, which provides an in-depth
analysis of branch correlation, determining some of the fundamental
reasons that make branches predictable. Our work is closely
related to theirs; however, we focus more on the predictors themselves:
instead of determining correlations by looking at a program's source code (which we unfortunately do not have) and
determining whether or not predictors capture these correlations, we
consider the predictions that a predictor deems correlated and
analyze the characteristics of these correlated predictions. We also
mention Kim et al. \cite{sourceclass}, who, using source code
inspection and disassembly, identify classes of difficult-to-predict
branches (such as loops associated with induction variables, branches
associated with accessing data structures, branches associated with
complex condition evaluations, etc.). Their approach is somewhat
orthogonal to ours, given that we do not have access to source code
within our framework.
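For concreteness, the index-formation schemes of gshare and gselect described above can be sketched as follows; the bit widths and the dropping of the low, alignment-related PC bits are illustrative assumptions.

```python
# gshare: XOR the branch address with the global history;
# gselect: concatenate low PC bits with recent history bits.

def gshare_index(pc, ghist, bits=12):
    mask = (1 << bits) - 1
    return ((pc >> 2) ^ ghist) & mask  # drop instruction-alignment bits

def gselect_index(pc, ghist, pc_bits=6, hist_bits=6):
    pc_part = (pc >> 2) & ((1 << pc_bits) - 1)
    hist_part = ghist & ((1 << hist_bits) - 1)
    return (pc_part << hist_bits) | hist_part
```

Under gshare, two executions of the same branch under different histories map to different entries, which is how the scheme spreads correlated contexts across the whole table instead of dedicating fixed bit positions to the PC and to the history.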

The state of the art in branch prediction has grown tremendously in
sophistication when compared to the state of the practice, which
mostly consists of variants of the predictors described above. These
variants typically consist of either optimizations (such as the
Agree predictor \cite{Sprangle:1997:APM:384286.264210}) that aim to
minimize the aliasing problem further, or \emph{hybrid} predictors
\cite{Predictors93combiningbranch,Evers:1996:UHB:232974.232975}, 
combining multiple predictors (e.g. gshare and a specialized loop
predictor). The state of the art is substantially more complex,
ranging from predictors based on simple perceptrons
\cite{Jimenez:2001:DBP:580550.876441,Seznec:2005:AOH:1069807.1070003}
to more sophisticated neural networks
\cite{Jimenez:2005:PLB:1069807.1070002}. To the best of our knowledge,
the most accurate branch predictor is the TAGE
\cite{SEZNEC:2011:HAL-00639193:1} predictor, which consists of a
combination of ideas of partial tagging \cite{Gao} and the O-GEHL
predictor \cite{Seznec:2005:AOH:1069807.1070003}. A schematic
representation of the TAGE predictor is given in Fig.~\ref{fig:tage},
showing a 5-component predictor, where different portions of the
history are used to index into the tagged components in order to
obtain a prediction.
We include some results
on this predictor as a comparison.

\begin{figure}[]
\centering
\includegraphics[width=3in]{tage}
\caption{TAGE Branch Predictor}
\label{fig:tage}
\end{figure}


\section{Methodology}
\label{sect:method}

The analyses and techniques presented in the following sections were
carried out using the simulator from the \textit{Championship
Branch Prediction} \cite{bpc} workshop to generate data on several prediction metrics,
and Matlab for the actual analysis and cross-referencing of those results.

The simulator implements a 14-stage pipelined, superscalar
processor. The pipeline is 4-wide in general, but 12-wide in the
execution stages. The simulator, in its original form, outputs a simple score
based on the misprediction penalty and the number of mispredictions of
each considered predictor. To enable our analysis, we extended the
simulator with several custom statistics, which are then fed into a
Matlab script to be analyzed\footnote{All our code is available at \url{http://code.google.com/p/bpredict/}; the traces themselves are only available at \cite{bpc}.}.

We used the program traces
supplied with the framework, consisting of approximately 2GB of
traces, divided into 5 classes according to the application behavior
(each trace class is further divided into multiple trace files):
INT (integer manipulation programs), SERVER (server program), CLIENT
(client program), WS (workstation style program), MM (multimedia
program). We will refer to the traces using these names. Due to space
constraints, we limit our presentation to one representative
trace from each class; more precisely, we will use the following
traces: INT04, CLIENT05, WS04, SERVER03 and MM04 (we omit the number
when presenting the results). Finally,
due to the
high computational cost of running our analyses, we were forced to
restrict the analyses to the first
0.5M predictions in each trace (which is about one eighth of the size of
the whole trace) in order to have concrete results ready for this report.

For the realistic predictor implementations (which serve as baselines for the
analysis), we consider the same size constraints as those in the
championship. Consequently, the baseline versions of the GShare,
Local and TAGE predictors are limited to at most 65KB.

\section{Misprediction Clustering}
\label{sect:clustering}

%%% section figures

\begin{figure*}
\includegraphics[width=\textwidth]{clusters_gshare}
\caption{Cluster size distribution for \textit{GShare} on the used test traces.}
\label{fig:clusters_gshare}
\end{figure*}

\begin{figure}
\includegraphics[width=\columnwidth]{clusters_ws_others}
\caption{Cluster size distribution for the IST-TAGE\_cond and Local predictors in the WS trace.}
\label{fig:clusters_others}
\end{figure}

\begin{figure}
\center
\scriptsize
\begin{tabular}{|c||c|c|c|c|c|}
\hline
missed ($n$) & INT & CLIENT & WS & SERVER & MM \\ \hline
1	&	0.03	&	0.11	&	0.38	&	0.30	&	0.09	\\
2	&	0.72	&	0.45	&	0.38	&	0.53	&	0.26	\\
3	&	0.65	&	0.50	&	0.36	&	0.60	&	0.42	\\
4	&	0.76	&	0.61	&	0.33	&	0.63	&	0.54	\\
5	&	0.85	&	0.63	&	0.37	&	0.65	&	0.59	\\
6	&	0.91	&	0.49	&	0.40	&	0.64	&	0.58	\\
7	&	0.43	&	0.60	&	0.31	&	0.68	&	0.59	\\
8	&	0.69	&	0.42	&	0.38	&	0.73	&	0.67	\\
9	&	0.56	&	1.00	&	0.30	&	0.68	&	0.77	\\
10	&	1.00	&	1.00	&	0.25	&	0.75	&	0.65	\\
\hline
\end{tabular}
\caption{Probability of continuing to mispredict after $n$ consecutive mispredictions; that is, given $n$ misses, the probability that the predictor also wrongly guesses the next branch. Computed using the formula (where $\mathit{cluster}_i$ is the number of clusters of length $i$):\\
$\mathit{Prob}( \mbox{missing~next} | \mbox{missed}~n ) =
\frac{ \displaystyle\sum^{\infty}_{i = n+1} \mathit{cluster}_i  }
{ \displaystyle\sum^{\infty}_{i = n} \mathit{cluster}_i }$}
\label{fig:cluster_prob}
\end{figure}

\begin{figure*}
\center
\scriptsize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Cluster}
& \multicolumn{2}{|c|}{INT} & \multicolumn{2}{|c|}{CLIENT} & \multicolumn{2}{|c|}{WS} & \multicolumn{2}{|c|}{SERVER} & \multicolumn{2}{|c|}{MM} \\ \cline{2-11}
 & GShare+ & GShare++ & GShare+ & GShare++ & GShare+ & GShare++ & GShare+ & GShare++& GShare+ & GShare++\\ \hline
1	&	71	&	72	&	89	&	65	&	-6863	&	0	&	1001	&	690	&	-407	&	-553	\\
2	&	46	&	23	&	51	&	67	&	-3653	&	0	&	235	&	394	&	557	&	535	\\
3	&	-14	&	-11	&	-27	&	-17	&	3719	&	0	&	206	&	221	&	528	&	551	\\
4	&	-5	&	5	&	1	&	4	&	2022	&	0	&	-168	&	-87	&	47	&	56	\\
5	&	-5	&	-3	&	-13	&	-8	&	569	&	0	&	-148	&	-97	&	-35	&	-18	\\
6	&	-2	&	0	&	-14	&	-12	&	626	&	0	&	-149	&	-112	&	-22	&	-21	\\
7	&	-17	&	-15	&	-8	&	-6	&	192	&	0	&	-112	&	-82	&	-14	&	-13	\\
8	&	-4	&	-3	&	-5	&	-5	&	67	&	0	&	-57	&	-43	&	-11	&	-10	\\
9	&	-4	&	-3	&	0	&	1	&	80	&	0	&	-53	&	-41	&	-3	&	-2	\\
10	&	0	&	1	&	1	&	2	&	35	&	0	&	-32	&	-22	&	-6	&	-6	\\
10+	&	-5	&	-4	&	-2	&	-1	&	38	&	0	&	-81	&	-48	&	-11	&	0	\\ \hline
avg. ($\geq 2$):	&	-2.16	&	-1.37	&	-0.31	&	-0.32	&	0.32	&	0	&	-0.56	&	-0.4	&	-0.11	&	-0.06	\\
\hline
\end{tabular}
\caption{Change in cluster size distribution (relative to the baseline gshare).}
\label{fig:clusters_change}
\end{figure*}


\begin{figure}
\center
\scriptsize
\begin{tabular}{|c|c|c|c||c|c|}
\hline
Trace	&	Gshare	&	Gshare+				&	Gshare++				&	Optimal	&	Worst	\\
\hline
INT	&	0.82	&	0.78	(	-0.04	)	&	0.79	(	-0.03	)	&	0.76	&	0.78	\\
CLIENT	&	1.03	&	1.01	(	-0.02	)	&	1.02	(	-0.01	)	&	0.93	&	1.02	\\
WS	&	28.73	&	31.73	(	3	)	&	28.73	(	0	)	&	24.67	&	31.48	\\
SERVER	&	6.71	&	6.06	(	-0.65	)	&	6.44	(	-0.27	)	&	5.01	&	6.15	\\
MM	&	6.73	&	7.08	(	0.35	)	&	7.11	(	0.38	)	&	6.44	&	6.99	\\
\hline
\end{tabular}
\caption{Misprediction rates for cluster-exploiting predictors on all test traces.}
\label{fig:misp_clusters}
\end{figure}
%%%%%%%%%%%

While we often think of mispredictions in isolation, they do not
necessarily occur that way. We call a \textit{misprediction
  cluster} a set of consecutive mispredictions by a given
predictor. Naturally, given that we wish to make predictors as
accurate as possible, we aim to minimize not just the number of such
clusters, but also their \emph{length}, which is our main focus with
respect to clusters in this work.

As we can see in Fig.~\ref{fig:clusters_gshare}, the GShare predictor
generates a number of misprediction clusters. A substantial fraction of these
have length 2 (which limits the effectiveness of our approach, as we
detail below), but there are also clusters of
greater length in all of our considered test traces. We can see a
similar pattern for the other predictors
(Fig.~\ref{fig:clusters_others}), although our focus here is solely on GShare.
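The conditional probabilities reported in Fig.~\ref{fig:cluster_prob} are computed from cluster-length histograms such as the ones behind Fig.~\ref{fig:clusters_gshare}; the computation can be sketched as follows (the histogram in the usage example is made up for illustration):

```python
def continue_prob(cluster, n):
    # cluster[i] = number of misprediction clusters of length i.
    # Prob(missing next | missed n) =
    #   sum_{i >= n+1} cluster_i  /  sum_{i >= n} cluster_i
    longer = sum(c for length, c in cluster.items() if length >= n + 1)
    at_least = sum(c for length, c in cluster.items() if length >= n)
    return longer / at_least if at_least else 0.0
```

For instance, with a (hypothetical) histogram of 50 clusters of length 1, 30 of length 2 and 20 of length 3, the probability of missing again after 2 misses is $20 / (30 + 20) = 0.4$.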

\paragraph{What causes misprediction clusters?} 
Misprediction clustering can occur due to a combination of several
factors: limitations in the predictive capabilities of a given
predictor (due to interference caused by aliasing, or to limitations of
the actual prediction mechanisms), which simply fail to capture
the branch behavior in certain circumstances; coincidence (these
limiting factors can hit multiple, potentially uncorrelated branches
simply by chance); or the impact of \emph{speculative predictions}
(i.e. predictions that have to be made while other, potentially
correlated predictions are in flight) in a deeply pipelined, superscalar
processor.

\paragraph{Do clusters impact performance?} 
Since branches are a fairly common instruction in typical programs,
it is possible that decreasing the length of a misprediction cluster
produces minimal actual performance gains: if a cluster is contained
in the same speculative window, many of the ``corrected'' predictions
might be flushed by the initial misprediction, which is impossible to
correct with the techniques of this section. Unfortunately, our simulation
infrastructure does not allow us to measure the performance metrics
required for this analysis, which must be left for future work.

\paragraph{Our approach to misprediction clusters}

Although many analyses can be made to determine characteristics of
misprediction clusters, in this work we focus solely on their length
and attempt to minimize the length of misprediction clusters
dynamically only by considering lengths of previous clusters. 
Our approach is general and simple enough that it can be
applied to virtually any branch predictor in which such clusters
potentially occur, although we have only implemented it as a hybrid
predictor on top of {\it GShare}.

We leave as future work a more extensive analysis of misprediction
clusters, such as trying to determine specific situations in which
they occur and how they are distributed across the execution of an
application (i.e. if they are related to a predictor's warm-up stage
or if they are spread evenly across the execution). In this work, we
simply detect the presence of clusters and introduce a very simple
mechanism that attempts to reduce them. Integrating such deeper
analyses might lead to potentially better results.


\subsection{Reducing misprediction clusters}

Given that we are only targeting conditional branches, inverting the
direction given by the host predictor is sufficient to correct a wrong
prediction.
Thus, we develop a mechanism that, upon detecting a potential
misprediction cluster, inverts the predictions of the host predictor
for a certain number of predictions (while training the predictor with
the correct data). This raises the problem of
detecting a cluster, and the (much harder) problem of determining when to
\emph{stop inverting} the predictions of the host predictor. We call
the number of mispredictions caused by inversions the \emph{inversion
  penalty}.
Detecting clusters is a fairly simple task, given that a cluster
technically starts when the number of consecutive mispredictions
reaches two.
Consequently, the gains of any prediction scheme we devise for this
purpose are bounded by the \textit{optimal} theoretical case, in which
we stop inverting exactly before the host predictor stops
mispredicting (that is, the inversion penalty is 0), and by the
\emph{worst} possible case, in which we only stop inverting after
making one wrong inversion.
Note that this worst case does not account for predictor update delays
caused by the outcome of a branch not yet being available, which can
cause more than one misprediction if there are predictions in flight
between the decision to invert and the point where the host predictor
resumes predicting correctly.

As mentioned above, the challenge is determining when to stop
inverting predictions. Given that this is a hard problem, it can
potentially be useful to start inverting ``later rather than sooner'',
in order to minimize the likelihood of incurring inversion
penalties. Predicting the actual length of a misprediction cluster is
a probabilistic game. Our results, presented in
Fig.~\ref{fig:cluster_prob}, show that starting to invert after 2
mispredictions may not always be the best choice. It remains an open
problem to find an algorithm that dynamically computes a good runtime
approximation of the probability values of
Fig.~\ref{fig:cluster_prob}, determining the misprediction
cluster length after which the predictor should start inverting (so as
to reduce the likelihood of inverting a correct prediction).
We therefore simplified our approach by always
beginning the inversion phase after two consecutive mispredictions and
only varying the stopping point. We consider two techniques to
determine the stopping point: stopping after making a wrong inversion
(which we call \emph{gshare+}), and a slightly more sophisticated
mechanism (\emph{gshare++}) in
which we track the average cluster size and stop inverting after the
sum of the average and the standard deviation (or after a
misprediction caused by an inversion).
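As an illustration, the gshare+ policy can be sketched as a thin wrapper around the host predictor. This is our own simplified rendering, not the simulator code; gshare++ would additionally track the running mean and standard deviation of observed cluster lengths to bound the inversion phase.

```python
class GsharePlus:
    """Invert host predictions after 2 consecutive mispredictions;
    stop after the first wrong inversion (the gshare+ policy)."""

    def __init__(self):
        self.consec_miss = 0    # current run of host mispredictions
        self.inverting = False  # inversion phase active?

    def predict(self, host_pred):
        return (not host_pred) if self.inverting else host_pred

    def update(self, host_pred, outcome):
        host_correct = (host_pred == outcome)
        if self.inverting and host_correct:
            self.inverting = False  # wrong inversion: pay the penalty, stop
        if host_correct:
            self.consec_miss = 0
        else:
            self.consec_miss += 1
            if self.consec_miss >= 2:
                self.inverting = True  # cluster detected: start inverting
```

For example, if the host always predicts taken while the branch is not taken three times in a row, the wrapper mispredicts twice, corrects the third outcome by inverting, and then pays a single inversion penalty when the host becomes correct again.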

Additionally, our implementation contains an unintended simplification:
our code checks for a wrong inversion
(i.e. whether the host predictor was correct) at the fetch stage, which amounts
to knowing the outcome of a branch without having to wait until the
branch reaches the execution stage.
This was an accidental bug that we were unable
to correct and re-run the tests in time for this report. However, it
merely implies that, unlike in a real implementation, we should never
perform worse than the theoretical worst case described above,
since the effects of a speculative prediction will not cause us to
delay the decision on whether or not to keep inverting.

\subsection{Results for Misprediction Inversion}

Our results (Fig.~\ref{fig:misp_clusters}) show an overall mixed performance impact for the techniques described above,
which is not completely surprising given that the majority of the
clusters have a length of two, and we can only correct clusters with
lengths strictly greater than that. Regardless, our very simple technique can reduce
misprediction clusters and can produce some gains, albeit small ones.

Fig.~\ref{fig:clusters_change} shows a more detailed breakdown of the
impact of our predictors on the cluster distribution. The results show
that, on average, we are effectively breaking down the longest
clusters of gshare (which naturally increases the number of smaller
ones).
Although the technique is usually beneficial in reducing clusters,
there are some cases (such as the WS trace) where
we actually worsen the misprediction clustering problem. This is
directly linked to the probability of a cluster continuing being too
low in that trace (see Fig.~\ref{fig:cluster_prob}), which causes our
simple technique to wrongly invert the host predictor an excessive
number of times.
Interestingly, although it usually provides smaller gains, the gshare++
variant adjusts its inversion decision more adequately and avoids the
negative impact present in gshare+.

Although we do not include the data here, our results also show that removing speculative predictions has a minimal impact on the cluster distribution (though it does eliminate some clusters).

Finally, Fig.~\ref{fig:misp_clusters} also shows one case where the performance of gshare++ is actually worse than the theoretical worst-case bound. This is most likely due to a bug in the implementation; although we did not have a chance to inspect the actual cause, this should never occur in a correct implementation, since the stopping mechanism should never pay more than one misprediction to return to the host's (correct) prediction.

\section{Analyzing Mispredictions}\label{sect:classification}


Our analysis begins by identifying classes of branches based on their
taken/not-taken behavior in a given trace. Since we do not have the actual source code behind a trace, we are unable to perform any sort of static analysis to further classify the branches. Consequently, our classification effort is based only on the \textit{observed} branch behavior, even if, in some cases, a branch's classification might be the result of chance (e.g. a specific program input may cause a branch to be always taken).

We distinguish several classes of (seemingly) \emph{static} branches: \emph{always} and \emph{never} taken
branches, which we further divide into branches that are always taken
but executed only \emph{once}, or never taken and executed only once (we
call this class \emph{zero}). This additional
distinction allows us to separate branches that should, in some sense,
be easy to predict (branches that are executed multiple times, always
with the same behavior) from branches that depend only on the initial
prediction value (and for which branch prediction can do little). We also consider an \emph{alternating} branch class
(branches that follow the patterns $(01)^{+}$ or $(10)^{+}$), a
\emph{loop} branch class (branches whose target is a lower
address than the branch's own address), a \emph{strict} block pattern class (patterns of the form
$(0^n1^m)^{+}$ or $(1^n0^m)^{+}$), a non-strict \emph{block} pattern class
(similar to strict block patterns, but where the first block can be of
a different length, to account for the fact that these traces are arbitrarily cut-out slices of a much larger execution), a \emph{complex} pattern class (detected by matching a branch's behavior against circular shifts of itself, up to 32 times, thus detecting recurring but more complex patterns) and all \emph{other} branches, which present a
somewhat arbitrary behavior that cannot be captured by any of the previously defined classes (which may amount to random behavior or just harder-to-learn patterns).
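A simplified sketch of this pattern-based classification over the observed outcome string of a single branch follows. It covers only a subset of the classes above (detection of the loop, non-strict block and complex classes is omitted), and the exact matching rules are our own assumptions.

```python
import re

def classify(outcomes):
    # outcomes: one '0'/'1' character per dynamic execution of the branch.
    if set(outcomes) == {'1'}:
        return 'once' if len(outcomes) == 1 else 'always'
    if set(outcomes) == {'0'}:
        return 'zero' if len(outcomes) == 1 else 'never'
    if re.fullmatch(r'(01)+|(10)+', outcomes):
        return 'alternating'
    # strict block: (0^n 1^m)+ or (1^n 0^m)+, same block repeated
    # (the backreference enforces identical repetitions)
    if re.fullmatch(r'(0+1+)\1*|(1+0+)\2*', outcomes):
        return 'strict-block'
    return 'other'
```

For example, the history "00110011" falls into the strict block class, while "001101" falls through to the other class.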


\begin{figure}
\includegraphics[width=\columnwidth]{classes}
\caption{Branch classification (stacked).}
\label{fig:class_br}
\end{figure}

Our goal with this classification is to determine the types of branches
that are most common in programs. The results of this analysis are
presented in Fig.~\ref{fig:class_br} (and in more detail in the
appendix, Fig.~\ref{fig:class_table}). It turns out that most branches
are purely static (always taken or never taken), making up almost
90\% of all considered traces. However, of the remaining 10\%, a
substantial chunk consists of branches in the \emph{other} class.


By cross-referencing the branch classification data with the 
mispredictions (Fig.~\ref{fig:miss_decomp}) of our analyzed predictors,
we can observe (Fig.~\ref{fig:class_miss}) that, despite the clear bias towards static branches, the most
significantly mispredicted branches were in fact members of the
\emph{other} class, which is not too surprising given the inherently
unpredictable patterns of such branches. We can also see
that loops, never taken and even always taken branches, which
one would intuitively think of as easy to predict, can still make up
a relatively large portion of the mispredicted branches, in
gshare (from 10 to 50\%), in the local predictor (5 to 50\%), and even in the state-of-the-art TAGE predictor (4 to 40\%). 


\begin{figure*}
\includegraphics[width=\textwidth]{wrongs_classes}
\caption{Classifying mispredicted branches.}\label{fig:class_miss}
\end{figure*}

We leave as future work a more detailed decomposition of the
\textit{other} class, since it is such a considerable source of
mispredictions (ideally along the lines of \cite{sourceclass}). 
Similarly, the loop class could be further subdivided, since we do not consider the loop's actual behavior, only that it jumps to a lower address.

Branch classification is a useful tool to identify the classes of branches that are more frequently being mispredicted. However, it does little to explain \textit{why} they were mispredicted, since the limitations of the predictor are not clearly exposed in this metric.
To further determine the causes of these somewhat expected and unexpected mispredictions, we deepen our
analysis of the gshare and the local branch predictors by carefully analyzing how their internals interact and influence the prediction outcome. 

Thus, our goal for the rest of this section is to develop a more accurate characterization of the performance of these predictors by decomposing them into two core components: the \textit{correlation mechanism} and the \textit{prediction mechanism} (although the performance of the latter is invariably influenced by the decisions of the former). That is, the correlation mechanism produces a \textit{training stream} that may or may not be adequate for a specific prediction mechanism. Ideally, we would evaluate the training stream produced by a given correlation mechanism (such as global history, stack history, etc.) and then measure the properties of the resulting stream to see how adequately it can be predicted by some prediction mechanism (2-bit counter, perceptron, etc.). With these metrics, we could extend the previous classification of wrong predictions by also determining how to best predict a specific class or correlated class, information that could potentially be used to design a hybrid predictor specially targeted at such corner cases.
Since there are many correlation and prediction mechanisms, this
metric could determine the best choice, or even drive a dynamic
change, based on how accurate the predictions are for a specific class of branches.

\subsection{Correlation Analysis}\label{sect:correlation}

In this section we develop the idea of a \emph{correlation
  set}. Correlations in a branch predictor are determined by the
indexing into a prediction table: branches that, at a given
time, index into the same table entry are considered correlated
by the predictor (regardless of what caused this sharing, which may,
for example, simply be aliasing due to table size limits).

Our analysis consists of identifying, for a given predictor, the sets
of predictions that are considered correlated, regardless of the
reason; we call these the predictor's \emph{correlation sets}. Then, by decomposing the
predictor into versions that progressively eliminate certain
ambiguities in indexing, we track whether the initial predictor's
correlation set performance improves with these variations. The goal is to detect which correlation sets
stand to gain or lose from further distinctions in their indexing,
as well as to inspect and characterize a predictor's
correlation sets in general. This gives us a metric to better
understand the impact of phenomena such as \emph{destructive
  interference} (branches with conflicting behavior patterns that are assumed
to be correlated but can destroy each other's
predictions, such as mixing an always taken branch with a never taken one).

We perform the decomposition mentioned above only for the GShare and
Local branch predictors. We chose these two mostly because they are
the \emph{de facto} state-of-the-practice standards, and also due to
time constraints. A similar analysis is possible for the TAGE
predictor, but given the substantially higher complexity of TAGE's
indexing mechanism, we leave it for future work. 
For GShare, we decompose it into four predictors: 
\begin{enumerate}
\item \textit{baseline} predictor - 
indexes a limited-size prediction table using the global branch history XOR'd
with the branch address (taken modulo the table
size to correctly bound the index); 
$$\mathit{index}_{\mathit{baseline}} = (~ \mathit{History} \oplus \mathtt{PC} ~)~\%~\mathtt{TABLE\_SIZE}$$
\item \emph{unlimited} size predictor - the prediction table is assumed to be potentially
infinite in size (thus eliminating any collisions that could arise due to
ambiguities generated by index bounding);
$$\mathit{index}_{\mathit{unlimited}} = \mathit{History} \oplus \mathtt{PC}$$
\item  \emph{no collisions} version - the prediction table is assumed infinite
and is indexed by concatenating the global history with the branch
address, thus eliminating ambiguities due to the XOR operation; 
$$\mathit{index}_{\mathit{no \textendash collisions}} = \left< \mathit{History} , \mathtt{PC} \right>$$
\item \emph{path} history version - the prediction table is infinite
and the indexing is performed by using the concatenation
of the global branch target and direction histories and the
branch address (this eliminates ambiguities in which the same branch
is reached with the same global history but through completely
different control flow paths in a program). 
$$\mathit{index}_{\mathit{path}} = \left< \mathit{Path}, \mathit{History} , \mathtt{PC} \right>$$
\end{enumerate}
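The four indexing schemes can be summarized in a short sketch (Python used for illustration; \texttt{TABLE\_SIZE} matches the baseline's $2^{18}$ entries, and representing concatenation as a tuple key into an unbounded table is a choice of this sketch, not of our simulator):

```python
TABLE_SIZE = 2 ** 18  # baseline table size (illustrative constant)

def gshare_indices(pc: int, history: int, path: tuple):
    """Index computed by each of the four decomposed GShare variants.

    `history` is the global direction history as an integer and `path` is
    some hashable summary of recent branch targets (names illustrative).
    """
    base      = (history ^ pc) % TABLE_SIZE   # baseline: bounded table
    unlimited = history ^ pc                  # unbounded table, XOR kept
    no_coll   = (history, pc)                 # concatenation removes XOR aliasing
    path_idx  = (path, history, pc)           # adds path history on top
    return base, unlimited, no_coll, path_idx
```

Two distinct (history, address) pairs with the same XOR collide in the \emph{unlimited} variant but are distinguished by the \emph{no collisions} one, which is precisely the ambiguity each step removes.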


A similar decomposition is done for the local predictor\footnote{Note: an unexpected result has caused us to suspect that either our
implementation has a bug, or this decomposition of the Local predictor
does not obey our assumption of creating smaller, more isolated
correlation sets, although the reason why
this might be the case is still unclear.}, for which we
produce the baseline predictor with \emph{limited} table size, an
\emph{unlimited} version with no restrictions on table size, and a
\emph{no collisions} version, in which we eliminate the XOR
ambiguities (a path version does not make sense for the local predictor,
since it does not consider global histories). The indexing formulas
are identical to the previous case, except that a branch's local
history is used instead of the global history.

\paragraph{Performance results} Although the idealized (decomposed)
versions are unimplementable in practice, their misprediction rates
are given in Fig.~\ref{fig:miss_decomp}. We can see some small
gains from diminished aliasing, but they are not as significant
as one might expect (this was the predominant pattern
in the rest of the trace set, which we do not include due to space
constraints). These results also indicate that, while a negative effect can
be observed, aliasing more often produces \emph{positive} or at the
very least ``neutral'' effects (this is more noticeable in the GShare
predictors than in the Local ones, since there is inherently more
sharing of table entries in the former).

\begin{figure*}
\includegraphics[width=\textwidth]{misp_bars}
\caption{Misprediction rates of all predictors.}
\label{fig:miss_decomp}
\end{figure*}

\paragraph{Increasing Prediction Table Size}
Before considering the results of the correlation set analysis, we can
observe that the misprediction rates of the idealized versions are
quite similar amongst themselves; most of the change appears when we
grow the prediction table from the baseline size to an unbounded one.
To determine the sensitivity of these rates, we also simulated the baseline GShare predictor with a
larger table size\footnote{For a 2-bit counter, the total cost of the baseline $2^{18}$-entry table is 64KB. Our larger version uses $2^{24}$ entries, thus costing 4MB.}, with results also given in
Fig.~\ref{fig:miss_decomp}. As with the idealized versions, reducing aliasing can actually \emph{hurt}
performance, although in most other traces (not shown in the report) the decomposed GShare predictors perform
somewhat better than the baseline versions. Interestingly, the larger GShare often outperforms the idealized ones, which hints that the different correlations it assumes improved its predictions, and therefore that it is more important to design a better correlation mechanism than to blindly increase the table size.


Although this monolithic metric gives us an overall picture of a
predictor's performance, it yields little information on why the 
predictor actually benefits from the increased size, on how many 
predictions at the initial size were actually interfering in the first 
place, or on how much of the table was even ``useful''. That is, without 
knowing how and why the indexing affects the prediction, we 
cannot know whether simply increasing the table size improves 
performance because it separates interfering branches, or whether it also 
reduces performance by removing useful correlations whose 
penalty is hidden by the improvements of the former effect.

\paragraph{Correlation results} 
We now aim to evaluate how a predictor is benefiting (or not) from
using more space. This additional storage has the consequence of 
reducing indexing collisions which implies that the predictor is
assuming less correlation between predictions. Consequently, as 
we progress to more idealized versions - that allow less indexing 
collisions - the number of distinct correlation sets increases, but 
each set contains a smaller number of predictions. To decide if a 
decomposition was useful we simply compare the misprediction 
of a set with that of its more idealized versions. Since the higher 
the number of collisions the more feasible it is to implement such 
a predictor, we break ties by considering the most constrained
predictor to be the best solution.

We further exemplify the reasoning behind this technique with the
following two decompositions, where all the
predictions in a correlation set occur in sequence (in reality, they
are typically spread throughout an execution). Each
stream of ones (taken) and zeros (not taken) represents a predictor's
($\mathtt{P}$) guesses. Each predictor sees the same
requests for predictions; it just organizes how to learn from them
(i.e., whether they are correlated) differently. Thus we have:

$$\mathtt{P1:} \ldots \overbrace{01011010}^{A} \overbrace{01111010}^{B}\ldots$$
$$\mathtt{P2:} \ldots \underbrace{0101}_{A_1}\underbrace{0101}_{A_2} \underbrace{1111}_{B_1}\underbrace{0111}_{B_2}\ldots$$

where the predictions that $\mathtt{P1}$ deems correlated are
decomposed into several sub-correlation sets in the more idealized
(fewer indexing collisions) predictor $\mathtt{P2}$, so that the
prediction sets obey
$$A = A_1 \cup A_2 \qquad B = B_1 \cup B_2 $$
although the actual predictions made by $\mathtt{P1}$ and
$\mathtt{P2}$ differ.

With this monotonic decomposition we are testing how adequate it is to
assume less deeply correlated predictions. Intuitively, there are
two kinds of results: either a correlation set works best exactly as the
predictor uses it (meaning the correlation set works
\textit{best} for that training stream and the predictions of that set
are all correlated, for instance if $A$ yields a lower misprediction
rate than $A_1 + A_2$), or there is only a
\textit{partial} correlation 
between the predictions in a set (some of the predictions
benefit from being grouped together, but some do not). 
The latter is the case when the $A_1$ set works best in
isolation, but the predictions of $A_2$ work best in $A$; that is, 
the training of $A_1$'s prediction mechanism does not depend 
on the behavior of $A_2$, but the opposite is not true.
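The decision procedure underlying this comparison can be sketched as follows, using a 2-bit saturating counter as the prediction mechanism; the streams below are toy examples, since in a real trace the predictions of a set are interleaved with the rest of the execution:

```python
def mispredictions(stream, n_bits=2, init=0):
    """Mispredictions of a saturating n-bit counter trained on `stream`
    (a sequence of 0/1 outcomes); predicts taken in the upper half."""
    top = (1 << n_bits) - 1
    counter, miss = init, 0
    for outcome in stream:
        pred = 1 if counter > top // 2 else 0
        miss += pred != outcome
        counter = min(top, counter + 1) if outcome else max(0, counter - 1)
    return miss

def split_helps(merged, parts):
    """True if predicting each sub-set in isolation beats the merged set,
    i.e. if the decomposition of the correlation set was useful."""
    return sum(mispredictions(p) for p in parts) < mispredictions(merged)
```

For example, a merged set that interleaves an always-taken and a never-taken branch alternates outcomes and mispredicts heavily, whereas its two sub-sets, trained in isolation, are almost perfectly predicted, so the split is kept.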

\begin{figure}
\center
\scriptsize
% -
\begin{tabular}{rr|c|c|c|}
\multirow{3}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{INT}:}} &\multicolumn{3}{c}{Prediction Resolution} \\ \cline{3-5}
&	& base	&	no collisions	&	unlimited	\\ \cline{2-5}
& \multicolumn{1}{|r|}{base}	&	78.9822	&	13.509	&	7.2892	\\ \cline{2-5}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	0.1654	&	0.0444	\\ \cline{2-5}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0.0098	\\ \cline{2-5}
\end{tabular}
% -
\begin{tabular}{rr|c|c|c|}
\multirow{3}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{CLIENT}:}} &\multicolumn{3}{c}{Prediction Resolution} \\ \cline{3-5}
&	& base	&	no collisions	&	unlimited	\\ \cline{2-5}
& \multicolumn{1}{|r|}{base}	&	97.0668	&	2.1738	&	0.3764	\\ \cline{2-5}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	0.3734	&	0.0012	\\ \cline{2-5}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0.0084	\\ \cline{2-5}
\end{tabular}
% ---
\begin{tabular}{rr|c|c|c|}
\multirow{3}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{WS}:}} &\multicolumn{3}{c}{Prediction Resolution} \\ \cline{3-5}
&	& base	&	no collisions	&	unlimited	\\ \cline{2-5}
& \multicolumn{1}{|r|}{base}	&	72.4945	&	15.5958	&	0.1886	\\ \cline{2-5}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	11.6849	&	0.0304	\\ \cline{2-5}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0.0058	\\ \cline{2-5}
\end{tabular}
% ---
\begin{tabular}{rr|c|c|c|}
\multirow{3}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{SERVER}:}} &\multicolumn{3}{c}{Prediction Resolution} \\ \cline{3-5}
&	& base	&	no collisions	&	unlimited	\\ \cline{2-5}
& \multicolumn{1}{|r|}{base}	&	75.8458	&	18.5276	&	2.0122	\\ \cline{2-5}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	3.3754	&	0.1474	\\ \cline{2-5}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0.0916	\\ \cline{2-5}
\end{tabular}
% ---
\begin{tabular}{rr|c|c|c|}
\multirow{3}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{MM}:}} &\multicolumn{3}{c}{Prediction Resolution} \\ \cline{3-5}
&	& base	&	no collisions	&	unlimited	\\ \cline{2-5}
& \multicolumn{1}{|r|}{base}	&	60.4185	&	28.4679	&	5.7122	\\ \cline{2-5}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	4.1124	&	0.8968	\\ \cline{2-5}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0.3922	\\ \cline{2-5}
\end{tabular}
\caption{Best combination of correlations 
(shown as \% of total predictions done) -- Local predictor.}
\label{fig:corr_set_local}
\end{figure}

This technique is useful to evaluate a hypothetical optimal
combination of all the idealized variants of a predictor, so that 
we can see how many of the actual predictions benefit from 
the loss of aliasing, that is, which predictions are in fact not
correlated within a set. These results are presented in 
Fig.~\ref{fig:corr_set_local} for the Local predictor (and in
Fig.~\ref{fig:corr_set_gshare} in the appendix for GShare). Each row denotes gains 
when training a specific correlation set at a particular resolution
level (i.e., when to update a counter); columns show how the predictions 
benefit from using more fine-grained indexing into the prediction
table. 
For instance, in the case of the INT trace for the
local predictor, we see that almost 79\% of the predictions work best
when considering the base indexing mechanism. Thus, these 
predictions form correlation sets that never gain any benefit 
(in terms of reduced misprediction rate) when they are further split. 
However, around 14\% of the predictions work best when, 
although their counters are trained using that indexing, the
prediction mechanism only uses 
those counters for doing some of the predictions that are contained
in the sets. This means that some predictions of the larger base sets
work well together, but some do not: they are only partially 
correlated. Consequently, the diagonal of each table shows how well 
each idealized predictor models behavior using solely 
its indexing mechanism, while the remaining cells show when there 
is only a partial benefit and the breakdown of a lower-resolution set
is not as clear cut. In such cases, the optimal predictor 
would have to train the prediction mechanism at more than one 
level and be able to pick between them when making a specific
prediction. Unfortunately, we were unable to devise a
mechanism to find a better correlation algorithm that could
approximate this behavior at runtime, or detect some 
pattern alerting us to the presence of such cases.

By combining these optimal correlation sets, we can build a
theoretical predictor that only uses the best of all variants (in 
the proportions given by the tables) and see
(Fig.~\ref{fig:miss_decomp} - best correlation) that it would always
perform better. Thus, we are led to conclude that the baseline 
indexing mechanism, although usually effective, sometimes wrongly correlates
predictions (complete uncorrelation being very rare). With these results, we can observe that 
a more accurate correlation mechanism could perfectly approximate 
the performance of the idealized predictors in finite space, since only some of the 
predictions actually benefit from the additional separation.

Finally, we analyze the characteristics of the baseline correlation
sets in Fig.~\ref{fig:wtf_local} and Fig.~\ref{fig:wtf_gshare} (the
three bottom graphs will be explained in the next section). 
We can see that, for those traces, the overall table utilization is 
somewhat low, with many entries being used only once. 
This hints that more efficient uses of space could be achieved if a
clever correlation mechanism grouped more predictions together, 
especially since we also see that most of these entries are used only sporadically.
Similarly, we could also have inspected the temporal use of the table 
to see whether an entry is used frequently, essentially only once, or in blocks.
This result further motivates the caching mechanism of TAGE as a way
to better utilize these mostly unpopulated large tables.

Another interesting observation is the number of sets with destructive interference due to conflicting
branch classes. A set has conflicting branch classes if it is used to
predict branches that consistently go in opposite directions, 
such as always and never taken branches. Our results show that around 
15\% of GShare's correlation sets exhibit this destructive
interference pattern, indicating that a reasonable number of
mispredictions in GShare occur precisely due to this factor.
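A minimal sketch of such a check (the class names follow our earlier taxonomy; the exact flagging criterion used in our tooling may differ slightly):

```python
def has_destructive_interference(branch_classes) -> bool:
    """Flag a correlation set that mixes branches with consistently
    opposite behavior, e.g. always-taken vs. never-taken members.

    `branch_classes` holds the observed class of every branch the set
    is used to predict (class names follow the earlier taxonomy).
    """
    classes = set(branch_classes)
    taken_biased     = {"always", "one"}   # consistently taken classes
    not_taken_biased = {"never", "zero"}   # consistently not-taken classes
    return bool(classes & taken_biased) and bool(classes & not_taken_biased)
```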


\begin{figure*}
\includegraphics[width=\textwidth]{corr_pro_gshare_WS}
\caption{Some relevant correlation set characteristics for the GShare predictor on the WS trace (120,062 sets found out of a maximum of 262,144, using about 45.8\% of the predictor's total storage space).}
\label{fig:wtf_gshare}
\end{figure*}

\subsection{Training Stream Analysis}\label{sect:stream}


We further deepen our analysis of correlation sets by analyzing their
\emph{training stream}, that is, the stream of branch direction outcomes that is
used to train the correlation set's prediction mechanism. The idea of
this analysis is to determine the adequacy of an $N$-bit counter in
modeling this stream, and to determine the warm-up value sensitivity (how
long until the prediction mechanism's initial value becomes
irrelevant). To do this, we observe that an $N$-bit counter is
fundamentally a last value predictor, where there is a 
``slack'' of $N$ that allows the predictor to tolerate alternating
sequences of outcomes in the training stream, fundamentally exploiting
repeating values in the stream. Thus, to determine the adequacy of the
counter, we keep track of the lengths of consecutive repeating values,
or \emph{bursts} in the training stream, the transition/noise length
(consisting of sequences of alternating outcomes) between bursts, and
how many bursts flip/switch their outcomes between transitions (i.e. a
burst consisting of zeros, followed by a transition, followed by a
burst consisting of ones). The idea is that in the absence of a
switching transition, the $N$-bit counter needs to be large enough to
tolerate the full length of the transition and maintain its
prediction, while in the presence of switching transitions, the
$N$-bit counter should be small since the predictor is better off
switching its prediction faster. To determine the warm-up value
sensitivity, we keep track of how many training examples a prediction
set must observe before the prediction mechanism fully changes its
outcome (i.e., becomes saturated), making the initial
value irrelevant.
The result of this analysis for the GShare predictor is presented in
Fig.~\ref{fig:wtf_gshare} (bottom three graphs); the corresponding
results for the Local predictor are in the Appendix
(Fig.~\ref{fig:wtf_local}).
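The burst/transition bookkeeping described above can be sketched as follows; the run-length threshold separating bursts from noise is an illustrative parameter of this sketch:

```python
from itertools import groupby

def burst_profile(stream, min_burst=2):
    """Split a training stream into bursts (runs of at least `min_burst`
    identical outcomes) and noise (shorter alternating runs), and count
    burst flips: consecutive bursts whose outcome switches, e.g. a run
    of zeros followed, after some noise, by a run of ones."""
    runs = [(value, len(list(group))) for value, group in groupby(stream)]
    bursts = [(v, n) for v, n in runs if n >= min_burst]
    noise = sum(n for _, n in runs if n < min_burst)
    flips = sum(1 for (a, _), (b, _) in zip(bursts, bursts[1:]) if a != b)
    return bursts, noise, flips
```

On the stream $0000\,1\,0\,111$ this reports two bursts (four zeros, three ones), two outcomes of noise, and one burst flip.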


Our results show that the overall number of burst flips is small, 
that the noise (relative to the whole size of the set) is also relatively
small, and that the warm-up value affects a prediction stream only for a
limited amount of time (sets that never warm up tend to be sets
that are rarely used; this
additional decomposition is not shown in the figures). 
We can thus conclude that a 2-bit counter is typically a
good prediction mechanism for modeling the training stream, although
we observe corner cases that expose its limitations and where
differently sized counters could have been beneficial for the prediction.


\paragraph{Impact of Delayed Predictor Updates}
We have also observed that there are often situations where the 2-bit
counter adequately models the training stream (by the analysis and
reasoning of the previous section) and the correlation set does not
contain interfering types of branches, yet it still incurs more
mispredictions than those required for warm-up. The reason is
delayed predictor updates: the high number of in-flight branches,
due to pipeline depth and width, causes predictions to be made before
the completed training stream is reflected in the prediction
mechanism. Although the impact of this is not prohibitive on the
overall prediction accuracy (Fig.~\ref{fig:miss_decomp} -- cheat), it
does affect a few sets, lowering the overall accuracy of the predictor.
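A toy model of this effect, where a saturating counter only sees an outcome after a fixed number of newer predictions have been issued (the fixed delay is a simplification; in a real pipeline the update latency varies per branch):

```python
def delayed_mispredictions(stream, delay=0, n_bits=2):
    """Mispredictions of a saturating n-bit counter whose training
    updates arrive `delay` predictions late, mimicking in-flight
    branches in a deep and wide pipeline."""
    top = (1 << n_bits) - 1
    counter, miss, pending = 0, 0, []
    for outcome in stream:
        pred = 1 if counter > top // 2 else 0
        miss += pred != outcome
        pending.append(outcome)
        if len(pending) > delay:              # outcome only resolves now
            resolved = pending.pop(0)
            counter = min(top, counter + 1) if resolved else max(0, counter - 1)
    return miss
```

With a stream of four not-taken outcomes followed by eight taken ones, the counter with no delay mispredicts only during retraining, while a delay of four predictions triples the mispredictions on the same stream.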

\section{Concluding Remarks}\label{sect:conc}

In this project we have analyzed several limitations of two state-of-the-practice
predictors: Local and GShare. First, by analyzing the 
phenomenon of misprediction clustering, we developed a very simple 
and cheap hybrid predictor that successfully reduces the most extreme 
cases of this ailment, despite failing to provide significant overall performance gains.

We created a new in-depth method to evaluate the performance of the
two considered predictors by carefully analyzing how their failures
correlate with their two main interacting internal components: the 
correlation mechanism and the prediction mechanism. Using branch 
classification, we were able to identify the classes that were the
main source of mispredictions in our test traces. To further diagnose 
a predictor's problematic decisions, we showed that its indexing is most 
often adequate, although a few predictions would have 
benefited from further isolation. Finally, we went even further by
also analyzing the characteristics of the training stream 
produced by such indexing correlations; we found that, on average, 
these streams can be adequately modeled by a 2-bit counter. Similarly, by
applying branch classification at a lower level, inside a predictor's 
correlation sets, we detected the presence of \textit{destructive
  interference}, caused by the correlation mechanism occasionally 
indexing conflicting classes of branches to the same table entry. 
Likewise, we saw that these predictors greatly under-utilize their
allocated space, causing many predictions to rely on the initial value
of the predictor.

More than the brief results presented here, we believe our 
methodology of both inspecting the different components 
of a prediction and cross-referencing metrics
can lead to a better understanding of the performance and
the limitations of a predictor, beyond what is 
captured by a simple misprediction rate. Although some of these 
metrics are especially targeted at a particular predictor, they are
relevant in providing insights on what it does right (so that 
the usefulness of a hybrid approach targeting a specific kind of
branch can be assessed) and on how and why it does or does not benefit
from additional state. For instance, we see that increasing the table
size of gshare, although yielding some performance gains, is mostly
unnecessary, as the real bottleneck of the prediction is the
correlation scheme, which is incapable of properly classifying or
separating predictions. 
Consequently, instead of relying on almost coincidental performance
increases, the added space should be spent on a more intelligent 
correlation mechanism (which we were, unfortunately, not able to
explore). Although such a mechanism still eludes us, it could
potentially perform an adequate correlation that properly 
approximates the performance of a larger table without actually 
requiring much additional space, since we have shown that only a 
small number of the correlation sets benefit from the added separation.

We also leave for future work some interesting questions that require
further cross-referencing of these metrics, such as correlating the
wrong predictions with a specific interference and with warm-up values. 
Similarly, we have only inspected \textit{smaller} correlation sets
(when compared to the baseline). Larger ones (such as those based on 
stack depth and others) could also yield good correlations,
although designing a mechanism to pick a good correlation metric 
(i.e., how big the history should be, and how much of it is important and
where) is a problem that requires a much more efficient analysis to 
produce results in a reasonable time-frame. Additionally, it might 
also be relevant to combine such analysis with some sort of static 
source code analysis, which may produce more meaningful classes 
or further insights into good initial values.

Besides decomposing more state-of-the-art predictors, it would also 
be interesting to have a larger 
set of metrics to classify a training stream as being adequately 
modeled by a perceptron, a piecewise-linear predictor, or other
prediction mechanisms (e.g., whether the stream is linearly
separable). Such insights could 
potentially link a specific class to a cheap and efficient prediction 
model that could then be used in a hybrid predictor.
There is a whole spectrum of ground to explore in this kind of
evaluation of branch predictors, which we believe could give a better
picture of their performance and efficiency.

To summarize: our basic methodology is to deeply analyze the
performance of a predictor's core components, determine how each 
contributes to the prediction process, and compare with alternative 
solutions for those components (either by simulation, as in the 
correlation analysis, or by characterization, as in the training
stream case), so as to provide a more detailed window on how the
predictor can be further improved. We claim that such metrics can be 
helpful in addressing performance bottlenecks in predictors and in
providing insights on how and which predictors might benefit from 
hybrid solutions or additional space.



\begin{footnotesize}
\bibliography{ref}
\end{footnotesize}

\appendix
\section{Extra Results}

\begin{figure*}
\center
{\small
\begin{tabular}{|c|cc|cc|cc|cc|cc|}
\hline
\multirow{2}{*}{Class:} & \multicolumn{2}{c}{INT} & \multicolumn{2}{|c|}{CLIENT} & \multicolumn{2}{|c|}{WS} & \multicolumn{2}{|c|}{SERVER} & \multicolumn{2}{|c|}{MM} \\ 
\cline{2-11}
 & \# & \% & \# & \% & \# & \% & \# & \%  & \# & \% \\ \hline
one	&	79	&	24.38	&	29	&	3.68	&	121	&	11.36	&	562	&	5.16	&	50	&	13.3	\\
zero	&	100	&	30.86	&	48	&	6.09	&	138	&	12.96	&	1154	&	10.59	&	75	&	19.95	\\
always	&	24	&	7.41	&	227	&	28.81	&	322	&	30.23	&	2700	&	24.77	&	49	&	13.03	\\
never	&	86	&	26.54	&	370	&	46.95	&	363	&	34.08	&	5279	&	48.43	&	101	&	26.86	\\
loop	&	10	&	3.09	&	19	&	2.41	&	36	&	3.38	&	334	&	3.06	&	18	&	4.79	\\
alternating	&	7	&	2.16	&	6	&	0.76	&	20	&	1.88	&	74	&	0.68	&	5	&	1.33	\\
strict	&	7	&	2.16	&	24	&	3.05	&	36	&	3.38	&	215	&	1.97	&	8	&	2.13	\\
block	&	2	&	0.62	&	0	&	0	&	0	&	0	&	41	&	0.38	&	1	&	0.27	\\
complex	&	0	&	0	&	0	&	0	&	0	&	0	&	0	&	0	&	0	&	0	\\
others	&	9	&	2.78	&	65	&	8.25	&	29	&	2.72	&	541	&	4.96	&	69	&	18.35	\\ \hline
TOTAL:	&	324	&	100	&	788	&	100	&	1065	&	100	&	10900	&	100	&	376	&	100	\\ \hline
\end{tabular}}
\caption{Branch classification for each trace (precise breakdown).}\label{fig:class_table}
\end{figure*}

\begin{figure*}
\centering
\scriptsize
\begin{tabular}{rr|c|c|c|c|}
\multirow{4}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{INT}:}} &\multicolumn{4}{c}{Prediction Resolution} \\ \cline{3-6}
&	& base	&	no collisions	&	unlimited & path	\\ \cline{2-6}
& \multicolumn{1}{|r|}{base}	&	79.4452	&	13.5042	&	0	&	0 \\ \cline{2-6}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	7.0506	&	0	&	0 \\ \cline{2-6}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0	&	0	\\ \cline{2-6}
& \multicolumn{1}{|r|}{path}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	\cellcolor{gray} 	& 0	\\ \cline{2-6}
\end{tabular}
% ----
\begin{tabular}{rr|c|c|c|c|}
\multirow{4}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{CLIENT}:}} &\multicolumn{4}{c}{Prediction Resolution} \\ \cline{3-6}
&	& base	&	no collisions	&	unlimited & path	\\ \cline{2-6}
& \multicolumn{1}{|r|}{base}	&	98.1536	&	1.1576	&	0	&	0 \\ \cline{2-6}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	0.6888	&	0	&	0 \\ \cline{2-6}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0	&	0	\\ \cline{2-6}
& \multicolumn{1}{|r|}{path}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	\cellcolor{gray} 	& 0	\\ \cline{2-6}
\end{tabular}
% ----
\begin{tabular}{rr|c|c|c|c|}
\multirow{4}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{WS}:}} &\multicolumn{4}{c}{Prediction Resolution} \\ \cline{3-6}
&	& base	&	no collisions	&	unlimited & path	\\ \cline{2-6}
& \multicolumn{1}{|r|}{base}	&	64.7906	&	25.171	&	0.0042	&	0 \\ \cline{2-6}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	9.8486	&	0.003	&	0 \\ \cline{2-6}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0.1826	&	0	\\ \cline{2-6}
& \multicolumn{1}{|r|}{path}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	\cellcolor{gray} 	& 0	\\ \cline{2-6}
\end{tabular}
% ----
\begin{tabular}{rr|c|c|c|c|}
\multirow{4}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{SERVER}:}} &\multicolumn{4}{c}{Prediction Resolution} \\ \cline{3-6}
&	& base	&	no collisions	&	unlimited & path	\\ \cline{2-6}
& \multicolumn{1}{|r|}{base}	&	88.913	&	7.1174	&	0.0246	&	0.0784	 \\ \cline{2-6}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	3.7198	&	0.0036	&	0.067 \\ \cline{2-6}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0.0246	&	0	\\ \cline{2-6}
& \multicolumn{1}{|r|}{path}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	\cellcolor{gray} 	& 0.0516	\\ \cline{2-6}
\end{tabular}
% ----
\begin{tabular}{rr|c|c|c|c|}
\multirow{4}{*}{\begin{sideways}\parbox{15mm}{Training\\Resolution}\end{sideways}}  & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{MM}:}} &\multicolumn{4}{c}{Prediction Resolution} \\ \cline{3-6}
&	& base	&	no collisions	&	unlimited & path	\\ \cline{2-6}
& \multicolumn{1}{|r|}{base}	&	73.8408	&	14.3438	&	0.0016	&	0.1652	 \\ \cline{2-6}
& \multicolumn{1}{|r|}{no collisions}	&	\cellcolor{gray}	&	11.3408	&	0.1326	&	0.1082 \\ \cline{2-6}
& \multicolumn{1}{|r|}{unlimited}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	0.0072	&	0	\\ \cline{2-6}
& \multicolumn{1}{|r|}{path}	&	\cellcolor{gray}	&	\cellcolor{gray}	&	\cellcolor{gray} 	& 0.0598	\\ \cline{2-6}
\end{tabular}

\caption{Best combination of correlations 
(shown as \% of total predictions made) -- GShare predictor.}
\label{fig:corr_set_gshare}
\end{figure*}


\begin{figure*}
\includegraphics[width=\textwidth]{corr_pro_local_MM}
\caption{Relevant correlation set characteristics for the Local predictor on the MM trace (63,515 correlation sets found out of a maximum of 262,144, using about 24.2\% of the predictor's total storage space).}
\label{fig:wtf_local}
\end{figure*}


\end{document}
