
%As stated above, our goal is to improve application-level throughput by steering requests belong to different
%application classes to a sub-network that is optimized to handle traffic for that particular
%application class and support intra-class ranking and prioritization of packets for better performance/energy
%tuning.
This section elaborates on the dynamic application classification scheme and proposes a mechanism for
intra-class ranking and prioritization of packets for better performance/energy tuning.
%application class identification and packet ranking.
%Additionally, since not all application belonging to a particular class would be
%equally sensitive to the optimization done in a sub-network, we plan to first rank, and then
%application class identification and packet ranking.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\ignore{

\noindent{\bf Identifying application class}: We propose a mechanism to dynamically classify
applications as either bandwidth sensitive or latency sensitive so that they can be steered to the
proper network. Ideally, this should be done as soon as packets are injected into a network so that
they are not routed to the wrong network. For this, we use two novel heuristics, {\it episode length}
and {\it episode height}, that capture the application's latency and bandwidth demands from a network.
We describe these heuristics in the next sub-section.
%Once determined, the application packets are then steered into a sub-network that is
%optimized either for latency sensitive application packets or bandwidth sensitive application
%packets.

\noindent{\bf Application packet ranking}: Application ranking within a sub-network is done to
further improve the application-level throughput. This ranking is based on the {\it latency
criticality} of a packet compared to other packets in the same sub-network. We identify this
criticality of a network packet belonging to an application using the same {\it episode length} and {\it
episode height} heuristics.
%that help to identify an application class.
A packet's rank is determined at the network interface (NI) just
before the packet is injected into a sub-network. The packet is then tagged with its rank and
individual routers in the network use this information to determine which packets are prioritized at
any given cycle. The ranking schemes, based on these heuristics, are described in Section~\ref{}.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\vspace{-2mm}
\subsection{Dynamic classification of applications}

The goal of identifying an application's sensitivity to latency/bandwidth is to enable the network
interface (NI) to inject or steer packets into a sub-network that has been optimized for either
latency or bandwidth. We propose two novel metrics, called {\it episode length} and {\it episode
height}, that effectively capture the latency and bandwidth demands of an application and help the NI
classify an application as either bandwidth or latency sensitive. We contrast the new metrics
against two heuristics (L1MPKI~\cite{stc} and Slack~\cite{aergia}), which were recently proposed to
estimate a packet's criticality in the network.
%These heuristics are computed at runtime and is used by the NI to steer packets to one of the two sub-networks.

\noindent{\bf Episode length and height:} During an application's life cycle, the application
alternates between two kinds of episodes (shown in Figure~\ref{fig:episodes}): (1) {\it network
episode}, where the application has at least one packet (to the L2 cache or to DRAM) in the network, and
(2) {\it compute episode}, where there are no outstanding cache/memory requests by the thread. During
a network episode, there may be multiple outstanding packets from the application in the network
owing to various techniques that exploit memory-level parallelism
(MLP)~\cite{Fields-2001,Srinivasan-1998,Samantika-2009}. During this episode, the processor is
most likely stalled, waiting for its L2 and memory requests to be serviced. Consequently, the
instruction throughput of the processor is low during this episode; during a compute episode,
in contrast, the instruction throughput is high. In this paper, we quantify a network episode by its
length and height. Length is the number of cycles the episode lasts, from when the first
packet is injected into the network until there are no more outstanding packets belonging to that
episode. Height is the average number of packets (L1 misses) injected by the application during the
network episode. To compute this average height, the processor hosting the application keeps track of
the number of outstanding L1 misses (when there is at least one L1 miss) in the re-order buffer on a
per-cycle basis. For example, if the episode lasts for 3 cycles and there are 2, 3 and 1 L1 misses in
each of those cycles, then the average episode height is $\frac{2+3+1}{3} = 2$.
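This per-cycle bookkeeping can be sketched as follows (a minimal illustration, not the authors' hardware mechanism; the function name is ours):

```python
# Sketch: deriving one network episode's length and average height from a
# per-cycle count of outstanding L1 misses, as described in the text.

def episode_stats(outstanding_per_cycle):
    """outstanding_per_cycle: number of outstanding L1 misses in each cycle
    of a network episode (at least one miss is outstanding throughout).
    Returns (length in cycles, average height in packets)."""
    length = len(outstanding_per_cycle)
    height = sum(outstanding_per_cycle) / length
    return length, height

# The example from the text: a 3-cycle episode with 2, 3 and 1 misses.
length, height = episode_stats([2, 3, 1])
# length = 3 cycles, height = (2 + 3 + 1) / 3 = 2 packets
```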

\begin{floatingfigure}[r]{3.1in}
\vspace{5.75mm}
%\begin{figure*} [t]
%\end{figure*}
    \centering
    \psfig{figure=figures/figure_episodes.eps, width=3.0in, height=1.25in}
    \caption{\label{fig:episodes} Network and compute episodes.}
\end{floatingfigure}

If an episode lasts for only a few cycles, intuitively all packets belonging to this
episode are very critical for the application to make progress. Any delay of packets belonging to
this short-lasting episode will delay the start of the following compute episode, and thus the
performance of the application will degrade. Hence, these packets are latency sensitive. On the other
hand, if an episode is long lasting, the application is most likely tolerant of the long episode
length, and delaying packets belonging to this episode will not degrade performance much.

If an episode's height is short, the application is likely to have low MLP in this
episode and hence, its requests are likely to be very critical for the application to make progress.
The packets belonging to such an episode are likely to be latency sensitive. On the other hand, if an
episode's height is high, then the application has a large number of requests in the network, and the
network latencies of all those packets are overlapped. A large number of packets in the network means
that the application most likely needs more bandwidth, but network latency is not very critical
for the application. Our analysis shows that these two heuristics are least affected by the system
state or network characteristics such as interference from other applications in the network.
Therefore, these two metrics provide an intuitive, easy-to-compute, accurate and stable
characterization of an application's network demand.

%Intuitively, episode height captures the MLP of an application
%Our analysis shows that episode length and episode height are sufficient to predict an application's
%type (i.e. latency vs. bandwidth sensitive) and also the criticality of an application within a
%sub-network to further help us in ranking.

\begin{figure*} [t]
\centering
 \psfig{figure=figures/l1-l2mpki-slack.eps, width=6.25in, height=1.475in}
 \caption{\label{fig:variation_1l_l2} L1MPKI, L2MPKI and slack in applications.}
\end{figure*}

\noindent{\bf Private cache misses per instruction (MPI):} This metric captures an application's
network intensity. If the network intensity is low, the application has low MLP and hence, its
requests are latency sensitive as opposed to bandwidth sensitive. Figure~\ref{fig:variation_1l_l2}
shows the L1MPKI and L2MPKI of several applications. We find that MPI (or MPKI) can help in
distinguishing latency sensitive applications from bandwidth sensitive ones. In
Figure~\ref{fig:variation_1l_l2}, all applications to the left of \verb"sjbb" have a lower MPKI than
\verb"sjbb". Since these applications are latency sensitive, empirically we can use an MPKI
threshold (equal to \verb"sjbb"'s MPKI) to classify applications as bandwidth or
latency sensitive. However, as mentioned earlier, this metric is not accurate in estimating the
criticality of applications {\it within} the latency sensitive or bandwidth sensitive class.
For instance, \verb"bzip", in spite of having a higher L1MPKI than \verb"xalan", is less sensitive to
bandwidth than \verb"xalan". Similarly, \verb"hmmer" and \verb"swim", in spite of having higher
L1MPKI than \verb"gobmk" and \verb"astar", do not show the proportional performance
improvement with increased bandwidth that the latter applications show.
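The MPKI computation and threshold test can be sketched as follows (the threshold value is a placeholder we assume for illustration; the paper takes \verb"sjbb"'s measured MPKI):

```python
# Sketch: MPKI (misses per kilo-instructions) and the simple threshold test
# the text describes. SJBB_MPKI_THRESHOLD is an assumed placeholder value,
# not sjbb's actual measurement.

SJBB_MPKI_THRESHOLD = 20.0  # assumed; would be sjbb's measured MPKI

def mpki(l1_misses, instructions):
    """L1 misses per thousand instructions."""
    return l1_misses * 1000.0 / instructions

def is_latency_sensitive_by_mpki(l1_misses, instructions):
    # Applications below the threshold are treated as latency sensitive.
    return mpki(l1_misses, instructions) < SJBB_MPKI_THRESHOLD

# e.g. 5,000 L1 misses over 1,000,000 instructions -> MPKI = 5.0
```

As the text notes, this coarse test separates the two classes but cannot rank applications within a class.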

\noindent{\bf Packet slack:} Slack, as a metric, was recently investigated~\cite{aergia} to identify
a packet's criticality in the network. We measure an instruction's slack from when it enters the
re-order buffer (ROB) to when the instruction actually becomes the oldest in the ROB and is ready to
commit. Figure~\ref{fig:variation_1l_l2} shows how slack varies across applications. Intuitively,
the slack of an L1-miss instruction directly translates to the instruction's criticality in the network.
Based on this, applications that have longer slack are more tolerant of network delays than
applications that have small or no slack. Unfortunately, slack does not capture the MLP
of an application and correlates poorly with the performance improvement obtained by increasing
bandwidth/frequency. Furthermore, slack is influenced by contention in the network and fluctuates
significantly.

%Hence, based on the above evaluations, we chose episode length and height as a metric to not only
%classify applications but also estimate a packets criticality in the network.

%Later in Section~\ref{}, we demonstrate
%the effectiveness of our proposed scheme.

\subsection{Analysis of episode length and height}

\begin{figure*} [t]
\begin{minipage}{1\textwidth}
\centering
 \psfig{figure=figures/avg_episode_length.eps, width=6.20in, height=1.325in}
 \caption{\label{fig:avg_episode_length} Average episode length (in cycles) across applications.}
%\end{figure*}
\end{minipage}
\\
\begin{minipage}{1\textwidth}
%\begin{figure*} [t]
\centering
 \psfig{figure=figures/avg_episode_height.eps, width=6.20in, height=1.325in}
 \caption{\label{fig:avg_episode_height} Average episode height (in packets) across applications.}
\end{minipage}
\end{figure*}

To avoid short-term fluctuations, we use running averages of the episode height and length to keep
track of these metrics at runtime. Further, we quantize episode height as {\it high}, {\it medium} or
{\it short}, and episode length as {\it long}, {\it medium} or {\it short}. This allows us to perform
a fine-grained application classification based on episode length and height, classifying applications
as either latency sensitive or bandwidth sensitive. Section~\ref{subsec:ranking} provides empirical data to
support such a classification scheme. Figures~\ref{fig:avg_episode_length} and
\ref{fig:avg_episode_height} show these metrics for 30 applications in our benchmark suite. Based on
Figures~\ref{fig:it-scaling} and \ref{fig:frequency-scaling}, we classify all applications whose
episode length and height are shorter than \verb"sjbb"'s episode length and height, respectively, as
short in length and height (shaded black in the figures). Applications whose average episode height is
larger than \verb"sjbb"'s but lower than 7 (empirically chosen) are classified as
medium (shaded blue in the figures) and the remaining as high (shaded with hatches in
Figure~\ref{fig:avg_episode_height}). Empirically, a cut-off of 10K cycles is chosen to classify
applications as having medium episode length.
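The runtime bookkeeping above can be sketched as follows. The \verb"sjbb" reference values and the smoothing factor are assumptions for illustration; the 7-packet and 10K-cycle cut-offs are the ones given in the text:

```python
# Sketch of the runtime tracking the text describes: a running average of
# each metric, quantized into three levels. SJBB_LENGTH and SJBB_HEIGHT are
# assumed placeholders for sjbb's measured averages; alpha is an assumed
# smoothing factor.

SJBB_LENGTH = 2000.0   # assumed: sjbb's average episode length (cycles)
SJBB_HEIGHT = 2.0      # assumed: sjbb's average episode height (packets)

def running_average(prev_avg, sample, alpha=0.25):
    # Exponentially weighted running average to damp short-term fluctuation.
    return (1 - alpha) * prev_avg + alpha * sample

def quantize_length(avg_length):
    # short < sjbb's length <= medium < 10K cycles <= long
    if avg_length < SJBB_LENGTH:
        return "short"
    return "medium" if avg_length < 10_000 else "long"

def quantize_height(avg_height):
    # short < sjbb's height <= medium < 7 packets <= high
    if avg_height < SJBB_HEIGHT:
        return "short"
    return "medium" if avg_height < 7 else "high"
```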

\begin{figure*} [t]
\centering
 \psfig{figure=figures/matrix-classification, width=6.25in, height=1.65in}
 \caption{\label{fig:classification-matrix} Application classification and ranking based on episode length and height.}
\end{figure*}

Figure~\ref{fig:classification-matrix} shows the classification of applications based on their
episode height and length. The figure also shows which applications are bandwidth sensitive and which
are latency sensitive under this classification. In general, we classify applications having
high episode height as bandwidth sensitive and those having short episode height as latency sensitive.

\subsection{Ranking of applications} \label{subsec:ranking}

We use the above fine-grained classification to rank applications for providing customized
prioritization in a network. Essentially, applications whose episodes last longer are
given the lowest priority in the network relative to other applications. Below, we discuss the steering and
ranking of a few application classes and our intuition behind doing so.

\noindent{\bf (1) Episode length is short and height is short}: Applications belonging to this
category have very low MPKI, and since their episodes last for a very short period, delaying any
packet is most likely to delay the start of the compute episode. This makes these applications
highly latency sensitive, and we rank them with the highest priority (rank 1).
%thus, our scheme steers packets belonging to these applications into the latency optimized
%sub-network and also ranks them the highest in this sub-network.

\noindent{\bf (2) Episode length is short and height is high}: These applications are bursty, but for a very
short period of time. Because of this burstiness, the packets' network latencies are overlapped and
hence, we classify these applications as bandwidth sensitive but rank them the highest in the
bandwidth optimized sub-network (owing to their criticality to network latency because of a very
short episode length).

\noindent{\bf (3) Episode length is long and height is short}: These applications are
still latency sensitive, but are relatively latency tolerant compared to applications having
medium/short episode length. So, these applications are prioritized the least (rank 4) in the latency
optimized sub-network.

\noindent{\bf (4) Episode length is long and height is high}: These applications are
the most bandwidth sensitive, and owing to their large episode height, they are the most
tolerant of network delay. Thus, these applications are classified as bandwidth sensitive and we
prioritize them the least in the bandwidth optimized sub-network.

Applications that do not belong to the above classes have latency or bandwidth sensitivity
that lies between these extremes, and are prioritized based on their relative tolerance of network
delays compared to others. Figure~\ref{fig:classification-matrix} shows the ranking of the applications
in their respective sub-networks.
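The steering and ranking decisions above can be sketched as a lookup over the quantized classes. Only the four corner cases are stated explicitly in the text; the remaining five entries below are our assumed interpolation, and in practice would come from the classification matrix in Figure~\ref{fig:classification-matrix}:

```python
# Sketch of steering and ranking from quantized (length, height) classes.
# The four corner entries follow the text; the intermediate ranks are
# assumed, not taken from the paper's classification matrix.

def steer(height_class):
    # High episode height -> bandwidth optimized sub-network; otherwise
    # the latency optimized sub-network.
    return "bandwidth" if height_class == "high" else "latency"

# (length_class, height_class) -> rank; 1 = highest priority.
RANK = {
    ("short",  "short"):  1,  # highly latency sensitive (case 1)
    ("short",  "high"):   1,  # bursty, top rank in BW sub-network (case 2)
    ("long",   "short"):  4,  # least priority in latency sub-network (case 3)
    ("long",   "high"):   4,  # least priority in BW sub-network (case 4)
    # Assumed intermediate ranks for the remaining five classes:
    ("short",  "medium"): 2,
    ("medium", "short"):  2,
    ("medium", "medium"): 3,
    ("medium", "high"):   2,
    ("long",   "medium"): 4,
}

def classify(length_class, height_class):
    return steer(height_class), RANK[(length_class, height_class)]
```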

We made two critical decisions in our classification: (1) choosing \verb"sjbb"'s episode length and
height as the thresholds for short-lasting episodes and episodes with small heights, and (2) choosing
9 smaller sub-classes after classifying the applications as bandwidth or latency sensitive. We next
outline the empirical results that led us to these decisions.


\begin{figure*} %[t]
\begin{minipage}{0.680\textwidth}
\centering
 \psfig{figure=figures/hierarchical_clustering.eps, width=1\textwidth, height=1.5in}
 \caption{\label{fig:hierarchical_cluster} Hierarchical clustering of applications. The input
 to the clustering algorithm consists of improvement in IPC with bandwidth scaling (from 64b to 512b)
 and improvement in IPC with frequency scaling (2GHz to 6GHz).}
\end{minipage}
\hfill %\vfill
\begin{minipage}{0.28\textwidth}
 \centering
 \psfig{figure=figures/num_cluster.eps, width=1\textwidth, height=1.5in}
 \caption{\label{fig:num_cluster} Reduction in within group sum of squares with increase in number of clusters.}
\end{minipage}
\end{figure*}


\noindent{\bf Rationale for our classification}: Figure~\ref{fig:hierarchical_cluster} shows the
results of hierarchically clustering all the applications in our benchmark suite. Hierarchical
clustering incrementally groups objects that are similar, i.e., objects that are close to each other
in terms of some distance metric. In our case, the input to the clustering algorithm consists of the
improvement in IPC with bandwidth scaling (from 64b to 512b) and the improvement in IPC with frequency
scaling (from 2GHz to 6GHz), i.e., the values from Figures~\ref{fig:it-scaling} and
\ref{fig:frequency-scaling}. The goal is to test whether a clustering algorithm
perceives a noticeable difference between applications' performance under frequency and bandwidth
scaling. We tried various linkage distance metrics, such as Euclidean distance, Pearson correlation and
average distance between the objects, and in all cases the clustering was consistent with that shown
in Figure~\ref{fig:hierarchical_cluster} (shown for Euclidean distance). Although the eventual
hierarchical cluster memberships differ from those in our classification matrix, the
broader grouping of applications into bandwidth and latency sensitive clusters matches our
classification scheme, based on episode height and length, exactly
(with the exception of \verb"sjeng"). The reason for \verb"sjeng"'s misclassification is
that its performance does not scale with bandwidth and hence, hierarchical clustering classifies
it as a latency sensitive application. However, \verb"sjeng"'s episodes have a high height but
a short length on average, meaning it is very bursty (and hence has high MLP) during small
intervals of time. Because of this, we classify it as the highest ranked application in the bandwidth
optimized sub-network.
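A minimal version of this clustering step can be sketched as follows. This is a plain single-linkage agglomerative pass in pure Python (the paper's clustering would typically use standard statistical tools), and the four sample points are illustrative, not the paper's measurements:

```python
# Sketch: single-linkage agglomerative clustering of applications, where
# each point is (IPC gain with bandwidth scaling, IPC gain with frequency
# scaling). Data below is illustrative only.

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def single_linkage(points, k):
    """Repeatedly merge the two closest clusters until only k remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between closest members.
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Two apps gaining little from bandwidth (latency sensitive) and two
# gaining a lot (bandwidth sensitive) fall into separate clusters:
pts = [(0.05, 0.40), (0.08, 0.35), (0.90, 0.10), (0.85, 0.15)]
groups = single_linkage(pts, 2)
```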


\noindent{\bf Why 9 sub-classes?} To answer this question, we measure the total within-group
sum-of-squares (WG-SS) of the clusters resulting from hierarchical clustering.
Figure~\ref{fig:num_cluster} shows this metric as the number of clusters increases. The total WG-SS is
a measure of the total dispersion within individual clusters and is often regarded as a metric for
deciding the optimal number of clusters from a hierarchical or K-means
algorithm~\cite{wg-ss-1,wg-ss-2}. When all objects are grouped into one cluster, the total
WG-SS is maximal, whereas if each object forms its own cluster, the WG-SS is minimal
(=0). Figure~\ref{fig:num_cluster} suggests that 8 and 9 clusters have similar WG-SS, and that
either reduces the total WG-SS by 13x compared to a single cluster. Based on this, we chose 9
classes for our application classification and hence sub-divided episode height and length into
three quantitative classes each.
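The WG-SS computation itself is straightforward; a sketch (with illustrative points, not the paper's data):

```python
# Sketch: total within-group sum-of-squares (WG-SS). For each cluster, sum
# the squared Euclidean distances of its members to the cluster centroid,
# then total over all clusters.

def wg_ss(points, clusters):
    total = 0.0
    for members in clusters:
        cx = sum(points[i][0] for i in members) / len(members)
        cy = sum(points[i][1] for i in members) / len(members)
        for i in members:
            total += (points[i][0] - cx) ** 2 + (points[i][1] - cy) ** 2
    return total

pts = [(0.0, 0.0), (0.0, 2.0), (4.0, 0.0), (4.0, 2.0)]
# One all-inclusive cluster maximizes WG-SS; singleton clusters give 0:
# wg_ss(pts, [[0, 1, 2, 3]]) = 20.0, wg_ss(pts, [[0, 1], [2, 3]]) = 4.0
```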


%\begin{figure*} [t]
%\centering
% \psfig{figure=figures/variation_netlat_cop.eps, width=6.25in, height=1.75in}
% \hrule
% \caption{\label{fig:variation_netlat_cop} Network latency and average number of
% current-outstanding network packets (COP) in applications.}
%\end{figure*}

%\begin{figure*} [t]
%\centering
% \psfig{figure=figures/variation_slack.eps, width=6.25in, height=1.75in}
% \hrule
% \caption{\label{fig:variation_slack} Variation in {\it slack} across applications.}
%\end{figure*}
