%results -IT, WS
%comparison with STC
%Energy
%sensitivity - thresholds, load balancing

\noindent{\bf Performance comparison}: Figure~\ref{fig:WS_IT_res} shows the performance comparison
across the various network designs.
% when the load in the network consists of 50\%/50\% latency/bandwidth
%sensitive applications (all results are averaged across {\it 25 workload combinations}).
The following observations are in order:

\begin{figure*} [t]
\centering
\begin{tabular}{cc}
\centering
 \psfig{figure=figures/WS.eps, width=3.1in, height=1.9in} &
 \psfig{figure=figures/IT.eps, width=3.1in, height=1.9in} \\
 \scriptsize (a) Weighted speedup (WS) (system throughput) & \scriptsize (b) Instruction throughput
 (IT) (application throughput)
\end{tabular}
 \caption{\label{fig:WS_IT_res} Performance comparison across various network designs
 with multiprogram mixes.}
\end{figure*}

\squishlist

\item Two 128b sub-networks (2N-128x128) provide performance (both system and application throughput)
similar to a bandwidth-equivalent monolithic network with 256b links (1N-256), in spite of the
increased packet serialization in the sub-networks. The sub-networks compensate by reducing
congestion relative to the wider monolithic network.

\item Bandwidth- and latency-optimized parallel sub-networks operating at the same frequency as the processor, along
with steering of packets based on their bandwidth/latency sensitivity (2N-64x256-ST), provide
18.3\%/16.9\% (system/application) throughput improvement, respectively, over the baseline (1N-128)
design. By giving bandwidth sensitive applications more bandwidth and reducing congestion relative
to a monolithic network, the performance of both bandwidth and latency sensitive applications is
improved. Prioritizing and ranking packets based on their criticality after steering them into a
sub-network (2N-64x256-ST+RK(no FS)) provides an additional 7\%/3\% improvement in
system/application throughput, respectively, over the 2N-64x256-ST design. This is because our
ranking scheme prioritizes the (relatively) more network-sensitive applications in each sub-network,
and ensures, using batching, that there is no starvation.

\item Frequency scaling the latency-optimized sub-network along with steering and ranking the
applications (2N-64x256-ST+RK(FS)) provides the maximum performance improvement among our proposals
(34\%/24\% (system/application) throughput improvement) over the baseline network. With frequency
scaling, the latency-optimized sub-network is clocked at a higher frequency, accelerating the latency
sensitive packets; this brings an additional 4.4\% overall improvement in application throughput.

\item Frequency scaling the sub-networks, along with steering and ranking of applications (2N-64x256-ST+RK(FS)),
is better than an iso-resource network (1N-320(no FS)) by 5\%/3\% in weighted/instruction throughput. The
performance of 2N-64x256-ST+RK(FS) is within 2.0\%/2.2\% (system/application throughput) of the
high-frequency iso-resource network with frequency increased by 3x (1N-320(FS)). Frequency scaling the
320b link width network helps latency sensitive applications, and the extra bandwidth (compared to
256b links) helps the bandwidth sensitive applications. But, as will be shown shortly, the energy
consumption of such a network is higher than that of our proposal.

\item Our proposed network design (2N-64x256-ST+RK(FS)) achieves system performance within 1.8\% of a very high
bandwidth network (1N-512). A high bandwidth network helps bandwidth sensitive applications, but
provides little benefit for latency sensitive applications. Additionally, as will be shown next, a
wide-channel network's energy consumption is very high (about 75\% higher than a 128b link width
network). Hence, although our proposed network provides similar performance to a high bandwidth
network, it does so at a lower energy envelope.

\squishend
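The steering stage evaluated above can be illustrated with a minimal sketch. The episode-height threshold, function names, and sub-network labels below are illustrative placeholders, not the exact values or identifiers used in our evaluation.

```python
# Sketch of the steering stage in the 2N-64x256-ST design: each application
# is classified as bandwidth- or latency-sensitive, and its packets are
# injected into the matching sub-network. The threshold of 10.0 is an
# illustrative placeholder, not a value from our evaluation.

LATENCY_SUBNET = "64b"     # narrow, latency-optimized (frequency-scaled) sub-network
BANDWIDTH_SUBNET = "256b"  # wide, bandwidth-optimized sub-network

def classify(episode_height: float, threshold: float = 10.0) -> str:
    """Applications with tall injection episodes (high sustained demand)
    are treated as bandwidth sensitive; the rest as latency sensitive."""
    return "bandwidth" if episode_height > threshold else "latency"

def steer(episode_height: float) -> str:
    """Choose the sub-network into which an application's packets are injected."""
    if classify(episode_height) == "bandwidth":
        return BANDWIDTH_SUBNET
    return LATENCY_SUBNET
```

Under this sketch, a bursty application (episode height 50) is steered to the 256b sub-network, while a low-demand application (episode height 2) takes the fast 64b sub-network.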

\begin{figure*} [t]
\centering
\begin{tabular}{cc}
\centering
 \psfig{figure=figures/Energy.eps, width=3.1in, height=1.9in} &
 \psfig{figure=figures/EDP.eps, width=3.1in, height=1.9in} \\
 \scriptsize (a) Energy consumption in the network & \scriptsize (b) Energy-delay product (EDP) of
 applications
\end{tabular}
 \caption{\label{fig:Energy_EDP_res} Energy and EDP
 comparison across various network designs (all results normalized to 128 net.).}
\end{figure*}


\noindent{\bf Energy and EDP comparison}: Increasing the channel bandwidth decreases the
serialization (and zero-load) latency and hence, end-to-end latency is reduced. However, increasing
the channel bandwidth also affects router crossbar power. Figure~\ref{fig:Energy_EDP_res} shows the
energy and energy-delay product (EDP) of the applications across the 9 designs. We find that:

\squishlist

\item The average energy consumption of a 256b link network (1N-256) is 38\% higher than a 128b link
network (1N-128). However, the two 128b sub-networks design (2N-128x128) has similar energy
consumption as a single 128b link monolithic network. The energy reduction going from one network to
two sub-networks comes primarily from reduction in network latency (by reducing the congestion in
each sub-network). In fact, we observed that the energy consumption of two parallel sub-networks,
each with channel width $\frac{N}{2}$, is always lower than a single network with channel width $N$.

\item The average energy consumption of a high bandwidth network with 512b links (1N-512) is
75\% higher than that of a 128b link network. As link width increases, although serialization
latency reduces, crossbar power starts to dominate the energy budget, and thus the overall energy
consumption increases.

\item Steering packets along with application prioritization in the
routers (2N-64x256-ST+RK(no FS)) reduces energy consumption by 6.7\% when compared to just steering
packets (2N-64x256-ST). Amongst our proposed designs, steering along with ranking in frequency scaled
sub-networks (2N-64x256-ST+RK(FS)) consumes only 16\% more energy than the baseline 1N-128 network.
This is 59\% lower energy when compared to a high-bandwidth network (1N-512) and 47\% lower energy
than an iso-resource network which is frequency scaled (1N-320(FS)). Overall, our proposed scheme,
consisting of heterogeneous parallel sub-networks, always consumes less energy than a
high-bandwidth network (1N-512) or an iso-resource 320b link width network.

%Further, when compared to the baseline design, our proposed schemes employ a 256b router and because
%of this the network power consumption increases. However, this is somewhat offset by the reduction
%in network latency with our schemes.

\item On the EDP metric, the steering along with ranking in frequency
scaled sub-networks (2N-64x256-ST+RK(FS)) design is 19\% better than the baseline design. This is
because our scheme reduces network latency significantly, which lowers the delay component of EDP.
Even without frequency scaling, the 2N-64x256-ST+RK(no FS) design has 3\% lower EDP than the
baseline design. Again, our proposed schemes always have lower EDP than a high-bandwidth network
(1N-512) or an iso-resource 320b link network.

\noindent{\bf Reply packets from L2 cache (DRAM) to L1 cache (L2 cache)}: In all the above
evaluations, we routed the L2 cache (DRAM) replies to the L1 cache (L2 cache) over either the 64b or
the 256b sub-network depending on where the request packet traversed the network: if the request
packet was bandwidth sensitive, the matching reply is sent on the 256b sub-network, and vice-versa.
Reply packets are L1/L2 cache line sized packets (1024b), and transmitting them over the 64b network
increases their serialization latency. However, the 64b sub-network is relatively less congested than
the 256b sub-network (because of the lower injection ratio of latency sensitive applications),
and since the 64b sub-network is clocked at 3x frequency, the network latency in this sub-network is
lower. Our analysis shows that transmitting {\it all} the reply packets in the 256b network
increases the system/application throughput by an additional 1.6\%/2.4\% and reduces energy
consumption by an additional 4\% when compared to the baseline 1N-128 network. Also, since coherence
packets are latency sensitive, we always route them in the 64b high frequency sub-network.

%In conclusion, we find that having two separate networks (each customized either for latency or for
%bandwidth), is beneficial from both system and application performance perspective while consuming
%minimally higher energy when compared to a monolithic network.

%Moreover, a designer should be careful
%in not choosing a very high-bandwidth network or a very narrow bandwidth network that operates at a
%high frequency as his design point.

%Thus, compared to a 128b link monolithic network operating at 2GHz, we find a 64b
%sub-network operating at 6GHz and a 256b sub-network operating at 2GHz is an optimal design from
%performance, energy and EDP perspective.

\squishend
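The reply-routing rule above, together with the serialization cost that motivates it, can be sketched as follows. This is a minimal illustration; the function names and sub-network labels are our own, not identifiers from the implementation.

```python
from math import ceil

def serialization_cycles(packet_bits: int, link_width_bits: int) -> int:
    """Cycles needed to push one packet across a channel: a 1024b cache-line
    reply takes 16 cycles on 64b links but only 4 cycles on 256b links."""
    return ceil(packet_bits / link_width_bits)

def route_reply(request_subnet: str, is_coherence: bool) -> str:
    """A reply follows the sub-network of its request, except coherence
    packets, which are latency sensitive and always take the fast 64b
    sub-network."""
    if is_coherence:
        return "64b"
    return request_subnet
```

The serialization gap (16 vs. 4 cycles for a cache-line reply) is what the 64b sub-network's lower congestion and 3x clock must overcome for matched replies to be worthwhile.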

%\subsection{Comparison with prior works} \label{subsec:prior_work_comp}





\begin{figure*} %[t]
\begin{minipage}{0.59\textwidth}
\centering
 \psfig{figure=figures/WS_IT_prior_work_comp.eps, width=2.85in, height=1.85in}
 \caption{\label{fig:WS_IT_HS_prior_work_comp} Weighted speedup (WS) and instruction throughput (IT)
 when compared to state-of-the art design (all results normalized to 1N-128 net.).}
\end{minipage}
\hfill
\begin{minipage}{0.39\textwidth}
\centering
 \psfig{figure=figures/var_load.eps, width=0.90\textwidth, height=1.85in}
 \caption{\label{fig:var_load_res} Performance comparison when varying proportion of
 bandwidth/latency intensive applications in each workload.}
\end{minipage}
\end{figure*}

\noindent{\bf Comparison with prior works} \label{subsec:prior_work_comp}: A previous work by Das et
al.~\cite{stc} proposed a ranking framework, called STC, based on the criticality of packets in the
network. The authors use L1MPKI as a heuristic to estimate the criticality of a
packet and, based on this, propose a ranking framework which ranks applications with lower L1MPKI over
applications with higher L1MPKI. In their work, a central decision logic periodically gathers
information from each node, determines a global application ranking and batch boundaries, and
communicates this information to each node.
%Each node then
%prioritizes packets belonging to the oldest batch and ranks within a batch.
Apart from performance benefits, the authors also show that STC is better in terms of fairness when
compared to the round-robin arbitration often employed in routers. Since we also prioritize
applications in the network, we compare our scheme with STC below. When comparing with STC for a
single network design, our technique uses a 2-level ranking scheme. The first-level
ranking prioritizes latency sensitive applications over bandwidth sensitive applications, and then,
among the latency and bandwidth sensitive applications, we use episode width and height to rank the
applications (based on the ranking in Figure~\ref{fig:classification-matrix}).
% i.e. an
%application with shorter episode width and height is ranked higher than an application that is bursty
%(high episode height) and has long episode width.
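The 2-level ranking can be expressed as a sort key, sketched below. The field names and the use of a Python dataclass are illustrative; the actual hardware ranking logic is described in the text.

```python
# Sketch of the 2-level ranking used when comparing with STC in a single
# network: latency sensitive applications rank above bandwidth sensitive
# ones, and within each class, applications with shorter episode width and
# lower episode height rank higher.

from dataclasses import dataclass

@dataclass
class App:
    name: str
    latency_sensitive: bool
    episode_width: float   # episode length in cycles
    episode_height: float  # injection intensity within an episode

def rank_key(app: App):
    # Ascending sort: latency sensitive first (False sorts before True),
    # then shorter episodes, then lower episode height.
    return (not app.latency_sensitive, app.episode_width, app.episode_height)

def rank(apps):
    """Return applications ordered from highest to lowest priority."""
    return sorted(apps, key=rank_key)
```

For example, a short-episode latency sensitive application outranks a long-episode one with the same episode height, which is exactly the {\it time} factor that L1MPKI alone cannot capture.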

Another recent work by Balfour and Dally~\cite{dally-cmesh} showed the effectiveness of
distributing traffic load {\it equally} over two parallel sub-networks. In their work, each of the
sub-networks is a concentrated mesh with similar bandwidth. With detailed layout/area analysis, the
authors found that a second network has no impact on the chip area since the additional routers can
reside in areas initially allocated for wider channels in the first network. Since we also propose
parallel sub-networks (although our design shows heterogeneous networks are better than homogeneous
ones), we compare our scheme with a load-balancing scheme similar to that proposed by Balfour and
Dally~\cite{dally-cmesh}.

Figure~\ref{fig:WS_IT_HS_prior_work_comp} shows the results, where we compare the performance and
fairness of our schemes with the two prior proposals mentioned above. All numbers in these plots are
normalized to those of a 128b link network with no prioritization (i.e., our baseline network, 1N-128).
The STC schemes are annotated as {\bf -STC} with a given network design and the load-balancing
schemes are annotated as {\bf -LD-BAL} in the figures. The overall performance improvement with STC
is 6\%/3\% (system/application) in a single 128b link monolithic network when compared to 1N-128.
Compared to this, our 2-level ranking scheme shows 11\%/8\% (system/application) throughput
improvement over the 1N-128 design. STC uses L1MPKI to decide rankings, but, as shown earlier,
L1MPKI is not a very strong metric for deciding the latency/bandwidth criticality of applications.
Moreover, when using L1MPKI, STC does not take into account the {\it time} factor, i.e., for how many
cycles an application sustains this L1MPKI. Our proposed episode length captures this factor and
hence can differentiate between two applications having similar episode height (L1MPKI in the
context of STC). Based on this, our design ranks an application with shorter episode
length higher than an application with longer episode length, thereby capturing the {\it true}
criticality of its packets. Even when comparing our scheme (2N-64x256-ST+RK(FS)) with STC
in a two parallel network design (2N-64x256-ST-STC) (where applications are first steered into the
appropriate network and then ranked using STC), we see an additional 12\%/5\% (system/application)
benefit over the STC based design. Moreover, in terms of fairness (harmonic speedup results omitted
for brevity), our scheme is 4\% and 2\% better than STC in a single and multiple parallel network
design, respectively. Further, in our scheme the rankings are determined dynamically when a packet
enters each sub-network, and there is no need for a dynamic coordination scheme to decide
rankings as required by STC.
%So, our scheme has not only lower overhead but is also better
%in both performance and fairness compared to STC.

Since we propose heterogeneous sub-networks, when load balancing between the two sub-networks we steer
packets in the weighted ratio of $\frac{256}{256+64}$ and $\frac{64}{256+64}$ between the 256b and
the 64b sub-network, respectively. This scheme is annotated as {\bf -W-LD-BAL} in
Figure~\ref{fig:WS_IT_HS_prior_work_comp}. Our evaluations show that steering packets with equal
probability into each network leads to more congestion in the 64b link sub-network and under-utilizes
the 256b sub-network. We find that our proposal (2N-64x256-ST+RK(FS)) has an additional 18\%/10\%
(system/application) throughput improvement over the weighted load-balancing scheme
(2N-64x256-W-LD-BAL). The load-balancing scheme is oblivious to the sensitivity or criticality of
packets. With this scheme, a latency sensitive packet is steered into the bandwidth optimized network
with a probability of 0.8 and a bandwidth sensitive packet is steered into the latency optimized
network with a probability of 0.2; in the former case performance does not improve, and in the latter
it degrades. Further, with weighted load-balancing there is negligible improvement in fairness,
whereas in our scheme the fairness of the system improves by 19\% over the baseline network.
Overall, we believe that with heterogeneous sub-networks, load balancing is a sub-optimal scheme,
and that intelligently steering packets based on their sensitivity and criticality can lead to
significant performance benefits.
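The oblivious weighted load balancing we compare against can be sketched in a few lines; the function name is our own, and the behavior shown is the bandwidth-proportional random steering described above, not our sensitivity-based scheme.

```python
# Sketch of the -W-LD-BAL comparison point: every packet is steered at
# random in proportion to sub-network bandwidth, 256/(256+64) = 0.8 to the
# 256b sub-network and 64/(256+64) = 0.2 to the 64b one, regardless of the
# packet's latency/bandwidth sensitivity.

import random

def weighted_steer(rng: random.Random) -> str:
    """Bandwidth-proportional random steering between the two sub-networks."""
    return "256b" if rng.random() < 256 / (256 + 64) else "64b"
```

Over many packets roughly 80\% land on the 256b sub-network, which is why, under this scheme, a latency sensitive packet reaches the slower, wide network with probability 0.8.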


\subsection{Sensitivity to distribution of bandwidth-latency applications in the workload}
\label{subsec:sensitivity}


All results shown till now used a multiprogram mix with equal percentages of latency and bandwidth
sensitive applications. To analyze the sensitivity of our scheme across various application mixes, we
varied the fraction of bandwidth sensitive applications in a workload across 0\%, 25\%, 50\%, 75\%,
and 100\%. Figure~\ref{fig:var_load_res} shows the results of this analysis. We find that our
proposal, in general, has higher system/application throughput across the entire spectrum of workload
mixes. However, the benefits are small (4\%/9\% system/application throughput improvement over the
baseline) when the system has 100\% latency sensitive applications. When the application mix is
skewed (i.e., the system has {\it only} bandwidth {\it or} latency sensitive applications), we have
assumed oracle knowledge and weighted-load balanced both sub-networks. As such, with 100\% latency
sensitive applications in the workload mix, benefits arise only from load distribution and are
minimal in this case. Without this load balancing, the benefits of our proposal would come only from
ranking. We are currently working on a scheme that can dynamically detect this skew (by measuring
that a particular sub-network is over-provisioned) and then steer packets to the second
sub-network.

\subsection{Sensitivity to frequency scaling}

\begin{figure*} [t]
\centering
 \psfig{figure=figures/sensitivity_frequency.eps, width=1.5in, height=1.5in}
 \caption{\label{fig:sensitivity_frequency} Sensitivity of performance and energy to frequency scaling.}
\end{figure*}

All results shown till now clocked the latency customized network at 4.5GHz (3x the processor
frequency). In this sub-section, we analyze how performance and energy vary when the
latency customized sub-network is clocked at a lower frequency of 3GHz (2x the processor frequency). We
do not analyze the case where this network is clocked at a higher frequency, since doing so would
increase the absolute power envelope of the sub-network. When the latency customized network is
clocked at 2x the processor frequency, performance degrades by 4.5\%/1.5\% (system/application
throughput) when compared to clocking this network at 3x the processor frequency. Because of this
performance loss, the energy consumption of the network increases by 9\% when compared to clocking it
at 4.5GHz.
