

%Ideally, an on-chip interconnect should be able to handle the non-uniform traffic demand of
% different classes of applications.
%Usually, applications can be classified into two
%broad categories based on whether their performance is sensitive to network  bandwidth or latency.
%Thus, design of NoC have traditionally evolved around providing high bandwidth and low latency.

\ignore{ Existing NoCs are implicitly built on the paradigm that all applications have similar demand
from the network. Based on this, most NoC designs treat applications uniformly while being agnostic
to their actual communication requirements. In this section, we start by looking at two of the
fundamental parameters: network bandwidth and latency, that affect the performance of hosted
applications to the first order.}

%For example, not only there is little understanding about the performance implications of bandwidth
%and latency sensitive applications, but also there is no dynamic mechanism to classify applications
%so that they can be treated differently in a network. Thus, in this section, we analyze the
%performance implications of serval applications in terms of bandwidth and latency  sensitivity.

As mentioned above, existing NoC designs are implicitly built on the paradigm that all the hosted
applications place similar demands on the underlying network.
%Based on this paradigm, a single
%underlying interconnect architecture caters to all applications' network demand.
In this paper, we argue against this paradigm by observing how different packets (even within the
same application, but particularly across different applications) place vastly differing demands on
network resources, and how each individual network packet impacts application-level performance. In
this section, we present a few observations that highlight the intrinsic heterogeneity in network
demands across applications. Taken together, these observations motivate our application-aware NoC
design, described in Section~\ref{sec:design}. We start by looking at two fundamental parameters:
network channel bandwidth and latency.

\noindent{\bf Impact of channel bandwidth on performance scaling of applications}: Channel (link)
bandwidth is a critical design parameter that affects the latency, throughput and energy/power of
the entire network. Increasing the link bandwidth reduces packet serialization latency, but it also
adversely affects the router crossbar power envelope. To study the sensitivity of an application to
variations in link bandwidth, we perform a simple analysis. For this analysis, we use an 8x8 mesh
network and run 64 copies of the same application, one on each node of the
network\footnote{The network is wormhole switched, uses deterministic X-Y routing, and has 6 virtual
channels per physical channel with 5-flit-deep buffers. Each router in the network tile is connected
to a core, a private L1 cache, and a 1MB-per-core shared L2 cache (Table~\ref{table:sim_config}).
The network is clocked at the same frequency as the cores (2GHz). Table~\ref{table:benchmark} lists
the application details.}.

%\footnote{Table~\ref{table:benchmark} mention the application details; 6 applications are omitted in
%the plot to reduce clutter}.

Figure~\ref{fig:it-scaling} shows the results of this analysis for 30 of the 36 applications in
our benchmark suite (6 applications are omitted to reduce clutter in the plots). We analyze
scenarios in which we repeatedly double the link bandwidth, from 64b links up to 512b links
(annotated as BW-64b, BW-128b, BW-256b and BW-512b in the figure).
%We also study an additional scenario (annotated as BW-2x128b)
%where, we use two parallel sub-networks with \textit{each} sub-network having 128b links and half the
%buffering resource (in terms of virtual channels in the routers) as that a single 256b network.
In this figure, the applications are ordered on the X-axis by increasing L1MPKI (L1
misses per 1000 instructions), i.e., \verb"applu" has the lowest L1MPKI and \verb"mcf" the highest.
The Y-axis shows the average instruction throughput normalized to that of the 64b network.
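As a back-of-the-envelope illustration (not part of our simulation infrastructure), the
serialization-latency effect of widening the channel can be sketched as follows, assuming a
512-bit (one 64B cache line) data packet and a flit width equal to the link width; the packet
size is illustrative, not the simulator's exact packet format:

```python
# Sketch: serialization latency (in cycles) of one packet vs. link width.
# Assumes a 512-bit data packet and flit width == link width (one flit
# traverses the link per cycle); these numbers are illustrative only.
import math

def serialization_cycles(packet_bits, link_bits):
    """Cycles needed to push one packet through a link, one flit per cycle."""
    return math.ceil(packet_bits / link_bits)

PACKET_BITS = 512  # one 64B cache line

for width in (64, 128, 256, 512):
    print(f"BW-{width}b: {serialization_cycles(PACKET_BITS, width)} cycles")
```

Each doubling of the link width halves the serialization component of packet latency, which is
the first-order effect the BW-64b through BW-512b configurations exercise.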

\begin{figure*} [t]
\centering
 \psfig{figure=figures/ins_thpt_scaling.eps, width=6.25in, height=1.5in}
 %\hrule
 \caption{\label{fig:it-scaling} Instruction throughput (IT) scaling of
 applications with increase in network bandwidth.}
\end{figure*}

Our observations from this analysis are as follows: (1) Of the 30 applications shown, the
performance of 12 (the rightmost 12 in the figure, after \verb"swim")
scales with increasing channel bandwidth. For these applications, an 8x increase in bandwidth
yields at least a 2x increase in performance.
%In the figure, all applications to the right of \verb"swim"
%exhibit this scaling.
We call these \textit{bandwidth sensitive} applications. (2) The
remaining 18 applications (all applications up to and including \verb"swim") show little
to no performance improvement as network bandwidth increases.
%(3) Employing two parallel networks with each network having link
%width 128b, shows performance that is within 0.4\% of a single network with 256b bandwidth. This is
%in spite of the doubling of packet serialization latency with decrease in channel bandwidth. In fact,
%our analysis shows that, two parallel sub-networks with link width $\frac{N}{2}$ is always within
%0.5\% of that of a single network with link width $N$.
(3) Even among the bandwidth sensitive applications, performance does not scale equally with
bandwidth. For example, while \verb"omnet", \verb"gems" and \verb"mcf" show more than 5x
performance improvement for an 8x bandwidth increase, applications like \verb"xalan", \verb"soplex"
and \verb"cacts" show only a 3x improvement for the same increase. (4) L1MPKI is not necessarily
a good predictor of an application's bandwidth sensitivity. Intuitively, applications with high
L1MPKI inject more packets into the network and hence should demand more bandwidth from the
network. But this intuition does not entirely hold: \verb"bzip", despite having a
higher L1MPKI than \verb"xalan", is less sensitive to bandwidth than \verb"xalan". Thus, we need a
better metric to identify bandwidth sensitive applications.
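The classification rule implied above can be stated as a small predicate; a minimal sketch,
assuming per-application instruction throughput normalized to the 64b network is available. The
2x-at-8x threshold follows the text; the sample profiles below are hypothetical, not measured
values:

```python
# Sketch: an application is bandwidth sensitive if its instruction throughput
# (IT) at 8x the baseline bandwidth (512b vs. 64b) is at least 2x the baseline.
# The threshold follows the text; the sample profiles are hypothetical.

def is_bandwidth_sensitive(norm_it, threshold=2.0):
    """norm_it: dict mapping link width (bits) to IT normalized to the 64b net."""
    return norm_it[512] >= threshold * norm_it[64]

# Hypothetical normalized-IT profiles (illustrative, not measured):
scaling_app = {64: 1.0, 128: 1.9, 256: 3.5, 512: 5.6}
flat_app    = {64: 1.0, 128: 1.02, 256: 1.03, 512: 1.04}

print(is_bandwidth_sensitive(scaling_app))  # True
print(is_bandwidth_sensitive(flat_app))     # False
```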

\begin{figure*} [t]
\centering
 \psfig{figure=figures/latency-scaling.eps, width=6.25in, height=1.5in} %frequency-scaling.eps
 %\hrule
 \caption{\label{fig:frequency-scaling} Instruction throughput scaling of
 applications with increase in router latency.}
\end{figure*}


\noindent{\bf Impact of network latency on performance scaling of applications}: Next, we analyze the
impact of network/router latency on the instruction throughput of these applications. The router
pipeline depth critically affects the network throughput and also dictates the network frequency. To
study the latency sensitivity of applications, we add an extra pipeline latency of 2 and 4 cycles to
each router (in the form of dummy pipeline stages) on top of the baseline router's 2-cycle latency.
The cores and the network are clocked at 2GHz for this analysis as well. Increasing the number of
pipeline stages at each router {\it mimics} additional contention at the routers relative to the
baseline network.
%ASIT: does it also affect throughput?? 

%A packet's network latency consists of its serialization latency (when a long packet is broken down into 
%flits) and latency due to contention for network resources (e.g. links). When a packet faces contention
%(from other network flows), it is temporarily stalled at one of the routers along its path. 
%To {\it mimic} this contention, we add additional dummy pipeline stages to each router on top of the baseline
%router's 2-cycle latency.

%Network and router frequency have been advocated by few recent works~\cite{raft,peh:singlecycle,topology-hpca} to
%improve performance. By increasing the frequency of the routers in a network, packet latency can be
%reduced, while adversely affecting the energy envelope. To study the sensitivity of applications, we
%increased the frequency of the network from 2GHz to 4Ghz and 6Ghz, while keeping the core frequency at 2GHz. 

Figure~\ref{fig:frequency-scaling} shows the results of this analysis for a channel bandwidth of
128b (the observations hold for other channel bandwidths as well). Our observations are the
following: (1) Bandwidth sensitive applications are not very responsive to increases in
network/router latency: on average, a 3x increase in per-hop latency degrades their performance
(instruction throughput) by only 7\%, i.e., these applications easily tolerate an extra 4 cycles of
latency per router.
(2) On the other hand, all applications up to and including \verb"swim" suffer about 25\%
performance degradation when the router latency increases from 2 cycles to 6 cycles. These
applications are clearly very sensitive to network latency, and we call them \textit{latency
sensitive} applications.
(3) Further, L1MPKI is not a good indicator of latency sensitivity either (\verb"hmmer", despite
having a higher L1MPKI than \verb"h264", does not show a proportional performance improvement with
reduced router latency).
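This second classification can likewise be written as a simple predicate; a sketch, assuming
instruction throughput normalized to the 2-cycle-router baseline is measured at each router
latency. The 15\% cutoff is a hypothetical threshold placed between the roughly 7\% and 25\%
degradations observed above, and the sample profiles are illustrative:

```python
# Sketch: an application is latency sensitive if its performance degrades by
# more than a chosen fraction when per-hop router latency grows from 2 to 6
# cycles. The 15% cutoff is hypothetical, placed between the ~7% and ~25%
# degradations observed; the sample profiles are illustrative, not measured.

def is_latency_sensitive(it_by_latency, max_degradation=0.15):
    """it_by_latency: dict mapping router latency (cycles) to instruction
    throughput normalized to the 2-cycle baseline."""
    degradation = 1.0 - it_by_latency[6] / it_by_latency[2]
    return degradation > max_degradation

# Hypothetical normalized-IT profiles:
sensitive_app = {2: 1.0, 4: 0.88, 6: 0.75}   # ~25% drop
tolerant_app  = {2: 1.0, 4: 0.97, 6: 0.93}   # ~7% drop

print(is_latency_sensitive(sensitive_app))  # True
print(is_latency_sensitive(tolerant_app))   # False
```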

%%ASIT: Should this paragraph come later in the design of low latency networks or now??
To improve the performance of latency sensitive applications, a network architect can thus reduce
the router pipeline latency from 2 cycles to a single cycle while keeping the frequency constant,
or increase the network frequency (to reduce network latency). Although there are proposals that
advocate single-cycle routers~\cite{peh:singlecycle,Mullins:lowlatency}, their design is often
complex (it involves speculation, which can be ineffective at high or adverse load conditions) and
requires sophisticated arbiters. Hence, while single-cycle routers are certainly feasible, in this
paper we use frequency as a knob to reduce network latency. Our analysis shows that increasing the
network frequency from 2GHz to 6GHz leads to less than a 1.5\% increase in energy for the latency
sensitive applications (energy results with frequency scaling are omitted for brevity).
%%ASIT: discuss how increase in frequency also increases the net BW, but latency sensitive applications are not sensitive to this
%% increase. Further, we are only increasing the frequency of the 64b network, hence power envelope also does not increase much.
%% If we however, use this frequency knob for BW optimized network, then the power envelope will increase significantly (even at 128b link width, tripling the frequency significantly increases the power envelope, P is propn to f, 3x F increase, means 3x power increase).
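The effect of the frequency knob on per-hop delay, as seen from the cores' 2GHz clock domain, can
be sketched with simple arithmetic (the 2-cycle router pipeline and the frequency points are from
the analysis above; this is a back-of-the-envelope illustration, ignoring link and contention
delays):

```python
# Sketch: per-hop router delay in nanoseconds and in 2GHz core cycles, for a
# fixed 2-cycle router pipeline clocked at increasing network frequencies.
# Link traversal and contention delays are ignored in this illustration.

CORE_FREQ_GHZ = 2.0
ROUTER_PIPELINE_CYCLES = 2

for net_freq_ghz in (2.0, 4.0, 6.0):
    delay_ns = ROUTER_PIPELINE_CYCLES / net_freq_ghz
    core_cycles = delay_ns * CORE_FREQ_GHZ
    print(f"{net_freq_ghz:.0f} GHz network: {delay_ns:.2f} ns/hop "
          f"= {core_cycles:.2f} core cycles/hop")
```

Tripling the network frequency thus cuts the per-hop pipeline delay from 2 core cycles to
two-thirds of a core cycle, which is what the latency sensitive applications benefit from.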
 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \ignore{
\begin{figure*} [t]
\centering
 \psfig{figure=figures/energy-scaling.eps, width=6.25in, height=1.45in}
 %\hrule
 \caption{\label{fig:energy-scaling} Energy consumption of network with increase in
 network bandwidth across various applications.}
\end{figure*}

{\bf Energy consumption of network with increase/decrease in channel bandwidth}: Increasing the
channel bandwidth decreases the serialization (and zero-load) latency and hence, end-to-end latency
is reduced. However, as mentioned above, increasing the channel bandwidth also affects router
crossbar power. Figure~\ref{fig:energy-scaling} shows the network energy consumption as bandwidth is
scaled from 64b to 512b. We observe that: (1) The energy consumption of a 64b and a 256b network is
on an average just 16\% and 13\% higher than a 128b network. (2) The energy consumption of two
parallel 128b networks is, on an average, 13\% lower than a 256b single network. In fact, the energy
consumption of two parallel sub-networks, each with channel width $\frac{N}{2}$, is always lower than
a single network with channel width $N$. Additionally, a 128b network and two parallel 128b networks
have similar network energy consumption. (3) The energy consumption with 512b networks is on an
average 58\% higher compared to a 64b network.

This analysis suggests that, two parallel narrow 128b sub-networks have the same energy envelope as
that of a single 128b network, while having the performance of a wider 256b network. Since
performance of bandwidth sensitive applications increase with increase in channel bandwidth, given a
128b single network, it makes intuitive sense to increase the bandwidth of this network to 256b to
improve performance while staying within 13\% of the energy envelope of the 128b network.
 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent{\bf Application-level implications on network design}: The above analysis suggests that a
single monolithic network is not the best option for catering to the varied demands of applications.
An alternative approach to designing an on-chip interconnect, therefore, is to explore the
feasibility of multiple networks, each specialized for a common class of application requirements,
and to dynamically steer each application's requests to the network that matches its requirements.
Based on Figures~\ref{fig:it-scaling} and \ref{fig:frequency-scaling}, a wide, low-frequency network
is suitable for bandwidth sensitive applications, while a narrow, high-frequency network is best for
latency sensitive applications. However, we also need a mechanism to classify applications at
runtime into one of the two categories ({\it bandwidth} or {\it latency sensitive}) so that they can
be guided to the appropriate network. In addition, since not all applications are equally sensitive
to bandwidth or latency, we propose fine-grained prioritization of applications within the
bandwidth- and latency-optimized sub-networks, which further improves the overall
application/system performance.
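The overall scheme can be summarized as a small steering routine; a sketch, assuming
per-application sensitivity flags and a finer-grained sensitivity rank are produced by some
runtime classifier (the actual classification metric is developed later in the paper). All names
and values here are illustrative:

```python
# Sketch of the proposed two-network steering: each application is classified
# at runtime as bandwidth- or latency-sensitive and its traffic is injected
# into the matching sub-network; within a sub-network, applications are
# prioritized by how sensitive they are. All names/values are illustrative.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    latency_sensitive: bool   # flag produced by a runtime classifier
    sensitivity_rank: float   # finer-grained rank; higher = more sensitive

def steer(app):
    """Pick the sub-network matching the application's dominant demand."""
    return "narrow-high-frequency" if app.latency_sensitive else "wide-low-frequency"

apps = [App("appA", True, 0.9), App("appB", False, 0.8), App("appC", True, 0.2)]
for app in sorted(apps, key=lambda a: -a.sensitivity_rank):  # priority order
    print(f"{app.name} -> {steer(app)}")
```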
%We found that
%none of the existing metrics such as L1MPKI~\cite{stc} or slack~\cite{stc} is a good classifier of
%the applications when it comes to classifying applications as latency/bandwidth sensitive. In the
%next section, we look at the application communication characteristics at a finer granularity and
%propose two new metrics to drive us in designing a two-layer NoC.
