
Networks-on-Chip (NoCs) are envisioned to be a scalable communication substrate for building
multicore systems, which are expected to execute a large number of different applications and threads
concurrently to maximize system performance. The NoC is a critical shared resource among these
concurrently-executing applications, significantly affecting each application's performance, system
performance, and energy efficiency. Applications that share the NoC are likely to have diverse
characteristics and performance requirements, resulting in different performance demands from the
network. The design parameters and algorithms employed in the NoC critically affect the latency and
bandwidth provided to each application, thereby affecting the performance and efficiency of each
application's execution. Therefore, devising NoCs that can efficiently satisfy diverse
characteristics of different applications is likely to become increasingly important.

Traditionally, NoCs have been designed in a monolithic, one-size-fits-all manner, agnostic to the
needs of different access patterns and application characteristics. Two common solutions are to
design a single NoC for 1) common-case, or average-case, application behavior or 2) near-worst case
application behavior, by overprovisioning the design as much as possible to maximize network
bandwidth and to minimize network latency. However, applications have widely different demands from
the network, e.g., some require low latency, some high bandwidth, some both, and some neither. As a
result, both design choices are suboptimal in terms of either performance or efficiency. The
``average-case'' network design cannot provide good performance for applications that require more
than the supported bandwidth or benefit from lower latency. Both network designs, especially the
``overprovisioned'' design, are power- and energy-inefficient for applications that do not need the
provided high bandwidth or low latency. Hence, monolithic, one-size-fits-all NoC designs are either
low performance or energy-inefficient for different applications.

Ideally, we would like an NoC design that can provide just the right amount of bandwidth and latency
for an application such that the application's performance is maximized, while the system's energy
consumption is minimized. This can be achieved by giving each application its own NoC that is
dynamically customized to the application's bandwidth and latency requirements. Unfortunately, such
a design would not only be very costly in terms of die area, but would also require innovations to
dynamically change the network bandwidth and latency across a wide range. Instead, if we can
categorize applications into a {\em small} number of classes based on similarity in resource
requirements, and design multiple networks that can efficiently execute each class of applications,
then we can potentially have a cost-efficient network design that can adapt itself to application
requirements.

Building upon this insight, this paper proposes a new approach to designing an on-chip interconnect
that can satisfy the diverse performance requirements of applications in an energy efficient manner.
We observe that applications can generally be divided into two classes in terms of their requirements
from the network: bandwidth-sensitive and latency-sensitive. Two different NoC designs, one
customized for high bandwidth and the other for low latency, can satisfy the requirements of these
two classes in a more power-efficient manner than a single monolithic network. We therefore propose
designing two separate, heterogeneous networks on a chip, dynamically monitoring executing
applications' bandwidth and latency sensitivity, and steering each application's network packets to
the appropriate network based on whether the application is deemed bandwidth-sensitive or
latency-sensitive. We show that such a heterogeneous design can achieve better performance and
energy efficiency than average-case or overprovisioned one-size-fits-all NoC designs.

To this end, based on extensive application profiling, we first show that a high-bandwidth,
low-frequency network is best suited for bandwidth-sensitive applications, while a low-bandwidth but
high-frequency network is best for latency-sensitive applications. Next, to steer packets into the
appropriate sub-network, we identify each packet's sensitivity to network latency or bandwidth. To
do this, we propose a new packet classification scheme based on an application's intrinsic network
requirements: we use two metrics, an application's {\it network episode length} and {\it height}, to
dynamically identify its communication requirements (latency or bandwidth criticality).
Further, observing that not all applications are equally sensitive to latency or bandwidth, we
propose a fine-grained prioritization mechanism for applications within the bandwidth- and
latency-optimized sub-networks.
%Thus, our mechanism consists of first classifying an application as latency
%or bandwidth sensitive and then prioritizing each application's packet based on its criticality to
%improving system/application performance.
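The episode-based classification can be sketched in software as follows. This is a simplified illustration of the idea, not our hardware mechanism: the function name and threshold value are assumptions chosen for exposition.

```python
# Illustrative sketch (not the paper's hardware implementation): classify an
# application as latency- or bandwidth-sensitive from the "height" of its
# network episodes, i.e. the peak number of outstanding packets per episode.
# The threshold value is a hypothetical choice for illustration.

def classify(episode_heights, height_threshold=8):
    """episode_heights: peak outstanding packets observed in each episode."""
    avg_height = sum(episode_heights) / len(episode_heights)
    # Tall episodes (many packets in flight) mean performance is limited by
    # how fast the network drains bursts -> bandwidth-sensitive.
    # Shallow episodes mean the core stalls on a few critical packets ->
    # latency-sensitive.
    if avg_height > height_threshold:
        return "bandwidth-sensitive"
    return "latency-sensitive"
```

In hardware, the equivalent decision would be made with simple per-core counters that track outstanding network requests, so the steering logic adds negligible overhead.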

Evaluations on a 64-core 2D mesh architecture, considering 9 design alternatives with 36 diverse
applications, show that our proposed two-layer heterogeneous network architecture outperforms all
competing monolithic network designs in terms of system/application performance and
energy/energy-delay envelope.
%Further, our ranking/prioritization schemes are shown to outperform a
%state-of-the-art prioritization mechanism in NoCs.
Overall, the primary contributions of this work are the following:

\squishlist

\item We identify that a monolithic network design is sub-optimal when hosting applications with
diverse network demands. Going further, through extensive application-level profiling, we show
that applications can generally be divided into two classes in terms of their requirements from the
network: bandwidth-sensitive and latency-sensitive.

\item To exploit these two application classes, we propose a two-tier heterogeneous network
architecture suited to bandwidth- and latency-sensitive applications. To steer packets to the
appropriate network, we propose a novel dynamic mechanism that characterizes an application's
communication episodes using two metrics, called network episode length and height. Combined, these
two metrics help us classify applications as latency- or bandwidth-sensitive. Further, using these
two metrics, we classify applications into 9 sub-groups in order to identify an application's
criticality within the bandwidth-sensitive or latency-sensitive class. This fine-grained
classification enables a prioritization mechanism for applications within the bandwidth- and
latency-optimized sub-networks, further improving performance. This dynamic ranking/prioritization
scheme is shown to perform better than two recently proposed schemes.

%and latency tolerance inside each sub-network. To do this, we introduce two novel application level
%metrics, called network episode length and height, which can successfully capture an application's
%network demand and it's tolerance to latency-delay or bandwidth-unavailability.

\item We show that our two-layer NoC design, consisting of a 64b link-width latency-optimized
sub-network and a 256b link-width bandwidth-optimized sub-network, provides 5\%/3\%
(weighted/instruction) throughput improvement over an iso-resource (320b link-width) monolithic
network design, and consumes 47\% and 59\% less energy than the iso-resource and a high-bandwidth
(512b link-width) monolithic network, respectively. When compared to a widely used 256b link-width
monolithic network, our design provides XX\%/YY\% (weighted/instruction) throughput improvement
respectively.

\squishend
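As an illustration of the fine-grained sub-grouping in the second contribution, binning each of the two episode metrics into three levels yields the 9 sub-groups. The sketch below is for exposition only; the bin boundaries are hypothetical values, not those used in our evaluation.

```python
# Illustrative sketch of the 9-way sub-grouping: bin average episode length
# and average episode height into three levels each, then combine them.
# Bin boundaries are hypothetical, chosen only for illustration.

def bin3(value, low, high):
    """Map a metric value to one of three levels: 0 (low), 1 (mid), 2 (high)."""
    if value < low:
        return 0
    if value < high:
        return 1
    return 2

def subgroup(avg_length, avg_height,
             length_bounds=(50, 200), height_bounds=(2, 8)):
    """Combine the two binned metrics into a sub-group id in 0..8."""
    return 3 * bin3(avg_length, *length_bounds) + bin3(avg_height, *height_bounds)
```

A prioritization scheme could then rank packets within each sub-network by sub-group id, for example favoring applications with short, shallow episodes inside the latency-optimized sub-network.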
