\section{Methodology}\label{sec:methodology}

%In this section, we describe our simulation and design methodology.

%\subsection{System Configuration}

\textbf{System Configuration:} Our baseline configuration is a
36-core in-order processor using the UltraSPARC III ISA. We use
McPAT~\cite{mcpat}, an integrated power, area, and timing modeling
framework, to estimate the area of the cores in 45nm technology; the
area of one core is estimated to be 6.8$mm^{2}$. Using
CACTI~\cite{3D:CACTI}, we further find that one cache layer can
accommodate approximately 36MB of SRAM L2 cache, assuming the cache
layer has a similar area to the core layer. The configurations are
detailed in Table~\ref{tab:baseline}. We use the Simics
toolset~\cite{simics} for performance simulations. We also evaluate
a 9-core processor with a 9MB L2 cache to compare the performance of
different network topologies; its parameters are described in
Section~\ref{sec:result}.


%\begin{wraptable}{r}{0.6\textwidth}
\begin{table}[htb]
\scriptsize \vspace{-0.5cm}
\begin{center}
\caption{System configuration.} \label{tab:baseline}
%\setlength{\tabcolsep}{0.2mm}
\begin{tabular}{||c|c||} \hline
\hline  Processor & 36-core, in order, 2GHz \\

%\hline  Number & 36 \\

%\hline  Frequency & 4GHz \\

%\hline  Issue width & 8 \\

\hline  L1 & 32KB DL1/IL1 per core, 128B lines, 2-way, 2 cycles\\

\hline  L2 & 36MB shared cache, 1MB per bank, 8-way, 10 cycles\\

\hline  Memory & 400-cycle latency, 16MB large page\\

\hline  Router lat. & 5 cycles\\

\hline\hline

\end{tabular}
\vspace{-0.5cm}
\end{center}
\end{table}
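Combining the router and bank latencies in Table~\ref{tab:baseline},
the latency of an L2 access over the mesh NoC can be sketched as
follows (assuming minimal Manhattan routing, with $h$ the hop count
between the requesting core's router and the destination bank):
\begin{equation}
L_{access} = 10\ \mathrm{cycles} + 5\ \mathrm{cycles} \times h, \qquad
h = |x_s - x_d| + |y_s - y_d|.
\end{equation}
For example, a corner core accessing the opposite-corner bank of the
$6 \times 6$ mesh incurs $h = 10$ hops, i.e., $10 + 5 \times 10 = 60$
cycles in total.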
%\end{wraptable}

%The configurations of caches are listed as in the following table.
%Note that the number of banks are always 36 for both 36-core and
%9-core CMPs. The L2 caches are NOC-based NUCA. There is one router
%connected to each L2 cache bank, and the 36 L2 cache banks are
%connected with routers as a mesh structure.
%\begin{table}[h]
%\begin{tabular}{|c|c|c|c|c|c|c|}
%\hline
%L1 D Caches & Private & 32KB/per core & 36 banks &2 way  & 2-cycle-latency & write through\\
%\hline
%L1 I Caches & Private & 32KB/per core & 36 banks & 2 way  & 2-cycle-latency & write through\\
%\hline
%L2 Caches & Shared & 1MB*(Core num) & 36 banks & 8 way  & 10-cycle-latency & write back\\
%\hline
%\end{tabular}
%\caption{Cache configurations.}
%\end{table}

%We assume that each core is stacked on top a cache bank, and the
%core is also connected to the router of the corresponding cache bank
%with TSB. For the baseline~(fine) 3D configuration, the core can
%directly access the cache bank just beneath itself. If the core
%wants to access the other cache banks, the request is sent to the
%router of the bank beneath the core, and then, the request is sent
%to the destination bank through the mesh NoC in the cache layer. We
%do not simulate the congestion, and just forward the request with
%the shortest hop numbers~(Manhattan algorithm). The latency of one
%hop is 5 cycles including the latency in the router. For the 2D
%case, we assume the cache banks is placed next to its core, and the
%connection is similar to the case in 3D baseline except that the
%latency of each hop is 7 latency because the 2D placement increase
%the wire length.
%
%For the hybrid 3D case, the difference from the 3D baseline is that
%there is another coarse mesh NoC beside the fine mesh. There are 9
%routers in the coarse and connected to each other as a mesh. In
%addition, one routers in the coarse mesh is connected to four
%routers in the fine mesh. With this extra coarse mesh, there is
%alternative way for the core to access the cache bank. It first
%connects to the coarse mesh through its own router~(in the fine
%layer), then traverse to the router of the coarse mesh, which is
%connected to router~(in fine layer) of the destination cache bank.
%We also assume that the latency of any hop is still 5 cycles. Note
%it also takes one hop to traverse from the fine mesh to the coarse
%mesh. During the simulation, we assume that the core always picks up
%the way with less number of hops.
%
%The fine and coarse meshes are connected exactly the same as those
%of 36 cores. Since we only have 9 cores, each core is connected to
%one router in the fine mesh. The routing methods are the same for 9
%cores. The request is always send to the router in the fine mesh,
%and forwarded through fine/coarse mesh according to different
%configurations.


%\subsection{Workloads}

\textbf{Workloads:} We use a set of workloads from
%SpecInt2006~\cite{spec06} %,
SpecOMP2001~\cite{omp2001} and PARSEC~\cite{parsec}.  %Four PARSEC workloads covering the range
%of memory footprints of the whole PARSEC suite are selected. For all
%workloads, we use either sampled reference or native input sets to
%represent a real-world execution scenario.
%In order to reasonably evaluate large cache designs, we construct
%each simulation in three phases with decreasing simulation speed:
%(1) we fast forward to a meaningful application phase, which may
%take 10s - 100s billion of instructions; (2) we warm up the caches
%by 10s billion of instructions; and (3) we simulate the system
%cycle-by-cycle for a few billion of instructions and collect
%simulation results.  %Both performance and power statistics are
%collected from cycle mode execution.  Our cache power model adds the
%static and dynamic power of the caches used by  a workload in the
%simulation. (modify based on our simulation)
%The cores are all in-order. For the shared L2 cache, the requests
%from different cores are also processed in-order.
For each benchmark, we fast forward to the program phase of interest and warm up the caches; 3 billion cycles are then simulated
in detailed mode. The aggregate instruction throughput of all
cores is used as the performance metric.% In Simics, it takes one
%cycle for the core to process one instruction, and the processor is
%stalled when accessing the
%cache. %The latency of accessing the cache is \\
%\begin{equation}10\ cycles\ +\ 5 * (hop\ numbers)\end{equation}
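This throughput metric can be formalized as follows (our notation;
the symbols $I_i$, $N$, and $C$ are illustrative and not from the
original text):
\begin{equation}
\mathrm{Throughput} = \frac{\sum_{i=1}^{N} I_i}{C},
\end{equation}
where $I_i$ is the number of instructions committed by core $i$, $N$
is the core count (36 or 9), and $C$ is the number of cycles
simulated in detailed mode (3 billion in our experiments).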



%fixed total area(640mm2 in 2D), different number of cores, cache
%size, different interconnect topology, number of functional cores.
%Can refer to Rakesh Kumar's ISCA2005 paper, which shows in one case
%that the interconnect has 13\% area overhead in 400mm2 16 cores.
%
%benchmarks: multi-programmed.
%
%case comparison: 2D, 2-layer 3D with/without reconfigurability,
%3-layer/4-layer, interconnect is separate layer or integrate with
%core/cache layer.
%
%interconnect latency can be added to cache latency for the
%evaluation.
