
\subsection{Modeling thermal variation}

To determine the thermal distribution across a 3D stacked multicore, we use the \emph{Hotspot-5.02} tool~\cite{hotspot}.
As shown in~\cite{ieeemicro-llano}, the power budget of a multicore is determined by its thermal profile, which in turn depends on the temperature (and power) distribution across adjacent cores; the model must therefore account for heat dissipation across core boundaries.
To capture this phenomenon, we simulated a system of three cores arranged side by side, each running a similar workload. The central core naturally experiences the highest temperatures, since the surrounding cores, themselves operating at high temperature, offer limited avenues for heat dissipation.
Secondary effects beyond immediately adjacent core boundaries are negligible and are hence ignored.
Figure~\ref{fig:hotspot-profile} shows the variation in thermal behavior of three such 4-issue CMOS cores running at 2~GHz. The central core in each layer exhibits a larger peak-temperature area owing to its higher power density compared to the cores at the boundary.
This model can be replicated across several cores on the same layer.
We then extend the model to 3D: a four-layer multicore, with each layer comprising a multitude of such cores and a thin (20~$\mu$m) thermally insulating material between adjacent core layers.
The additional temperature increase due to the transition to 3D is modeled using Hotspot3D~\cite{hotspot3d}.
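The lateral coupling that leaves the central core hottest can be illustrated with a toy steady-state thermal resistance network (a sketch only, not the actual HotSpot grid model); all resistance, power, and ambient values below are illustrative assumptions:

```python
import numpy as np

# Toy steady-state thermal network for three side-by-side cores.
# Each core dissipates vertically to the heat sink (R_vert); adjacent
# cores couple laterally (R_lat); the two boundary cores additionally
# spread heat into the cooler chip periphery (R_edge).
# All values are illustrative, not calibrated HotSpot parameters.
R_lat, R_vert, R_edge, T_amb = 4.0, 1.0, 5.0, 45.0
P = np.array([20.0, 20.0, 20.0])      # identical per-core power (W)

g_l, g_v, g_e = 1.0 / R_lat, 1.0 / R_vert, 1.0 / R_edge
# Conductance matrix G, with G @ T = P + (conductance to ambient) * T_amb
G = np.array([
    [g_v + g_l + g_e, -g_l,            0.0],
    [-g_l,             g_v + 2 * g_l, -g_l],
    [0.0,             -g_l,            g_v + g_l + g_e],
])
rhs = P + g_v * T_amb + np.array([g_e, 0.0, g_e]) * T_amb
T = np.linalg.solve(G, rhs)
print(T)  # middle core is hottest; boundary cores are symmetric
```

Even with identical per-core power, the boundary cores run cooler because they have an extra escape path for heat, which is the qualitative effect HotSpot resolves at much finer granularity.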

\begin{figure}[ht!]
 % \centering
    \epsfig{file=figs/hotspot_profile.eps, angle=0, width=1.01\linewidth, height=0.6\linewidth, clip=}
    \caption{\label{fig:hotspot-profile} Variation in thermal hotspots on a layer of a system of 3D stacked cores with shared L3 cache.}
\end{figure}



\subsection{Scheduling diverse workloads on a stacked CMOS-TFET multicore}

We now motivate the use of a heterogeneous 3D stacked multicore for efficiently executing a diverse set of applications.
This configuration comprises a single (top) layer of CMOS cores, with the remaining layers consisting of TFET cores.
We use a selection of multiprogrammed workloads created from the \emph{Parsec}, \emph{Splash2} and \emph{SPEC CPU2006} suites for this purpose.
To carry out a comprehensive study of the different workload characteristics encountered by the system, we statically profile each benchmark and determine its thread-level parallelism and memory utilization.
We then partition them into various subgroups depending on these characteristics.
These benchmarks are characterized as follows:
\begin{enumerate}
\item Multithreaded - High scalability
\item Multithreaded - Limited scalability
\item Single threaded (no scalability) with high memory utilization
\item Single threaded (no scalability) with low memory utilization
\end{enumerate}
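The four-way partitioning above can be sketched as a simple rule on the profiled data; the speedup and MPKI thresholds below are illustrative assumptions, not values used in our profiling:

```python
# Sketch of the four-way benchmark classification.
# Thresholds (6x speedup, 2x speedup, 5 MPKI) are illustrative
# assumptions; the paper profiles real scalability and MPKI figures.
def classify(speedup_8t, mpki):
    """speedup_8t: measured speedup on 8 threads; mpki: misses/kilo-instr."""
    if speedup_8t >= 6.0:
        return "multithreaded, high scalability"
    if speedup_8t >= 2.0:
        return "multithreaded, limited scalability"
    if mpki >= 5.0:
        return "single-threaded, high memory utilization"
    return "single-threaded, low memory utilization"
```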

Figure~\ref{fig:workloads-mpki} shows the benchmarks that we profiled and used for obtaining representative workloads.

\begin{figure}[ht!]
 % \centering
    \epsfig{file=figs/workloads_mpki.eps, angle=0, width=1.06\linewidth, clip=}
    \caption{\label{fig:workloads-mpki} Characterization of \emph{SPEC CPU2006}, \emph{Parsec} and \emph{Splash2} workloads based on scalability and memory utilization }
\end{figure}
By randomly combining pairs of workloads from these categories, we obtain several distinct classes of multiprogrammed workloads.
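This pairing step can be sketched as follows; the category assignments shown are placeholders for a few of the profiled benchmarks, not the full profiling results:

```python
import random

# Illustrative pairing of profiled benchmarks into multiprogrammed mixes.
# The category membership below is a placeholder for the profiled data.
categories = {
    "high_scalability": ["barnes", "canneal"],
    "limited_scalability": ["ocean.nc", "scluster"],
    "st_high_mem": ["mcf", "lbm"],
    "st_low_mem": ["gcc", "sphinx"],
}

def random_mix(rng=random):
    """Draw one workload from each of two distinct categories."""
    c1, c2 = rng.sample(list(categories), 2)
    return rng.choice(categories[c1]), rng.choice(categories[c2])
```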
The thermal constraint on the multicore configuration binds primarily when a majority of the cores are active.
The shared L3 cache, on the other hand, has a much smaller activity factor, consumes far less power than the cores, and is consequently much cooler.
This enables us to keep the on-chip caches operating while a majority of the cores are turned off, as during the execution of single-threaded applications.
Thus, depending on the workload characteristics, we can operate the system either as a fully active 3D stacked multicore or as a single (or a few) active layers of cores backed by a large stacked L3 cache.




%\subsection{Selective assignment of cores and memory to applications on 3D heterogeneous multicore}

Figure~\ref{fig:blockdig-3d} illustrates the heterogeneous configuration that we propose and the possible states it can operate under for different workloads.
The assignment of cores is as shown in the figure. 
Depending on the memory utilization determined previously, the cache is partitioned according to the following heuristic, which we term the \emph{Heterogeneity Aware Scheduler (HAS)}.

\begin{itemize}
\item The L3 cache slice local to each core is initially allocated to the application running on that core.
\item The remaining L3 cache slices, local to unused cores, are preferentially allocated to applications depending on whether they are sensitive to cache size, as determined by their \emph{MPKI} (misses per kilo-instruction).
\item If both applications have high cache utilization, then the un-allocated L3 cache is partitioned equally between applications.
\item If both applications have poor cache utilization, then the un-allocated L3 cache is left unused, in order to preserve locality and reduce access latency.
\item If the applications have differing L3 cache utilization characteristics, the application with higher utilization is allocated the unused cache.
\end{itemize}
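The heuristic above can be sketched as follows for a two-application mix; the slice-count interface and the boolean sensitivity flags are assumptions for illustration:

```python
# Sketch of the HAS cache-partitioning heuristic for two applications.
# free_slices: L3 slices local to unused cores (beyond each app's own
# local slice); appN_sensitive: binary cache-size sensitivity from MPKI.
def partition_l3(free_slices, app1_sensitive, app2_sensitive):
    """Returns (extra slices for app1, extra for app2, slices left unused)."""
    if app1_sensitive and app2_sensitive:
        half = free_slices // 2
        return half, free_slices - half, 0      # split equally
    if not app1_sensitive and not app2_sensitive:
        return 0, 0, free_slices                # preserve locality/latency
    if app1_sensitive:
        return free_slices, 0, 0                # all to the sensitive app
    return 0, free_slices, 0
```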

We carry out only a binary classification of cache utilization, as either dependent on or independent of cache size, and avoid finer-grained comparisons of relative utilization.
This is because the true working set size of an application can be highly data dependent and the response of the application to increasing or decreasing the cache size may not be deterministic.

\begin{figure}[ht!]
  %\centering
    \epsfig{file=figs/blockdig_3d.eps, angle=0, width=1.01\linewidth,height=0.6\linewidth, clip=}
    \caption{\label{fig:blockdig-3d} Different operating states of the heterogeneous multicore: a) 2 highly scalable parallel applications scheduled on the entire multicore.
	b) 2 completely sequential applications scheduled exclusively on CMOS cores.  
	c) A sequential application running alongside a weakly scaling application. The former is scheduled on a single CMOS core, while the latter is scheduled on either the remaining CMOS cores or the TFET cores, depending on its optimal configuration.
	d) A sequential application running alongside a highly parallel application. The former is scheduled on a CMOS core, while the latter is scheduled on the entire set of TFET cores.}
\end{figure}

Thus, we propose an algorithm that utilizes the on-chip resources across a wide variety of applications to maximize performance under the thermal constraint.
Each application is profiled \emph{a priori} to determine its scalability and memory utilization.
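The resulting state selection can be sketched as a lookup keyed to the operating states of the figure; the string encoding of the profiled scalability labels is hypothetical:

```python
# Sketch mapping a two-application mix to an operating state (a)-(d).
# Scalability labels ("high" / "limited" / "none") come from the
# a priori profiling; this encoding is an illustrative assumption.
def operating_state(scal1, scal2):
    pair = tuple(sorted((scal1, scal2)))   # order-independent lookup
    states = {
        ("high", "high"):    "a",  # both on the entire multicore
        ("none", "none"):    "b",  # both exclusively on CMOS cores
        ("limited", "none"): "c",  # CMOS core + remaining CMOS or TFET cores
        ("high", "none"):    "d",  # CMOS core + all TFET layers
    }
    return states.get(pair)  # None for mixes not shown in the figure
```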

We used a combination of workloads from the \emph{Splash2}, \emph{Parsec}~\cite{parsec} and \emph{SPEC CPU2006} suites.
For our heterogeneous scheduling experiments, we used random combinations of workloads with different scaling and memory utilization characteristics, as described in Section~\ref{sec:technique}.
Table~\ref{tab:workloads} shows the workload mixes that we evaluated.

\begin{table}[ht!]\footnotesize
\centering
\begin{center}
\begin{tabular}{|c|c|c|} \hline
Workload-mix & W1 characteristic & W2 characteristic \\
 & (Scaling, MPKI) & (Scaling, MPKI) \\ \hline
\emph{mcf-gobmk} & No, high & No, high \\ \hline
\emph{lbm-scluster} & No, high & Weakly, high \\ \hline
\emph{mcf-canneal} & No, high & Strongly, high \\ \hline
\emph{gcc-sphinx} & No, low & No, high \\ \hline
\emph{barnes-fanim} & Strongly, low & Strongly, low \\ \hline
\emph{ocean.nc-raytrace} & Weakly, high & Strongly, low \\ \hline
\emph{ocean.c-scluster} & Weakly, high & Weakly, high \\ \hline
\emph{canneal-ocean.ncont} & Strongly, high & Weakly, high \\ \hline
\end{tabular}
\caption{Workload mixes used in the evaluation.}
\label{tab:workloads}
\end{center}
\end{table}
