%results
In this section, we carry out a design space exploration across different architecture, device and system configurations for a diversity of workloads.
We attempt to find the best possible device-architecture co-design for each of these workloads.
Our studies are carried out by varying the core frequency and the number of cores across multiple stacked layers, under the thermal and yield constraints described in Section~\ref{sec:technique}.
We also demonstrate sensitivity results over a range of temperature budgets and microarchitectural configurations.

\subsection{Determination of optimal operating points in the design space}

Figures~\ref{fig:building-collapse-workloads}a) and b) show the optimal design points attainable for different sets of applications (\emph{parsec} and \emph{splash2} respectively).
For each operating point, we evaluate the harmonic mean of the speedups of all applications in the benchmark suite, relative to a single core operating at peak frequency.
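Concretely, if $s_i$ denotes the speedup of application $i$ at a given operating point over the single-core, peak-frequency baseline, the metric evaluated for a suite of $N$ applications is
\[
S_{\mathrm{HM}} = \frac{N}{\sum_{i=1}^{N} \dfrac{1}{s_i}} .
\]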
In addition to the red and blue bars, which signify the operating points exclusive to CMOS and TFET cores respectively, the green colored bars denote all states that can be attained by both core types.
The diversity in the overall scalability of the two workload suites is evident from the comparison between Figures~\ref{fig:building-collapse-workloads}a) and~\ref{fig:building-collapse-workloads}b).
To determine which core type is preferred in the green region, we compute the CMOS and TFET power for every state in this region and plot the power savings obtained by using one core type over the other, as shown in Figure~\ref{fig:power-workloads}. In this figure, the red and blue regions correspond to the core states where CMOS or TFET, respectively, is the more power-efficient choice.
This plot clearly illustrates that, for optimally performing designs in the TFET-preferred region, the power savings can be significant; as shown in Figure~\ref{fig:technology-scaling3b}, the relative gains with respect to CMOS will increase with subsequent technology generations.
%The \emph{parsec} applications clearly have an affinity for TFET cores since their optimal operating points lie in the blue and pale green spaces, while \emph{splash2} applications are more inclined to run on CMOS cores since their optimal operating points lie in the red and dark green spaces.
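The core-selection step described above can be sketched as follows. This is a minimal illustration of the comparison we perform over the common (green) operating region; the power numbers used here are hypothetical placeholders, not the measured values behind Figure~\ref{fig:power-workloads}.

```python
# Sketch of the per-state core selection over the common (green) region:
# for every (frequency, cores) state attainable by both technologies,
# pick whichever core type draws less power at that state.
# All wattages below are illustrative placeholders.

def preferred_core(common_region, cmos_power, tfet_power):
    """Return, for each state, the preferred technology and the
    power saving (in watts) it provides over the alternative."""
    choice = {}
    for state in common_region:
        saving = cmos_power[state] - tfet_power[state]
        if saving > 0:
            choice[state] = ("TFET", saving)     # TFET is cheaper here
        else:
            choice[state] = ("CMOS", -saving)    # CMOS is cheaper here
    return choice

# Illustrative states: (frequency in GHz, core count)
region = [(1.0, 64), (1.5, 32)]
cmos_w = {(1.0, 64): 95.0, (1.5, 32): 80.0}   # hypothetical watts
tfet_w = {(1.0, 64): 70.0, (1.5, 32): 85.0}   # hypothetical watts
print(preferred_core(region, cmos_w, tfet_w))
```

In this toy instance, the many-core low-frequency state favors TFET while the fewer-core higher-frequency state favors CMOS, mirroring the qualitative split between the blue and red regions of the figure.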


\begin{figure}[ht!]
  \centering
    \epsfig{file=figs/building_collapse_workloads.eps, angle=0, width=0.95\linewidth, clip=}
    \caption{\label{fig:building-collapse-workloads}  a) and b) Mean speedup of the applications in the \emph{splash2} and \emph{parsec} suites respectively. The \emph{splash2} applications, on average, prefer higher frequencies and fewer cores (red CMOS space and dark green common space), because around 29\% of the applications scale relatively poorly. The \emph{parsec} benchmarks, on the other hand, operate most efficiently with a larger number of cores at lower frequencies (blue TFET space and pale green common space), with only around 17\% of the applications preferring high-frequency CMOS cores.}
\end{figure}



\begin{figure}[ht!]
  \centering
    \epsfig{file=figs/power_workloads.eps, angle=0, width=0.9\linewidth, clip=}
    \caption{\label{fig:power-workloads}  a) and b) Mean power savings obtained in the \emph{parsec} and \emph{splash2} suites respectively by running workloads on CMOS (red region) or TFET (blue region) cores. The plot is restricted to the region where both CMOS and TFET cores are capable of operation, in order to determine the more efficient core.}
\end{figure}








%\subsection{Variation of thermally constrained performance for a single workload}

Figure~\ref{fig:speedup-4issue-notext} compares the performance of 4-issue Ivybridge TFET and CMOS processors for a range of \emph{Splash2} and \emph{Parsec} benchmarks, using the best performing configuration in each case.
The optimal configurations for each processor (frequency, number of layers), subject to thermal and yield constraints are indicated for each data point. All speedups are normalized to a single CMOS core running at peak frequency (3~GHz).
The TFET core configurations outperform the best CMOS configuration by an average of around 17\% for the \emph{Splash2} suite and around 20\% for the \emph{Parsec} suite. The overall speedup is around 18\%.
This performance improvement varies with the temperature budget as shown below.

Table~\ref{tab:bestconfig} shows the best performing configuration under thermal constraints in terms of frequency, number of cores and number of stacked layers for both CMOS and TFET processors.

\begin{table}[ht!]\scriptsize
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
%Workload Type & \\
Benchmark &Technology & Frequency(GHz) & Cores & Layers\\ \hline
\multicolumn{5}{|c|}{\textbf{SPLASH}} \\ \hline
\multirow{2}{*}{\emph{barnes}} & CMOS & 1 & 64 & 8  \\ 
& TFET & 1.25 & 64 & 8  \\	\hline
\multirow{2}{*}{\emph{fmm}} & CMOS & 1 & 64 & 8  \\
& TFET & 1.25 & 64 & 8  \\	\hline
\multirow{2}{*}{\emph{ocean.cont}} & CMOS & 1.75 & 32 & 4  \\
& TFET & 1.5 & 32 & 4  \\	\hline
\multirow{2}{*}{\emph{ocean.ncont}} & CMOS & 1.75 & 32 & 4  \\
& TFET & 1.5 & 32 & 4  \\	\hline
\multirow{2}{*}{\emph{radiosity}} & CMOS & 1 & 64 & 8  \\
& TFET & 1.25 & 64 & 8  \\	\hline
\multirow{2}{*}{\emph{water.nsq}} & CMOS & 1.5 & 32 & 4  \\
& TFET & 1.25 & 64 & 8  \\	\hline
\multirow{2}{*}{\emph{water.sp}} & CMOS & 1 & 64 & 8  \\
& TFET & 1.25 & 64 & 8  \\	\hline
\hline
\multicolumn{5}{|c|}{\textbf{PARSEC}} \\ \hline
\multirow{2}{*}{\emph{blackscholes}} & CMOS & 1 & 64 & 8  \\ 
& TFET & 1.25 & 64 & 8  \\	\hline
\multirow{2}{*}{\emph{canneal}} & CMOS & 1 & 64 & 8  \\	
& TFET & 1.25 & 64 & 8  \\	\hline
\multirow{2}{*}{\emph{fluidanimate}} & CMOS & 1 & 64 & 8  \\
& TFET & 1.25 & 64 & 8  \\	\hline
\multirow{2}{*}{\emph{raytrace}} & CMOS & 1 & 64 & 8  \\
& TFET & 1.25 & 64 & 8  \\	\hline
\multirow{2}{*}{\emph{streamcluster}} & CMOS & 1.75 & 32 & 4  \\
& TFET & 1.5 & 32 & 4  \\	\hline
\multirow{2}{*}{\emph{swaptions}} & CMOS & 1 & 64 & 8  \\
& TFET & 1.25 & 64 & 8  \\	\hline


\end{tabular}
\caption {Best performing configurations (frequency, cores, and stacked layers) for CMOS and TFET processors under thermal constraints.}
\label{tab:bestconfig}
\end{center}
\end{table}

\begin{figure}[ht!]
  %\centering
    \epsfig{file=figs/speedup_4issue_notext.eps, angle=0, width=1.02\linewidth, height=0.6\linewidth, clip=}
    \caption{\label{fig:speedup-4issue-notext} Relative performance of 3D stacked CMOS and TFET configurations using 8 stacked layers comprising 64 functioning 4-issue processors. The thermal budget assumed here is 87$^\circ$C.}
\end{figure}

\subsection{Sensitivity to thermal budget}
Figure~\ref{fig:sensitivity-temperature} shows how the performance improvement of TFET varies with the thermal budget, comparing the best possible TFET and CMOS core configurations.
TFET cores are clearly the preferred choice for thermal budgets up to around 360K (87$^\circ$C), while the performance difference is negligible up to around 380K (107$^\circ$C).
At higher thermal budgets, CMOS cores clearly dominate, since the range of microarchitectural configurations they can attain is large enough to offset the superior thermal efficiency of TFET cores.

\begin{figure}[ht!]
  \centering
    \epsfig{file=figs/sensitivity_temperature.eps, angle=0, width=1\linewidth, clip=}
    \caption{\label{fig:sensitivity-temperature} Variation of the performance improvement of TFET cores over CMOS cores for different thermal limits. Evaluations are carried out separately for the \emph{Splash2} and \emph{Parsec} benchmark suites.}
\end{figure}

\subsection{Sensitivity to microarchitecture}
We also carried out experiments that compare CMOS and TFET performance for a range of processor microarchitectures, ranging from a single issue to an 8-issue out-of-order processor, as shown in Figure~\ref{fig:sensitivity-issue-width2}. 
This figure shows the mean speedup of all benchmarks run from the \emph{Splash2} and \emph{Parsec} suites, when compared to a single core baseline running at peak frequency (3~GHz).
The 3D stacked multicore configuration remains the same as in the previous experiments and is subjected to the same thermal limit of 360K.
The \emph{Splash2} suite of benchmarks is not very sensitive to processor complexity, as there is only a minor improvement in speedup with increasing issue width.
On the other hand, in case of the \emph{Parsec} suite, the performance improvement of TFET processors peaks at the 4 issue configuration. This is because the 4-issue TFET processor has sufficient capacity to exploit the inherent ILP of the application. As a result, when combined with 3D stacking, this configuration is able to extract the maximum performance from the application by optimizing both its ILP and TLP.
For lower-issue processors, core frequency plays a more important role, which reduces the advantage of TFET cores.
Wider (6- and 8-issue) processors, on the other hand, are extremely power hungry and provide limited ILP improvement over the 4-issue configuration.
As a result, the higher base temperature attained by these cores severely limits the microarchitectural flexibility of both CMOS and TFET cores, leading to lower speedups.
\begin{figure}[ht!]
  \centering
    \epsfig{file=figs/sensitivity_issue_width2.eps, angle=0, width=0.9\linewidth, clip=}
    \caption{\label{fig:sensitivity-issue-width2} Comparison of the performance speedup of CMOS and TFET cores for different microarchitectural configurations. Evaluations are carried out separately for the \emph{Splash2} and \emph{Parsec} benchmark suites for issue widths of 1 to 8.}
\end{figure}

\subsection{Heterogeneity aware scheduling on a stacked CMOS-TFET multicore}
In addition to the static profiling based results, we also demonstrated the viability of a stacked CMOS-TFET heterogeneous multicore. 
We implemented our \emph{Heterogeneity-Aware Scheduler} by running the workload mixes in Table~\ref{tab:workloads} on the CMOS-TFET multicore, and compared its performance to both a homogeneous CMOS and a homogeneous TFET multicore. The results are shown in Figure~\ref{fig:scheduling-results}.
All results are weighted speedups normalized to the ideal baseline, i.e., the weighted speedup of each application when run individually on the best possible CMOS/TFET configuration.
The heterogeneity-aware scheduler achieves a 17\% improvement over the best homogeneous configuration.
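A heterogeneity-aware assignment of this kind can be sketched as follows. This is a hypothetical skeleton, not the actual HAS policy (whose details are described elsewhere); it merely illustrates profile-guided mapping of a workload mix onto the CMOS and TFET layers, under the assumption that applications that benefit most from high frequency are placed on CMOS cores. The sensitivity scores and the \texttt{assign} helper are invented for illustration.

```python
# Hypothetical sketch of a profile-guided assignment step for a
# heterogeneous CMOS-TFET stacked multicore. The actual HAS policy is
# not reproduced here; sensitivity values below are illustrative.

def assign(apps, n_cmos, n_tfet):
    """apps: list of (name, frequency_sensitivity) pairs, where a higher
    score means the application benefits more from fast CMOS cores.
    Returns the lists of applications mapped to CMOS and TFET cores."""
    ranked = sorted(apps, key=lambda a: a[1], reverse=True)
    cmos = [name for name, _ in ranked[:n_cmos]]
    tfet = [name for name, _ in ranked[n_cmos:n_cmos + n_tfet]]
    return cmos, tfet

# Illustrative workload mix with made-up frequency-sensitivity scores.
mix = [("blackscholes", 0.2), ("streamcluster", 0.9),
       ("canneal", 0.1), ("ocean.cont", 0.8)]
cmos_apps, tfet_apps = assign(mix, n_cmos=2, n_tfet=2)
# The most frequency-sensitive applications land on the CMOS layer;
# the remainder run on the thermally efficient TFET layers.
```

The design intuition is the one established earlier in this section: applications whose optimal operating points lie in the high-frequency (red) region are best served by the CMOS layer, while well-scaling applications exploit the many TFET cores.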


\begin{figure}[ht!]
  \centering
    \epsfig{file=figs/scheduling_results.eps, angle=0, width=1\linewidth, clip=}
    \caption{\label{fig:scheduling-results} Performance comparison of homogeneous CMOS and TFET multicores against a heterogeneous 3D configuration using HAS, consisting of 1 CMOS layer and 7 TFET layers.}
\end{figure}


