Modern designs face increasing pressure from power and thermal
constraints. Extending existing CMOS approaches with energy-efficient
low-voltage device technologies such as TFETs offers a new means of
exploiting parallelism across threads, within threads, and at the data
level.  This section examines techniques for such a co-design of
device and architecture, allied with application-mapping techniques
that consume parallel resources appropriate to the degree of
parallelism discovered and exploit the continued serial benefits of
CMOS where and when they outweigh the benefits of parallel execution.

\subsection{Exploiting thread-level parallelism with 3D-stacked CMPs}
When considering the space of core counts and core frequencies for a
multicore processor, every design is limited by the total power
budget, which restricts the attainable configurations.  This
problem can be mitigated to an extent by various
approaches~\cite{taylor-dac2012} for exploiting \emph{Dark Silicon},
i.e., by spatially or temporally reallocating power budgets so that
either subsets of (possibly specialized) cores operate at peak
frequency, or all cores operate at peak frequency for a fraction of the
time, at the expense of darkening or dimming the remaining cores or
intervals.

In addition to power, there are two other key considerations for
understanding which processor configurations are practical. Namely,
yield constraints may restrict the manufacturability of processors
with high core counts and \emph{thermal limitations} due to power
density may come into play even for processors staying within their
aggregate power budget. TFET cores provide a more energy-efficient
alternative to conventional CMOS cores, especially at near-threshold
and sub-threshold voltages -- at sufficiently low voltages, the steep
slope of TFETs makes them inherently more efficient transistors.
Substituting TFET cores for
CMOS cores lessens the thermal consequences of 3D
stacking. Consequently, stacked TFET cores extend the range of viable
designs in the core count/frequency space.

%Add building collapse fig, HAS fig and results
Figures~\ref{fig:building-collapse-result}a) and b) show the extent of
frequency and core scaling for two applications, \emph{barnes}, which
scales well, and \emph{ocean.cont}, which scales poorly.  The regions
shaded black correspond to the points at which the scaling model
``collapses'', i.e., thermal and yield considerations restrict the
design space.  While both applications are affected by the frequency
limitation, only \emph{barnes} is adversely affected by the constraint
on the number of cores.  The two main roadblocks encountered in this
effort are the decrease in yield due to bonding and TSV losses, and
the steady increase in power density as layers are added, leading to
large temperature increases among the internal layers.
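The interaction of these constraints can be summarized in a toy screening model; the parameter values and power formula below are illustrative placeholders, not the calibrated values used in our simulations:

```python
import math

def viable(cores, freq_ghz, *, cores_per_layer=16, max_layers=8,
           power_budget_w=100.0, max_density_w_cm2=80.0,
           layer_area_cm2=1.0):
    """Return True if (cores, freq) survives the power, stacking,
    and thermal-density constraints of the sketch."""
    layers = math.ceil(cores / cores_per_layer)
    if layers > max_layers:                 # stacking/yield limit
        return False
    core_power_w = 0.5 * freq_ghz ** 2      # toy dynamic-power model
    total_power = cores * core_power_w
    if total_power > power_budget_w:        # aggregate power budget
        return False
    # the whole stack dissipates through one footprint, so power
    # density (and hence temperature) grows with layer count
    return total_power / layer_area_cm2 <= max_density_w_cm2

print(viable(64, 1.0), viable(64, 2.0))   # -> True False
```

Sweeping such a predicate over the (core count, frequency) grid yields exactly the kind of viable/collapsed partition of the design space shown in the figure.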

\begin{figure}[ht!]
  \centering
    \epsfig{file=figs/building_collapse_result.eps, angle=0, width=1\linewidth,height=0.85\linewidth, clip=}
    \caption{\label{fig:building-collapse-result}  a) and b) Delineation of the design space attainable by CMOS (red), TFET (blue), both (green), and neither (black) cores at peak performance, for a scalable (\emph{barnes}) and a non-scalable (\emph{ocean.ncont}) application, respectively.}
\vspace{-0.1in}
\end{figure}


Using a combination of power-efficient TFETs and high performance CMOS
expands the design space over either in isolation. TFET-based cores
mitigate the thermal constraints sufficiently to stack more deeply, and
reach a different portion of the yield curve. Processor yield is
adversely affected both by increasing the chip area and by increasing
the layer count.  To address yield effects at the higher layer counts
thermally achievable with TFETs, we employ \emph{Core
  Sparing}~\cite{emma-3d}, in which a multiprocessor employs redundant
cores in order to boost the overall processor yield. Such techniques
have been used in systems such as the IBM POWER series to
significantly reduce the time-to-market~\cite{emma-3d}.  By addressing
both yield and thermal concerns, the range of viable TFET designs in
the core count/frequency space shifts compared to those viable for
CMOS-based multicores. Moreover, for sufficiently parallel workloads,
the performance-optimal design often lies in the newly accessible TFET
portion of the design space; and at count/frequency points achievable
by both CMOS and TFET designs with comparable performance, the TFET
designs are often much more energy-efficient.
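The yield side of this argument can be sketched with a simple binomial model of core sparing; the per-core yield and per-layer bonding/TSV loss factor below are hypothetical numbers, not measured data:

```python
# A chip counts as good if at least k of its n physical cores work;
# the remaining cores are spares.  Per-core yield degrades with each
# additional bonded layer (TSV/bonding losses).
from math import comb

def chip_yield(n_cores, k_needed, core_yield=0.95, layers=1,
               bond_yield=0.99):
    per_core = core_yield * bond_yield ** (layers - 1)
    return sum(comb(n_cores, i)
               * per_core ** i * (1 - per_core) ** (n_cores - i)
               for i in range(k_needed, n_cores + 1))

# A 60-of-64-core part is far easier to yield than a perfect one.
print(round(chip_yield(64, 64, layers=8), 4),
      round(chip_yield(64, 60, layers=8), 4))
```

Provisioning a handful of spare cores moves a deep stack onto a much friendlier portion of the yield curve, which is what makes the thermally feasible TFET layer counts manufacturable.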

As described above, TFET-based multicores enable us to trade off
limitations in peak per-core performance for increased parallelism by
stacking multiple core layers.  However, only applications with
high thread-level parallelism benefit.  In reality, multicore
workloads may involve several diverse applications running
concurrently. These applications vary in characteristics such as
thread scalability, memory footprint and single-threaded performance.
To cater to realistic workloads, we propose a heterogeneous
3D-stacked multicore comprising a single CMOS core layer and 7
TFET core layers. Each core is modeled as a 4-issue out-of-order
x86 core with private L1 and L2 caches and an L3 cache shared
across multiple layers.


\begin{figure}[htb!]
  \centering
    \epsfig{file=figs/blockdig_3d.eps, angle=0, width=0.95\linewidth, clip=}
    \caption{\label{fig:blockdig-3d} Different operating states of the heterogeneous multicore for different types of workloads
        %: a) 2 parallel applications scheduled on the entire multicore.
        %b) 2 sequential applications scheduled exclusively on CMOS cores.
        %c) A sequential application, scheduled on a CMOS core, running alongside a weakly scaling application, scheduled on a few CMOS/TFET cores.
        %d) A sequential application, scheduled on a CMOS core running alongside a parallel application, scheduled on the entire set of TFET cores. }
        }
    \vspace{-0.1in}
\end{figure}


Figure~\ref{fig:blockdig-3d} illustrates our proposed heterogeneous
configuration and the states in which it can operate for different
workloads.  The assignment of cores is as shown in the figure. The
shared cache is partitioned according to the relative memory
utilization of the constituent applications.  We term this technique
\emph{Heterogeneity Aware Scheduling (HAS)}.
Figure~\ref{fig:scheduling-results} shows a 19\% improvement of the
HAS-enabled heterogeneous-device system over a single-device system.
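A minimal sketch of the HAS policy follows; the scalability threshold and cache size are hypothetical placeholders, whereas the real scheduler uses measured scaling and memory-utilization profiles:

```python
# Poorly scaling applications go to the fast CMOS layer, scalable
# ones to the TFET layers, and the shared L3 is split in proportion
# to each application's memory utilization.

def has_schedule(apps, l3_mb=32):
    """apps: list of (name, parallel_speedup, mem_util) tuples."""
    total_mem = sum(m for _, _, m in apps) or 1.0
    plan = []
    for name, speedup, mem in apps:
        pool = "TFET" if speedup > 2.0 else "CMOS"   # scalability test
        plan.append((name, pool, round(l3_mb * mem / total_mem, 1)))
    return plan

print(has_schedule([("barnes", 12.0, 0.3), ("gcc", 1.1, 0.7)]))
```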

\begin{figure}[htb!]
  \centering
    \epsfig{file=figs/scheduling_results.eps, angle=0, width=0.95\linewidth, clip=}
    \caption{\label{fig:scheduling-results} Performance comparison of homogeneous CMOS and TFET multicores with a heterogeneous 3D configuration using HAS, consisting of 1 CMOS layer and 7 TFET layers.}
    \vspace{-0.2in}
\end{figure}








\subsection{Heterogeneous ILP optimization for heterogeneous devices} 
% Tradeoff 330-350K figure
Architectures for mobile and embedded applications are designed under
tight constraints, particularly in terms of energy and power density.
These constraints directly affect crucial physical attributes of the
device, such as operating temperature and battery life, as well as
overall device reliability and lifetime.  Temperatures in the range of
330--350K are typical limits under which these devices operate.
In such a context, ramping up the supply voltage and frequency to
optimize performance may not always be the best option; it may instead
be more effective to exploit the inherent instruction-level
parallelism of typical mobile applications by designing wider-issue
cores at lower frequencies.
%(3D plot)

Even within the 330--350K thermal constraints, there are many possible
microarchitectural configurations to consider, varying core complexity
in terms of the number of instructions fetched per cycle, the issue
width, the sizes of the register file and issue queue, and the number
of execution units.  Depending on the microarchitecture configuration,
the relative contributions of dynamic and leakage energy with respect
to performance vary significantly.  Further, ILP variation among and
within applications also impacts the overall efficiency of the
various microarchitecture selections.

To explore this dimension of parallelism, we assume a
big.LITTLE-like configuration with device-level, as well as
microarchitectural, heterogeneity.  A wide-issue TFET core is
useful for running applications with high ILP, maximizing throughput
even at a reduced frequency, while the CMOS core operates at a higher
frequency but with a narrower issue width (1--2). For each of the three
thermal limits considered, we pair the best performing TFET-based
processor under that thermal limit with the best performing CMOS
processor within that limit.
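The pairing procedure amounts to a filter-and-argmax over a swept design space; the configurations and temperatures below are invented for illustration, not results from our experiments:

```python
# For one thermal limit, keep the best-performing configuration of
# each device type whose steady-state temperature stays under the limit.

def best_pair(configs, temp_limit_k):
    """configs: (device, issue_width, freq_ghz, perf, temp_k) tuples."""
    def best(device):
        ok = [c for c in configs
              if c[0] == device and c[4] <= temp_limit_k]
        return max(ok, key=lambda c: c[3], default=None)
    return best("CMOS"), best("TFET")

space = [("CMOS", 2, 2.5, 1.00, 345), ("CMOS", 2, 3.0, 1.10, 360),
         ("TFET", 4, 1.0, 0.90, 332), ("TFET", 6, 0.8, 1.05, 338)]
print(best_pair(space, 350))
```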

Different applications display varying degrees of instruction-level
parallelism. Profiling can offer insight into the optimal architecture
configuration for a given workload.  On the other hand, the runtime
characteristics of applications can change
significantly across different phases. In such cases, a more
fine-grained dynamic application mapping scheme may be required. In
this work, we describe static (profiling-based) and dynamic
(instruction-slack) based application mapping schemes that schedule
applications onto device-heterogeneous architectures under different
thermal constraints.

% \begin{figure}[ht!]
% \begin{minipage}[c]{1\linewidth}
% %\centering
% \epsfig{file=figs/mibench_static_perf.eps, angle=0, width=0.99\linewidth, clip=}
% \caption{\footnotesize\label{fig:mibench-static-perf} Speedup on heterogeneous multicore with static mapping over best homogeneous CMOS configuration for thermal limits of 330K, 340K, 350K}
% \end{minipage}
% \vspace{0.3in}
% \begin{minipage}[c]{1\linewidth}
% %\centering
% \epsfig{file=figs/mibench_static_energy.eps, angle=0, width=0.99\linewidth, clip=}
% \caption{\footnotesize\label{fig:mibench-static-energy} Normalized energy in heterogeneous multicore with static mapping over best homogeneous CMOS configuration for thermal limits of 330K, 340K, 350K }
% \vspace{-0.1in}
% \end{minipage}
% \end{figure}

\begin{figure}[ht!]
\begin{minipage}[c]{1\linewidth}
%\centering
\epsfig{file=figs/mibench_dynamic_perf.eps, angle=0, width=0.99\linewidth,height=0.45\linewidth, clip=}
\caption{\footnotesize\label{fig:mibench-dynamic-perf} Speedup of heterogeneous multicore with \emph{DynMap} over the best homogeneous CMOS configuration for thermal limits of 330K, 340K, 350K}
\end{minipage}
\vspace{0.3in}
\begin{minipage}[c]{1\linewidth}
%\centering
\epsfig{file=figs/mibench_dynamic_energy.eps, angle=0, width=0.99\linewidth,height=0.45\linewidth, clip=}
\caption{\footnotesize\label{fig:mibench-dynamic-energy} Normalized energy of heterogeneous multicore with \emph{DynMap} relative to the best homogeneous CMOS configuration for thermal limits of 330K, 340K, 350K}
\vspace{-0.3in}
\end{minipage}
\end{figure}
\begin{itemize}

\item \textbf{Static mapping of applications}
In these evaluations, we determine the best possible operating point
in the frequency/issue-width design space for each application under
different thermal limits.  We then select the configuration preferred
by a majority of the applications for each temperature domain; each
application is run at its optimal frequency for that configuration.
The static profiling results also determine whether an application has
a higher affinity for a CMOS or a TFET core.
\item \textbf{Instruction slack-based dynamic mapping}
An instruction can be delayed by at most the number of cycles remaining before its earliest dependent instruction executes in its designated cycle.
This quantity, termed \emph{instruction slack}, is inherently a measure of the ILP of the given workload; in the absence of any dependent instructions in the ROB, the slack is bounded by the ROB size. Figure~\ref{fig:slack-block-dig} illustrates our technique in greater detail.
\end{itemize}
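The slack metric driving the dynamic scheme can be sketched offline as follows, assuming a hypothetical ROB size of 128 entries; real hardware approximates this with ROB bookkeeping rather than exact schedule times:

```python
ROB_SIZE = 128  # assumed reorder-buffer capacity

def slack(ready_cycle, consumer_cycles):
    """Cycles an instruction's result can be delayed before its
    earliest dependent consumer would stall; capped at ROB_SIZE
    when there are no consumers in the window."""
    if not consumer_cycles:
        return ROB_SIZE
    return max(0, min(consumer_cycles) - ready_cycle)

# Result ready at cycle 10, first consumed at cycle 25: 15 cycles of
# slack, so a wider, slower core loses nothing on this instruction.
print(slack(10, [25, 40]), slack(10, []))   # -> 15 128
```

Phases with large average slack tolerate the lower-frequency, wide-issue TFET core; phases with little slack are better served by the faster CMOS core.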

%Figures~\ref{fig:mibench-static-perf} and~\ref{fig:mibench-static-energy} show the speedup and energy of a static scheduling scheme, where the best core configuration is selected for each application.
The static scheduling technique yields peak improvements of 43\% at 330K with energy savings of 27\%; its benefits decrease as the peak temperature limit increases.
Figures~\ref{fig:mibench-dynamic-perf} and~\ref{fig:mibench-dynamic-energy} show the speedup and energy of the dynamic scheduling scheme \emph{DynMap} described above. \emph{DynMap} obtains improvements of 4\%, 22\%, and 14\% over the static scheme at 330K, 340K, and 350K, respectively.



\begin{figure}[ht!]
\centering
\epsfig{file=figs/slack_block_dig.eps, angle=0, width=0.8\linewidth, height=0.3\linewidth, clip=}
\caption{\label{fig:slack-block-dig} Runtime Slack Estimation}
\vspace{-0.2in}
\end{figure}




\subsection{Examination of specialized accelerators with TFETs}
In addition to general-purpose computing, the use of customized
accelerators has increased drastically.  In such a
scenario, the design of energy-efficient accelerators can be
facilitated by the use of steep-slope transistors.  Although the
slower device speed may increase the critical path time in comparison
to CMOS, the reduction in power due to the TFET-based accelerators
enables us to exploit another dimension of application parallelism:
\emph{Data-level parallelism (DLP)}.  The superior power efficiency of
TFETs enables us to operate larger or more numerous accelerators
within the same power budget, even though the clock frequency may be
lower than that of a CMOS accelerator.

In this work, we examined two accelerators: a 32-point FFT
computation engine and a convolution engine. The CMOS designs were
synthesized using a 32~nm IBM SOI library and then scaled to 22~nm
FinFET technology using TCAD simulations. An in-house standard cell
library was used
to synthesize the 22~nm TFET designs at 0.3V~\cite{frank-iedm11}.
The TFET-based accelerator was compared with CMOS designs synthesized at different operating
points, as shown in Figure~\ref{fig:fft-accelerator}. The points of
comparison include:
\begin{enumerate}
\item An iso-voltage point, where both the CMOS and TFET accelerators
  operate at a $V_{dd}$ of 0.3V,
\item An iso-delay/performance point, where both the CMOS and TFET
  accelerators have the same critical path delay, and
\item A peak comparison point, where the TFET design is compared with
  a CMOS design operating at 0.85V.
\end{enumerate}


\begin{figure}[ht!]
\centering
\epsfig{file=figs/fft_accelerator.eps, angle=0, width=0.95\linewidth,height=0.45\linewidth, clip=}
\caption{\label{fig:fft-accelerator} Comparison of delay, power, energy, and EDP across FFT implementations with TFETs, iso-voltage CMOS, iso-performance CMOS, and peak performance CMOS}
\vspace{-0.1in}
\end{figure}

The TFET design is clearly preferred for the iso-voltage case, since
CMOS at 0.3V is forced to operate in a sub-optimal region (around
$V_T$).  The iso-performance point at 0.54V for CMOS still falls to
the left of the crossover point where TFETs are more energy efficient,
hence the overall energy and EDP of the TFET design are superior.  The
peak-performing CMOS design is 2$\times$ faster than the corresponding
TFET design; however, the energy consumed by the TFET accelerator is
over 20$\times$ lower.  In such applications, the gap in performance
can be closed by deploying multiple TFET accelerators, at the cost of
a small area penalty.
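Using only the ratios quoted above (peak CMOS roughly 2$\times$ faster, TFET roughly 20$\times$ lower energy), a back-of-envelope comparison makes the trade-off concrete; all values are normalized placeholders, not measured numbers:

```python
def edp(delay, energy):
    """Energy-delay product; lower is better."""
    return delay * energy

tfet = {"delay": 2.0, "energy": 1.0}        # TFET accelerator (normalized)
cmos_peak = {"delay": 1.0, "energy": 20.0}  # peak CMOS at 0.85V

# Two TFET units restore CMOS-level throughput when DLP is available,
# while energy per operation stays ~20x lower.
units_to_match = round(tfet["delay"] / cmos_peak["delay"])
print(edp(**tfet), edp(**cmos_peak), units_to_match)   # -> 2.0 20.0 2
```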

Another axis in this performance/energy/area design space
across CMOS and TFET accelerator designs is the input data size.
Figure~\ref{fig:convolution-accelerator} shows the timing, power
and area for peak-performing CMOS and TFET accelerators for input
kernel sizes of $7\times7$ and $12\times12$.  The $12\times12$ kernel
is 20\% slower than the $7\times7$ and 2$\times$ larger in area and
power, but computes around 3$\times$ the total data, making the larger
design more energy efficient. The TFET cell is similar in size to the
CMOS cell, so the overall areas are similar, but the power and energy
differ greatly.
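The kernel-size trade-off follows directly from the quoted ratios (20\% slower, 2$\times$ power, 3$\times$ data); a quick check, with all values normalized to the smaller kernel:

```python
def energy_per_output(rel_time, rel_power, rel_data):
    """Relative energy per output datum: time * power / data volume."""
    return rel_time * rel_power / rel_data

k7 = energy_per_output(1.0, 1.0, 1.0)   # 7x7 baseline
k12 = energy_per_output(1.2, 2.0, 3.0)  # 12x12: slower, hungrier, wider
print(k7, round(k12, 2))  # the larger kernel wins on energy per datum
```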

%FFT plot

\begin{figure}[ht!]
\centering
\epsfig{file=figs/convolution_accelerator.eps, angle=0, width=0.95\linewidth,height=0.45\linewidth, clip=}
\caption{\label{fig:convolution-accelerator} Comparison of delay, power, area and energy for a convolution accelerator with different kernel input sizes}
\vspace{-0.2in}
\end{figure}





