\section{Experimental evaluation}\label{exp}
In this section, we evaluate the efficiency of the proposed dead-block elimination scheme in reducing backup redundancy. We first describe the experimental setup, then analyze prediction accuracy, peak-current reduction, and energy efficiency, and finally discuss the overheads of our design.

\subsection{Experiment Setup}
We implement our nvSRAM-based L1 cache design in gem5~\cite{binkert2011gem5}, a popular computer architecture simulator, configured to model an ARM Cortex-A8 processor, which has been used in several SoCs including the Allwinner A1X and the Samsung Exynos 3110. To model a fully nonvolatile memory hierarchy, we assume that the L1 cache is nvSRAM based and that the L2 cache is an asymmetric-access STT-RAM based cache. The detailed parameters are listed in Table~\ref{tab1}.
 % Table generated by Excel2LaTeX from sheet 'Sheet1'
\begin{table}[!htbp]
\caption{Simulation setup}\label{tab1}
  \centering
    \begin{tabular}{c|l}
    \toprule
    Component & \makecell[c]{Configuration} \\
    \hline
    Processor & 1 core, 1GHz, 2-width issue \\
    \hline
    L1 cache, nvSRAM & \tabincell{l}{32kB+32kB(I\&D) 4-way, 64B block, \\R/W latency: 1/1 cycle, \\ 1.714nJ/2kb store energy, \\ 1.06nJ/2kb restore energy~\cite{chiu2012low}} \\
    \hline
    L2 cache, STT-RAM & \tabincell{l}{1MB, 8-way, 64B line, \\R/W latency: 3/18cycle}  \\
    \hline
    ISA & ARMv7 \\
    \bottomrule
    \end{tabular}
\end{table}

All benchmarks are taken from SPEC CPU 2006. For each benchmark, we uniformly sample 50 operation points and compare the recorded prediction results against the ideal results. The training length is set at the ``knee point'' of the prediction curve shown in Figure~\ref{fig5}, which reduces the prediction energy overhead while maintaining prediction accuracy.

\subsection{Prediction Accuracy}

As shown in Figure~\ref{fig7}, using \emph{cache burst predictors} with the \emph{pre-backup scheme}, the prediction accuracy is 85.2\% on average, with a 12.2\% missed-report rate and a 2.6\% false-alarm rate. Compared with conventional \emph{cache burst predictors}, which achieve 96\% prediction accuracy~\cite{liu2008cache}, we trade prediction accuracy and coverage for lower prediction energy by limiting the prediction length and sampling each cache burst only once. This accuracy loss is tolerable, however, because it introduces few false-alarm errors, which are the errors harmful to cache performance.
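The relation between the three reported rates can be sketched as below: a false alarm marks a live block as dead (costing an extra miss later), while a missed report leaves a dead block to be backed up needlessly. The raw counts are illustrative, chosen only so the resulting rates match the averages reported above.

```python
# Sketch: how accuracy, missed-report rate, and false-alarm rate
# partition the prediction outcomes. Counts are illustrative.
def rates(correct, missed_reports, false_alarms):
    total = correct + missed_reports + false_alarms
    return (correct / total, missed_reports / total, false_alarms / total)

acc, miss, fa = rates(correct=852, missed_reports=122, false_alarms=26)
print(f"{acc:.1%} {miss:.1%} {fa:.1%}")  # -> 85.2% 12.2% 2.6%
```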

\begin{figure}[!hpt]
\centering
\includegraphics[height=1.5in, width=3.2in]{accuracy.pdf}
\caption{Prediction accuracy for SPEC 2006 benchmarks}\label{fig7}
\end{figure}


\subsection{Inrush Current}
Inrush current incurred by parallel store operations is proportional to the number of cache blocks being backed up. Figure~\ref{fig8} depicts the inrush current for the SPEC 2006 benchmarks; our baseline is the conventional full-backup scheme~\cite{chiu2012low}. We observe from Figure~\ref{fig8} that PBDE reduces the average inrush current by 56.3\% and the peak inrush current by 32.8\%. In practice, the peak inrush current draws more attention because it can seriously damage control circuits and switch contacts. However, Figure~\ref{fig8} also shows that the peak inrush current for some applications, such as libquantum, remains high. A heuristic solution may limit the maximum number of backups by discarding some \emph{live blocks}; we will investigate this idea in future work.
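Because the inrush current scales linearly with the number of blocks stored in parallel, the current-reduction ratio equals the fraction of backups eliminated. A minimal sketch, assuming a hypothetical per-block current \texttt{I\_CELL} and an illustrative count of 288 eliminated dead blocks out of 512 lines:

```python
# Sketch: inrush current is proportional to the number of blocks
# stored in parallel, so the saving ratio is the ratio of eliminated
# backups. I_CELL and the block counts are illustrative assumptions.
I_CELL = 0.1  # current drawn per block stored in parallel (assumed units)

def inrush(blocks_backed_up):
    return I_CELL * blocks_backed_up

full = inrush(512)        # full backup: all 512 L1 lines stored
pbde = inrush(512 - 288)  # PBDE skips the predicted dead blocks
saving = 1 - pbde / full
print(saving)             # fraction of inrush current saved
```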
\begin{figure}[!hpt]
\centering
\includegraphics[height=1.5in, width=3.2in]{current.pdf}
\caption{Inrush current of PBDE compared with the baseline~\cite{chiu2012low}}\label{fig8}
\end{figure}

\subsection{Energy Efficiency}
\begin{figure}[!hpt]
\centering
\includegraphics[height=1.5in, width=3.2in]{power.pdf}
\caption{Energy consumption of PBDE compared with the baseline~\cite{chiu2012low}}\label{fig9}
\end{figure}
Figure~\ref{fig9} shows the energy consumption of the whole backup/restore procedure. The right bars represent the normalized energy of PBDE, including the pre-backup energy for prediction training and the actual backup/restore energy. The energy consumed by the pre-backup process accounts for only a small portion of the total because we choose a limited training length. Backup/restore energy declines markedly compared with the baseline because many needless backups are eliminated. On average, PBDE reduces energy consumption by 50.6\% compared with the conventional full-backup scheme.
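The energy accounting can be sketched from the per-2kb store/restore energies in Table~\ref{tab1}. The live-block count (253) and the flat pre-backup training cost (10~nJ) below are illustrative assumptions, not measured values, so the resulting saving only approximates the reported average.

```python
# Sketch of the backup/restore energy model, using the per-2kb
# store/restore energies from Table 1. The live-block count and the
# pre-backup training cost are illustrative assumptions.
STORE_NJ_PER_2KB = 1.714
RESTORE_NJ_PER_2KB = 1.06
BLOCK_BITS = 64 * 8  # one 64B cache block

def backup_restore_energy(blocks):
    per_block = (STORE_NJ_PER_2KB + RESTORE_NJ_PER_2KB) * BLOCK_BITS / 2048
    return blocks * per_block  # nJ

full = backup_restore_energy(512)          # full backup scheme
pbde = backup_restore_energy(253) + 10.0   # live blocks + assumed training cost (nJ)
saving = 1 - pbde / full
print(f"energy saved: {saving:.1%}")
```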

\subsection{Performance \& Storage Overheads}
Performance overheads are mainly caused by the extra cache misses incurred by \emph{false-alarm errors}. Experimental results show that, on average, 13 extra L1 cache misses are introduced if all 512 lines in the L1 cache are filled with data after one backup/restore process. This overhead can be compensated by a fast-read L2 cache such as an STT-RAM based cache. The pre-backup process causes no performance overhead because the training process cooperates with normal cache operations and runs in the background. Moreover, predictions are made by hashing into the predictor table, which is an \emph{O(1)} operation. As for storage overhead, we add 8 PC bits, 1 dead bit, and 1 modified bit to each cache block, and the predictor table has 2K entries of 2 bits each. Hence the total storage overhead is $(8+1+1)\times 512+(2\text{K} \times 2)=9\text{Kb}$.
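The storage-overhead arithmetic above can be checked directly; the sketch below simply restates the counts from the text (10 metadata bits per line, 512 lines, and a 2K-entry, 2-bit predictor table).

```python
# Sketch: storage-overhead arithmetic from the text.
LINES = 512                  # L1 cache lines with added metadata
PER_BLOCK_BITS = 8 + 1 + 1   # PC bits + dead bit + modified bit
TABLE_BITS = 2 * 1024 * 2    # 2K predictor entries x 2 bits each

total_bits = PER_BLOCK_BITS * LINES + TABLE_BITS
print(total_bits / 1024)     # -> 9.0 (Kb), matching the text
```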
