\section{nvSRAM Based L1 Cache Architecture}\label{architecture}
In this section, we detail the design of our dead-block assisted nvSRAM based L1 cache architecture. We first give an overview of the overall framework and then describe the dead-block predictor design. After that, we introduce a novel scheme named \emph{pre-backup} for reducing prediction redundancy. We further propose a block-level redundant store elimination strategy to reduce repeated store operations for live blocks.
\subsection{Overview}

We base our design on a popular existing dead-block predictor, the \emph{Cache burst predictor} proposed in~\cite{liu2008cache}, because cache burst predictors have been demonstrated to tolerate the irregularity of individual references in L1 caches well and to achieve high coverage and high accuracy with little area overhead when used in L1 caches.

\begin{figure}[!hpt]
\centering
\includegraphics[scale = 0.52]{architecture.pdf}
\caption{Structure of the nvSRAM based L1 cache with dead-block prediction}\label{fig4}
\end{figure}

Figure~\ref{fig4} shows the overall structure of our proposed nvSRAM based L1 cache architecture. As the \emph{Cache burst predictor} is a PC-trace based predictor when used in L1 caches, a segment of the PC needs to be added to each cache block to track the PC that last referenced the block. The predictor table, which consists of saturating counters, is used for dead-block prediction. It uses information from the L1 cache and sets \emph{dead bits}, which indicate whether cache blocks are dead or alive. The backup controller generates the signals that enable parallel backup and pre-writeback of each cache block.

\subsection{Dead-block Predictors Design}
A cache burst begins when a block moves into the MRU position and ends when it moves out of the MRU position. The intuition behind the cache burst predictor design is that a cache block is very likely to be dead after a cache burst ends. For each cache block, the last instruction referencing that block is recorded by saving partial bits of the PC into an extra field (the PC field) of the cache block. Statistics show that a cache block accessed by the same PC is highly likely to be dead when it moves from the MRU to a non-MRU position.

To represent a pair of memory address and PC, a signature consisting of the lowest 3 bits of the cache tag and 8 bits of the PC (typically 3-10 bits) is used. When a cache block is evicted, the signature indexes a predictor table with 2K entries and the corresponding 2-bit saturating counter is incremented. This process constitutes \emph{prediction training}. Afterwards, for every cache block in a non-MRU position, a prediction is made by looking up the predictor table and comparing the counter value with a threshold (set to 3 in this work). If the value reaches the threshold, the cache block is predicted to be dead and its dead bit is set. Cache bursts are detected by monitoring the recently-used bit maintained by the LRU replacement algorithm. In general, the dead bit can be computed by the following logic expression:
\begin{equation}\label{equa1}
    Dead \ bit = \overline{VB}\| (\overline{MRU} \& (hash(signature) \ge threshold))
\end{equation}

Note that the valid bit (VB) should be checked first. $\overline{MRU}$ means the cache block is in a non-MRU position. The \emph{pre-writeback} signal is determined by checking dirty bits and is expressed as the following logic formula:
\begin{equation}\label{equa2}
    Pre\mbox{-}writeback\_enable = Dirty \ bit \ \& \ Dead \ bit
\end{equation}
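The training, lookup, and pre-writeback logic above can be summarized as a small Python model. The 2K-entry table, 2-bit saturating counters, threshold of 3, and the dead-bit/pre-writeback expressions follow the text; the class and function names and the concrete index function are ours, for illustration only:

```python
# Simplified model of the cache-burst dead-block predictor described above.
# Signature = lowest 3 bits of the cache tag concatenated with 8 PC bits,
# indexing a 2K-entry table of 2-bit saturating counters (threshold = 3).

TABLE_SIZE = 2048
THRESHOLD = 3

class DeadBlockPredictor:
    def __init__(self):
        self.counters = [0] * TABLE_SIZE  # 2-bit saturating counters

    @staticmethod
    def signature(tag, pc):
        # Illustrative index function: 3 tag bits + 8 PC bits -> table index.
        return (((tag & 0x7) << 8) | (pc & 0xFF)) % TABLE_SIZE

    def train(self, tag, pc):
        # Prediction training: on eviction, increment the saturating counter.
        idx = self.signature(tag, pc)
        self.counters[idx] = min(self.counters[idx] + 1, 3)

    def predict_dead(self, tag, pc, valid, mru):
        # Dead bit = !VB || (!MRU && (counter >= threshold))
        if not valid:
            return True
        return (not mru) and self.counters[self.signature(tag, pc)] >= THRESHOLD

def pre_writeback_enable(dirty_bit, dead_bit):
    # Pre-writeback_enable = Dirty bit & Dead bit
    return dirty_bit and dead_bit
```

In hardware these are simple combinational checks per block; the model only mirrors the logic, not the timing.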

\subsection{Pre-backup Scheme}
As mentioned in Section~\ref{DBP}, based on our problem formulation, we only need to know the dead-block information at the backup point. Hence the prediction algorithm only needs to start some time before the backup point to train the predictor table. We call this the \emph{pre-backup scheme}. A shorter training period reduces the number of updates to the predictor table, which saves power in the prediction modules, but it lowers prediction accuracy because the predictor table is incomplete. Therefore, there is a tradeoff between training time and prediction accuracy.

To analyze the prediction accuracy, we first introduce two types of prediction errors.

\begin{itemize}
\setlength{\topsep}{-1pt}
\setlength{\partopsep}{-1pt}
\setlength{\itemsep}{0pt}
\item \emph{Type-I error}, or \emph{false-alarm error}: a cache block will be referenced after restore, but the predictor predicts it to be dead. In this case, a false-alarm error is incurred.
\item \emph{Type-II error}, or \emph{missing-report error}: a cache block is dead, but the predictor fails to identify it, resulting in a missing-report error.
\end{itemize}

In practice, false-alarm errors are more serious since they cause additional cache misses by discarding live cache blocks at the backup point. A short training period leads to more missing-report errors because the predictor table does not record enough dead-block information to identify dead blocks.
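Given per-block ground truth and predictions, the two error rates and the accuracy metric used below can be computed as follows (the function name and dictionary keys are ours, for illustration):

```python
# Illustrative computation of accuracy, false-alarm rate, and
# missing-report rate from per-block outcomes.
# actual_dead[i]    : True if block i is actually dead at the backup point.
# predicted_dead[i] : True if the predictor marks block i as dead.

def prediction_metrics(actual_dead, predicted_dead):
    total = len(actual_dead)
    # Type-I (false alarm): live block predicted dead.
    false_alarms = sum(1 for a, p in zip(actual_dead, predicted_dead)
                       if not a and p)
    # Type-II (missing report): dead block predicted live.
    missing = sum(1 for a, p in zip(actual_dead, predicted_dead)
                  if a and not p)
    correct = total - false_alarms - missing
    return {"accuracy": correct / total,
            "false_alarm_rate": false_alarms / total,
            "missing_report_rate": missing / total}
```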

\begin{figure*}[!hbt]
\centering
\subfigure[gcc]{\label{fig5a}\includegraphics[height=1.4in, width=2.2in]{traininga.pdf}}
\hspace{2mm}
\subfigure[hmmer]{\label{fig5b}\includegraphics[height=1.4in, width=2.2in]{trainingb.pdf}}
\hspace{1mm}
\subfigure[leslie3d]{\label{fig5c}\includegraphics[height=1.4in, width=2.2in]{trainingc.pdf}}
\caption{Accuracy, missing-report rate and false-alarm rate vs. training length}\label{fig5}
\end{figure*}

Figure~\ref{fig5} shows the prediction accuracy, missing-report rate and false-alarm rate as the training length varies. Training length is defined as the number of recorded accesses. Accuracy is measured as the number of correct predictions (for both dead and live blocks) divided by the total number of blocks in the cache. The subfigures show that prediction accuracy rises and the missing-report rate decreases as the training length grows, and both metrics stabilize once the training length is long enough. Therefore, a limited training length can be chosen to ensure prediction accuracy while reducing energy consumption. Note that the false-alarm rate is close to zero and that the curves fluctuate due to the irregularity of some accesses. In the following analysis, we choose the knee point of each benchmark's curve as the selected training length.

We further analyze the energy efficiency of our proposed pre-backup scheme. Assume that the energy consumed during a backup/restore procedure under conventional full backup is $E_{B/R}$, and that the energy consumed by our proposed partial backup/restore procedure is $E_{B/R}'$. Our design also accounts for the training energy (denoted $E_T$) and prediction energy (denoted $E_P$) overheads. For our proposed PBDE to consume less energy than the full backup strategy, the following inequality must be satisfied:
\begin{equation}\label{equa14}
    E_T+E_P+E_{B/R}'<E_{B/R}
\end{equation}
We observe that $E_P \ll E_T$ because the refresh count and energy of the predictor table are negligible compared with those of the PC fields. Therefore, we can rewrite inequality~(\ref{equa14}) as follows:
\begin{equation}\label{equa15}
    N_aE_{pc}+N_b'E_{b/r}<N_bE_{b/r}
\end{equation}
where $N_a$ is the number of recorded accesses to cache blocks, $E_{pc}$ is the energy consumed by writing a PC field, and $N_b$ and $N_b'$ are the numbers of cache blocks backed up without and with dead-block elimination, respectively. $E_{b/r}$ is the sum of the backup and restore energy per cache block. We can further derive the following inequality:
\begin{equation}\label{equa16}
    N_a<(N_b-N_b') \frac {E_{b/r}}{E_{pc}}
\end{equation}
If this inequality holds, PBDE is more energy efficient than the full backup scheme. Section~\ref{exp} will show that this inequality always holds under the pre-backup scheme.
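The break-even condition above reduces to a one-line check. In the sketch below, the inequality follows the derivation; all numeric values in the usage note are made-up placeholders, not measured results:

```python
# Check of inequality (3): the pre-backup scheme saves energy whenever
#     N_a * E_pc + N_b' * E_b/r  <  N_b * E_b/r,
# i.e.  N_a < (N_b - N_b') * E_b/r / E_pc.
# n_a       : number of recorded accesses (PC-field writes during training)
# n_b       : blocks backed up under full backup
# n_b_prime : blocks backed up with dead-block elimination
# e_br      : backup + restore energy per cache block
# e_pc      : energy per PC-field write

def pbde_saves_energy(n_a, n_b, n_b_prime, e_br, e_pc):
    return n_a * e_pc + n_b_prime * e_br < n_b * e_br
```

For example, with the hypothetical values $N_a{=}1000$, $N_b{=}512$, $N_b'{=}200$, $E_{b/r}{=}10$, $E_{pc}{=}1$ (arbitrary units), the left side is $3000$ against $5120$, so the scheme would save energy.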

\subsection{Block-level Redundant Store Elimination (BRSE)}
A bit-level redundant store elimination (RSE) policy for nvSRAM has been proposed in~\cite{tsai2014leveraging}. RSE compares each data bit in the SRAM and nonvolatile components and writes data into the nonvolatile components only when the old data differs from the data in the SRAM components. However, bit-level RSE requires an AND gate integrated in each nvSRAM cell, which incurs a 10\% area overhead.

\begin{figure}[!hbt]
\centering
\includegraphics[scale = 0.33]{BRSE.pdf}
\caption{Basic principle of BRSE}\label{fig6}
\end{figure}

We leverage the spatial locality of L1 caches to design the block-level redundant store elimination policy. Figure~\ref{fig6} depicts the basic principle of BRSE. Consider a cache block that is predicted to be live and backed up into the nonvolatile components at the first backup point. After restore, if the block is overwritten or replaced by another block before the next backup point, it deserves backup because the data in the SRAM and nonvolatile components is inconsistent, as shown in Figure~\ref{fig6}(a). Otherwise, the backup operation is needless since the block is unmodified, as shown in Figure~\ref{fig6}(b).

To implement BRSE, we add a \emph{modified bit} flag to each cache block. After each restore, the modified bit is reset. If a cache line is overwritten or replaced, the modified bit is set to mark the inconsistency between the SRAM and nonvolatile components. Only one bit per cache block is needed to implement BRSE, so the area overhead is negligible compared with RSE. Based on the above analysis, we can write the backup\_enable signal as the following expression:
\begin{equation}\label{equa13}
    Backup\_enable = \overline{Dead\ Bit}\ \&\ Modified\ Bit
\end{equation}
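The BRSE bookkeeping above amounts to a one-bit state machine per block. The following sketch mirrors that logic (the class and method names are ours, for illustration only):

```python
# Per-block BRSE state: one modified bit, cleared on restore and set on
# any overwrite or replacement.  Backup_enable = !DeadBit & ModifiedBit.

class BRSEBlock:
    def __init__(self):
        self.modified = False  # SRAM vs. nonvolatile copy inconsistent?
        self.dead = False      # dead bit from the dead-block predictor

    def on_restore(self):
        # After restore, SRAM and nonvolatile copies are identical.
        self.modified = False

    def on_write_or_replace(self):
        # The SRAM copy now diverges from the nonvolatile copy.
        self.modified = True

    def backup_enable(self):
        # Back up only live blocks whose data has actually changed.
        return (not self.dead) and self.modified
```

A live block that stays untouched between two backup points thus skips its second backup, which is exactly the redundant store eliminated in Figure~\ref{fig6}(b).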
