\section{Pre-writeback scheme} 
In this section, we introduce our proposed pre-writeback scheme. We first give a motivating example and then describe the algorithm in detail.

\subsection{Motivation of pre-writeback}
Section~\ref{pbackup} shows the mechanism of backup power reduction. Energy storage capacitance can be reduced because the total backup requirement is reduced. However, we notice that the total backup requirement changes over time, mainly because of the variation in the number of dirty blocks. This induces a high overflow rate, which causes backup failures and requires rolling back to the last backup point. Fig.~\ref{pback} shows an example of backup overflow.
\begin{figure}[!hpt]
\centering
\includegraphics[scale = 0.25]{pwback.pdf}
\caption{An example of overflow and pre-writeback mechanism}\label{pback}
\end{figure}
In Fig.~\ref{pback}, we assume that the maximum backup size, namely the backup budget, is 350. At the backup point, the total number of dirty blocks exceeds the backup budget, so a backup failure occurs, as shown by the blue line. The red line represents the change in the number of dirty blocks when the pre-writeback approach is employed. A backup warning threshold is set to limit the maximum number of dirty blocks. When the number of dirty blocks reaches the threshold, a pre-writeback process is triggered: half of the dirty blocks with large RUB are written back to lower-level caches or backed up into the nonvolatile part. They then become clean, and the overall backup load drops below the maximum backup size line, so a successful backup is achieved at the backup point.
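The trigger step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the block identifiers, the RUB map, and the function name are assumptions introduced here for clarity.

```python
# Hypothetical sketch of the threshold-triggered pre-writeback step.
# dirty_blocks: list of dirty block IDs; rub: assumed map from block ID
# to its RUB value; warning_threshold: backup warning threshold.

def pre_writeback(dirty_blocks, rub, warning_threshold):
    """If the dirty count has reached the warning threshold, write back
    the half of the dirty blocks with the largest RUB, making them clean
    and reducing the backup load; return the remaining dirty blocks."""
    if len(dirty_blocks) < warning_threshold:
        return dirty_blocks  # below threshold, no action needed
    # Order dirty blocks by RUB, largest first, and clean the top half.
    ordered = sorted(dirty_blocks, key=lambda b: rub[b], reverse=True)
    to_clean = set(ordered[: len(ordered) // 2])
    # (In hardware, to_clean would be written back or backed up here.)
    return [b for b in dirty_blocks if b not in to_clean]
```

With six dirty blocks and a threshold of five, the call cleans the three blocks with the largest RUB and leaves the other three dirty.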

\subsection{Adaptive pre-writeback algorithm}
Fixed pre-writeback strategies start a pre-writeback process every time the number of dirty blocks reaches the backup warning threshold $B_{th}$. However, this introduces extra writeback operations whenever a pre-written-back cache block is overwritten again before eviction. The extra writeback overhead is especially serious when the power input is stable. Consider a sensor node running the following task periodically: 100\,Hz data sampling and a 100-point FFT followed by data transmission (1\,s+). If the average power-on time is more than 1\,h (e.g., a stable solar power input), then with the backup warning threshold set to 300 for a cache with 512 blocks, triggering pre-writeback every time the threshold is reached introduces a noticeable performance overhead from these extra writebacks. Therefore, we should carefully design an adaptive pre-writeback algorithm that considers the average task processing time and the power-off frequency.

\input{pbackalg}
Algorithm~\ref{alg2} is an online adaptive pre-writeback algorithm. In this adaptive policy, the predicted power-on time and the average task processing time are each obtained as a weighted sum of the last observed power-on time/processing time and the previous prediction, as shown in lines 4 and 8 of Algorithm~\ref{alg2}. In practice, $\alpha$ and $\beta$ are both set to 0.5. This policy is able to track the variation of the processing time and the average power-on time. When the backup warning threshold $B_{th}$ is reached, a pre-writeback process is triggered only if $T_{proc}>T_{on}$, which means that at least one power-down is predicted to occur during the following task.
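The prediction update and the trigger condition of this adaptive policy can be sketched as follows. Function and variable names are assumptions introduced here; the weights 0.5 for $\alpha$ and $\beta$ follow the text above.

```python
ALPHA = 0.5  # weight for the power-on time prediction (set to 0.5 in practice)
BETA = 0.5   # weight for the task processing time prediction

def update_prediction(prev_prediction, last_observed, weight):
    """Weighted sum of the last observed value and the previous
    prediction, as used for both T_on and T_proc."""
    return weight * last_observed + (1 - weight) * prev_prediction

def should_pre_writeback(num_dirty, b_th, t_proc_pred, t_on_pred):
    """Trigger pre-writeback only when the dirty-block count has reached
    B_th AND the predicted processing time exceeds the predicted
    power-on time, i.e., at least one power-down is expected to occur
    during the following task."""
    return num_dirty >= b_th and t_proc_pred > t_on_pred
```

For example, with a previous prediction of 2\,s and a last observed value of 4\,s, the updated prediction is $0.5 \cdot 4 + 0.5 \cdot 2 = 3$\,s; pre-writeback then fires only when the dirty count reaches $B_{th}$ and $T_{proc}>T_{on}$.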
 