\section{Evaluation}\label{sec:eval}

We evaluated \sys on the applications described in Section~\ref{sec:apps},
comparing it to two baseline systems.  The first baseline is
Mementos~\cite{mementos}, the state-of-the-art research system for dealing with
intermittent power, which periodically checkpoints execution context and stack
memory but does not checkpoint heap state and does not support
application-specific recovery.  The second baseline is the current state of
practice in embedded programming: no runtime support at all for preserving
execution state across failures.

Rather than bind our evaluation to a particular energy-harvesting or
intermittent-power modality, we emulated intermittent power using a separate
Arduino Uno.  We programmed its microcontroller to intermittently raise a pin
that served as the input power to the system under test.  Using that setup, we
experimented with intermittent power traces that followed several different
profiles.\xxx{what power traces should we use?}  We ran \sys and each baseline
system with the same total emulated energy supply and the same intermittent
power trace.
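The trace logic the Arduino follows can be sketched as below.  This is a
minimal, illustrative reconstruction in plain C, not our actual firmware: the
constants and the function name are ours, and the hardware calls are shown only
as comments.

```c
#include <stdlib.h>

/* Sketch of the emulated power schedule: hold the power pin high,
 * interrupting it at random intervals, until a fixed energy budget
 * (total pin-high time) is spent.  Times are in milliseconds; all
 * names and constants here are illustrative. */
#define BUDGET_MS  60000   /* total pin-high time to deliver */
#define OFF_MS      1000   /* duration of each interruption  */
#define MIN_ON_MS   1000   /* shortest on-interval           */
#define MAX_ON_MS  10000   /* longest on-interval            */

/* Returns the number of on-intervals needed to spend the budget. */
static int run_power_trace(unsigned seed) {
    srand(seed);
    long delivered = 0;
    int cycles = 0;
    while (delivered < BUDGET_MS) {
        long on = MIN_ON_MS + rand() % (MAX_ON_MS - MIN_ON_MS + 1);
        if (delivered + on > BUDGET_MS)
            on = BUDGET_MS - delivered;      /* cap at the budget */
        /* digitalWrite(PWR_PIN, HIGH); delay(on);  -- on hardware */
        delivered += on;
        if (delivered < BUDGET_MS) {
            /* digitalWrite(PWR_PIN, LOW); delay(OFF_MS); */
        }
        cycles++;
    }
    /* the pin is left low once the budget is spent */
    return cycles;
}
```

With a 60-second budget and on-intervals of 1--10 seconds, any trace consists
of between 6 and 60 on-intervals.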

We evaluated the benefits of \sys in our benchmark applications along three
dimensions: {\em progress}, {\em error}, and {\em overhead}.

We quantified {\em progress} by measuring how much useful computation each
system performed.  For the AR benchmark, we measured progress as the number of
accelerometer time series that were featurized and classified.

We quantified {\em error} by measuring the deviation in the program's output
that was due to nv-internal and nv-external inconsistency.\xxx{are we going to
define these?}  To measure error, we instrumented our systems to record when a
failure violated the atomicity of a set of non-volatile memory accesses
(nv-internal inconsistency) and when the execution context, non-volatile state,
and execution environment became mutually inconsistent (nv-external and
environmental inconsistency).  For the AR benchmark, we report the error as the
deviation between the sum of the per-class classification counts and the
reported total number of classifications.
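Concretely, the AR error metric can be computed as the relative discrepancy
between the reported total and the sum of the per-class counts.  The helper
below is an illustrative sketch under that definition; the function and
parameter names are ours, not the benchmark's.

```c
#include <stdlib.h>

/* AR error metric (illustrative): the percentage by which the reported
 * total classification count disagrees with the sum of the per-class
 * counts.  With no atomicity violations, the two agree and the error
 * is 0. */
static double ar_error_pct(long total, long walking, long stationary) {
    long per_class_sum = walking + stationary;
    if (total == 0)
        return 0.0;
    return 100.0 * labs(total - per_class_sum) / (double)total;
}
```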

We quantified \sys's {\em overhead} by \xxx{How do we do this?}


\subsection{Detailed Evaluation Procedure for AR}

\paragraph{Experimental Setup}
An Arduino Uno provides power to the system under test over a 5V GPIO pin, for
a total of 60 seconds of pin-high time.  The Arduino interrupts the pin-high
time at random intervals of between 1 and 10 seconds; each interruption lasts 1
second, after which power is restored.  Once 60 seconds of power have been
delivered, the pin is set low.

The system under test is an MSP430FR5969 running AR.  The treatment
configuration runs with DINO recovery, which versions the total and per-class
counts before they are incremented, because the total and per-class counts must
be updated atomically.  The recovery routine for the count-update task reverts
\texttt{totalCount} and the appropriate per-class count to their versioned
values, so that re-execution atomically updates them from their original
values.  The base configuration runs without DINO recovery: when the atomicity
of the updates is violated, the values are not reverted, leaving
\texttt{total} $\neq$ \texttt{stationary} $+$ \texttt{walking}.
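The versioning scheme described above can be sketched as follows.  This is our
illustrative reconstruction in C, not DINO's actual implementation; plain
globals stand in for FRAM-resident non-volatile state, and all names are
hypothetical.

```c
#include <string.h>

/* DINO-style versioning for the AR count update (sketch).  On the
 * MSP430FR5969 these counts live in non-volatile FRAM; here plain
 * globals stand in for them. */
#define NUM_CLASSES 2                 /* walking, stationary */
static long totalCount;
static long classCounts[NUM_CLASSES];

/* Versioned copies, taken at the task boundary before the update. */
static long totalCount_v;
static long classCounts_v[NUM_CLASSES];

static void version_counts(void) {    /* runs at the task boundary */
    totalCount_v = totalCount;
    memcpy(classCounts_v, classCounts, sizeof classCounts);
}

static void recover_counts(void) {    /* runs on reboot after a failure:  */
    totalCount = totalCount_v;        /* revert to the versioned values so */
    memcpy(classCounts, classCounts_v, sizeof classCounts);
}                                     /* re-execution updates atomically   */

static void count_update_task(int class_id) {
    version_counts();
    totalCount++;              /* a power failure between these two writes */
    classCounts[class_id]++;   /* would otherwise leave total != sum       */
}
```

Re-executing the task after recovery repeats both increments from the versioned
values, so a failure between the two writes cannot leave the total and the
per-class counts inconsistent.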

To exercise the application, we placed the setup on a tray and walked around
with it.  We reset the Arduino and the MSP430, then alternately walked for 5
seconds and paused for 5 seconds until the Arduino exhausted its power budget.
After the 60 seconds of Arduino-moderated power elapse, the MSP430 is allowed
to run until it has taken 500 samples, then it hangs in a loop awaiting reset.
This limits the execution to a known quantity of classifications before we
connect a debugger and review the results.

Our hypothesis is that the error with DINO will be lower than the error without
DINO; higher error without DINO would indicate that atomicity violations
manifested and corrupted the results.

\xxx{How do we measure repeat executions of the atomic region?}

\paragraph{Results}

%DINO     totalCount | walkingCount | stationaryCount
%Trial 1:        500 |           99 |             401

%No DINO  totalCount | walkingCount | stationaryCount
%Trial 1:        500 |           99 |             401

\begin{table}
\begin{tabular}{l | c}
Configuration & Error (\%) \\ \hline
DINO & 0 \\
Base & \xxx{} \\
\end{tabular}
\caption{Comparing the error resulting from atomicity violations under
continuous checkpointing without DINO and with DINO.}
\end{table}
