\newcount\draft\draft=1 % set to 0 for submission or publication
\newcount\cameraready\cameraready=0

\ifnum\cameraready=0
\documentclass[10pt,preprint,nocopyrightspace]{sigplanconf}
\else
\documentclass[10pt]{sigplanconf}
\fi

\ifnum\draft=1
  \input{revision}
  \usepackage{drafthead}
\fi
\usepackage{xxx}

\usepackage{graphicx}
\DeclareGraphicsExtensions{.pdf,.jpg,.png}
\graphicspath{{./figs/}}

\usepackage{listings}
\lstset{
    language=C,
    emphstyle=\bfseries,
    basicstyle=\ttfamily\small,
    aboveskip=1mm plus 1mm minus 1mm,
    belowskip=1mm plus 1mm minus 1mm,
    mathescape=true,
    xleftmargin=\parindent,
}
\newcommand{\lil}{\lstinline}

\usepackage{xspace}
\newcommand{\sys}{DINO\xspace}
\newcommand{\sysexpand}{Death Is Not an Option}
\hyphenation{DINO}

\usepackage{amsmath}

\title{\sysexpand: Making Power-Failure-Prone Systems Programmable}
%\author{\begin{tabular}{c c}Brandon Lucia & Ben Ransford \\ Microsoft Research & University of Washington\end{tabular}}
\authorinfo{Anonymous for Submission}{}{}
\date{}

\newcommand{\term}[1]{\emph{#1}}

\begin{document}

\special{papersize=8.5in,11in}
\setlength{\pdfpageheight}{\paperheight}
\setlength{\pdfpagewidth}{\paperwidth}

\maketitle

\abstract{}

\section{Introduction}
\xxx{is IoT too buzzy?}
Emerging applications like the ``internet of things'', wearable computing, and
implantable medical devices are gaining in importance.  The devices used in
such applications are typically small microcontrollers or general-purpose CPUs.
Traditionally, such devices require a fixed power source, such as a battery or
a wired power connection, to operate.  Recent work has shown, however, that it
is possible to power devices by harvesting energy from the environment.
Harvested energy is much less abundant than the energy available in a battery,
but energy-harvesting devices dispense with the complexity, size, and weight of
a battery and charging circuit.  Energy-harvesting devices can draw energy from
a variety of sources, including radio-frequency energy~\xxx{CITE CRFID Paper},
mechanical energy~\xxx{piezo harv cite}, and thermal gradients~\xxx{thermal cite}.


An important challenge in designing an energy-harvesting system is that
devices are {\em transiently powered}.  Harvested energy may be adequate to
perform some computation, but the energy supply is inconsistent and its
availability is unpredictable.  If the energy supply becomes insufficient, a
transiently powered device is forced to power down.  When energy is again
available, computation resumes.  Transiently powered devices experience
frequent power failures when energy is not consistently available, so we call
such devices ``failure-prone''.  On most systems today, failure-prone devices
resume computation after a failure at the beginning of the program.
Additionally, when a failure-prone device experiences a power failure, the
execution context and all volatile state ({\em i.e.}, registers and main
memory) are lost, while all non-volatile state ({\em e.g.}, flash, FRAM)
retains its contents.

Most applications of such devices are limited to short-running computations
that do not need to retain consistent state across power failures, like storing
sensor values or performing user-interface actions~\cite{peppermill,ingen}.
Enabling long-running computations that retain consistent state across
failures opens the door to a large class of new applications, like machine
learning, vision, and analysis of time-series data.  Unfortunately, today's
transiently powered systems lack support for writing and executing long-running
applications that must preserve consistent program state for their duration.

Motivated by that lack of programming and execution model support, the problem
we address in this work is that {\em it is difficult and complex to
write long-running programs for failure-prone devices.}  There are four main
reasons why writing such programs is hard:

\begin{itemize}
\item{{\bf Control.} Restarts prevent progress unless the execution context is
preserved. With the context preserved, restarts act as implicit control flow from
failure points to restart points, which are hard to reason about.}

\item{{\bf Volatility.} Preserving volatile state requires programmer effort
and potentially also runtime support~\cite{mementos,quickrecall}.}

\item{{\bf Persistence.} Non-volatile state updated at one point in an execution may become
inconsistent with volatile state preserved at a different point, requiring the
programmer to reason about and ensure their consistency.}  

\item{{\bf Environment.} The execution environment can change between failure
and restart, potentially leaving program state ({\em e.g.}, sensor calibration)
inconsistent with the environment.}
\end{itemize} 
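To make the persistence challenge concrete, the following simulated sketch
(plain C with illustrative names, not \sys code) rolls volatile state back to
a checkpoint while a non-volatile update survives, leaving the two
inconsistent:

\begin{lstlisting}
/* Simulation: volatile state is restored from a checkpoint
   after a "power failure"; the non-volatile counter is not. */
#include <assert.h>

typedef struct { int sample_count; } vol_state_t;

int nv_total = 0;        /* survives failures (e.g., FRAM) */
vol_state_t vol = {0};   /* volatile; restored from checkpoint */
vol_state_t checkpoint;

void take_checkpoint(void) { checkpoint = vol; }
void power_failure(void)   { vol = checkpoint; }

int main(void) {
  take_checkpoint();      /* captures sample_count == 0 */
  vol.sample_count += 1;  /* volatile update */
  nv_total += 1;          /* non-volatile update */
  power_failure();        /* volatile rolls back; NV does not */
  assert(nv_total == 1 && vol.sample_count == 0);
  return 0;
}
\end{lstlisting}

After recovery, \lil{nv_total} reflects an iteration that the restored
volatile state says never happened.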

In this work, we propose \sys (\emph{\sysexpand}), a new programming and
execution model that addresses these challenges.   \sys provides a programming
model in which programmers can insert task boundaries to subdivide long-running
computations into shorter tasks.  Tasks have atomic semantics and the program's
state at a task boundary is guaranteed to be consistent with the completed
execution of the preceding task.  Each task has a recovery routine associated
with it.  Programmers can use recovery routines to implement
application-specific operations that check that the pre-conditions of a task
are satisfied and, if they are not, take action to satisfy them.  The \sys
execution model supports its programming model.  At task boundaries, \sys
stores a checkpoint of volatile program state.  When the device resumes
execution after a task is interrupted by a power failure, \sys restores
volatile state from the checkpoint, then executes the recovery routine
associated with the interrupted task.  After the recovery routine executes,
the task resumes from its start point.

\xxx{Draw a picture that shows how DINO solves the four problems.  I want a figure on the first page!}

The \sys programming and execution models address the four challenges
described above.  Task atomicity means programmers do not need to reason about
preserving execution context and volatile state.  Task-atomic execution makes
it simple to reason about the implicit control flow that results from a
failure: all implicit control flow simply returns to the task boundary of the
executing task.  Per-task recovery routines provide a regular, familiar,
exception-handling-like interface for checking and enforcing environment and
state consistency conditions.  \sys simplifies programming by
providing the following simple guarantee: {\em At a task boundary, program
state is consistent with the completion of the boundary's preceding task and
with the conditions specified in the task's recovery routine}.


To summarize, this work makes the following contributions:
\begin{itemize}
%\item{We articulate four key challenges to creating stateful, long-running,
%applications on energy harvesting devices.}

\item{We present the \sys task-atomic programming model with
application-specific recovery support that addresses those challenges.}

\item{We present the \sys execution model that uses checkpointing and runtime
recovery support to implement the \sys programming model.}

\item{We build a working prototype implementation of \sys, including a compiler
and runtime system that runs on popular embedded, energy harvesting platforms.}

\item{We evaluate \sys and show that it is amenable both to implementing new
applications and to porting legacy applications.  We study applications like
activity recognition and environmental context analysis, showing that \sys
provides its guarantees effectively and efficiently.}

\end{itemize}

The remainder of this paper is structured as follows.
Section~\ref{sec:background} describes energy harvesting, failure-prone
devices, and the programming challenges they pose in more detail.
Section~\ref{sec:idea} describes the \sys programming and execution model.
Section~\ref{sec:design} presents the \sys design in more detail and discusses
our prototype implementation.  Section~\ref{sec:eval} evaluates the
correctness, programmability, and efficiency of \sys.
Section~\ref{sec:related} contrasts \sys with prior work, and
Section~\ref{sec:conc} lays out the main conclusions of this work and future
research directions.


\section{Background and Key Challenges}

The motivation for this work is the need to address the four key challenges
faced when writing long-running, stateful applications for transiently-powered
devices.  In this section, we elaborate on those challenges and use examples to
show the kinds of correctness problems that they can cause.  Before describing
those challenges, however, we briefly describe mechanisms from foundational
prior work on reliability for transiently powered devices that we use as a
starting point~\cite{mementos, quickrecall} for our design.

\subsection{Background: Continuous Checkpointing}

Recent work on \term{continuous checkpointing}~\cite{mementos, quickrecall}
took the first steps toward enabling transiently powered devices to execute
long-running workloads.  Continuous checkpointing periodically saves the
execution context to persistent storage to prevent it from being lost when
power fails.  When a device resumes execution after a power failure, it
resumes from the saved execution context rather than from the start of the
program.  Continuous checkpointing is a step in the right direction:
preserving the volatile execution context enables long-running applications
to run from checkpoint to checkpoint, despite interruptions due to power
failures.
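The mechanism can be sketched in a few lines of C (a software simulation with
made-up names, not the implementation of~\cite{mementos}): the loop index
stands in for the execution context, and a failure discards everything except
what was checkpointed.

\begin{lstlisting}
#include <assert.h>

int nv_checkpoint_i = 0;  /* persisted execution context */
int work_done = 0;

void run(int fail_at) {
  for (int i = nv_checkpoint_i; i < 10; i++) {
    if (i % 3 == 0) nv_checkpoint_i = i; /* checkpoint */
    if (i == fail_at) return;  /* simulated power failure */
    work_done++;
  }
  nv_checkpoint_i = 10;
}

int main(void) {
  run(7);   /* fails at i == 7; last checkpoint at i == 6 */
  run(-1);  /* reboot: resumes from i == 6, not i == 0 */
  assert(nv_checkpoint_i == 10);
  assert(work_done == 11); /* i == 6 ran twice: work between
                              checkpoint and failure repeats */
  return 0;
}
\end{lstlisting}

Note that the work between the last checkpoint and the failure is re-executed,
which is harmless here but is exactly what makes the challenges below hard.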


\subsection{Key Challenges}

Building stateful, long-running applications for transiently powered devices presents four key challenges.  

\subsubsection{Control} 

In the absence of system support, a device loses its execution context when
the power fails.  When power is restored, execution resumes at the start of
the program -- {\em i.e.}, the beginning of {\tt main()}.  We take a view of
execution in which these restarts do not mark the end of one execution and the
beginning of another; instead, an execution spans many such interruptions.
With that view, a power failure signifies control flow from the point in the
execution where the failure occurs to the beginning of {\tt main()}.


\subsubsection{Volatility} Preserving volatile state requires programmer
effort and potentially also runtime support~\cite{mementos,quickrecall}.

\subsubsection{Persistence} Non-volatile state updated at one point in an execution may become
inconsistent with volatile state preserved at a different point, requiring the
programmer to reason about and ensure their consistency.

\subsubsection{Environment} The execution environment can change between
failure and restart, potentially leaving program state ({\em e.g.}, sensor
calibration) inconsistent with the environment.
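As a concrete, simulated example of this challenge, consider a stored sensor
baseline that outlives a long outage (the values are made up for
illustration):

\begin{lstlisting}
#include <assert.h>

int nv_baseline = 20;  /* calibrated ambient value, persisted */

int anomaly(int reading) { return reading - nv_baseline > 5; }

int main(void) {
  /* before the failure, the baseline matches the environment */
  assert(!anomaly(22));
  /* after a long outage the ambient value has risen to 28,
     but nv_baseline is unchanged */
  assert(anomaly(28));  /* flagged, though 28 is now normal */
  return 0;
}
\end{lstlisting}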










Even assuming state-of-the-art support for continuous checkpointing, several
key challenges arise when programming transiently powered devices or
reasoning about their execution.  First, with or without continuous
checkpointing, programmers are forced to reason about {\em non-linear
control flow} stemming from asynchronous power failures.  Programmers must
consider that control can flow from any arbitrary point where a failure occurs
to any other arbitrary point where a dynamically determined checkpoint was
taken.  Second, non-linear control flow is complicated by the fact that not
all program state is contained in a checkpoint -- heap and global data may
be stored in volatile memory~\cite{mementos} or in non-volatile
memory~\cite{quickrecall}, neither of which is checkpointed.  On recovery to a
checkpoint, heap data in volatile storage are lost.  Heap or global data in
non-volatile storage that were updated between when the checkpoint was saved
and when the failure occurred are inconsistent with the state saved in the
checkpoint.  In either case, heap data, global data, or both are unusable after
checkpoint recovery, and accesses to such data have undefined semantics.
Third, if an application interacts with its environment ({\em e.g.}, through
sensors or I/O), those interactions may be executed repeatedly as power fails
and execution resumes.  Repeated execution of those interactions may require
application-specific recovery actions to ensure the execution remains correct.
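The third problem can be made concrete with a small simulation (illustrative
names only): a non-idempotent operation that executes between the checkpoint
and the failure is executed again on recovery, duplicating its external
effect.

\begin{lstlisting}
#include <assert.h>

int packets_sent = 0;  /* externally observable effect */
int nv_done = 0;

void send_packet(void) { packets_sent++; } /* non-idempotent */

void task(int fail) {
  /* a checkpoint at the top of this function means recovery
     restarts the task from here */
  send_packet();
  if (fail) return;   /* power fails after the send */
  nv_done = 1;        /* task completed */
}

int main(void) {
  task(1);            /* send, then fail */
  task(0);            /* recover: send again */
  assert(packets_sent == 2 && nv_done == 1);
  return 0;
}
\end{lstlisting}

The checkpoint faithfully restores volatile state, yet the receiver still sees
the packet twice.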

%\section{Background: Failure-prone Systems and the Challenges They Pose}
%
%\xxx{need to describe each of the Problems described above in more detail, each
%with a figure.  Then, maybe we should describe Mementos and QuickRecall.}
%
%In this work we address the challenges posed by {\em failure prone systems},
%like ones that harvest energy.  We ground our work in an execution model with a
%single execution context that proceeds sequentially through program statements.
%Without losing generality, our execution model assumes an architecture with
%registers, stack-based call convention, stack storage, and additional global
%variable and heap memory.   
%
%We study failures in these systems using a model in which failures can occur at
%arbitrary points during an execution.   In our model, any step forward in the
%program either steps to the next sequential statement or results in a {\em
%reboot}.      A reboot has three effects: (1) it resets program control flow to
%resume at the program's entry point; (2) it invalidates all volatile memory
%locations making them inaccessible; (3) it does not clear non-volatile memory
%locations, so they retain their state. Each of these effects complicates
%programming and makes it difficult to reason about program semantics.   
%
%
%\subsection{Challenges of Failure Prone Systems \& Coping Strategies} 
%
%
%{\bf TODO: Add a formalism that describes state and control invalidations
%using havoc and longjmp-like semantics?}
%
%\paragraph{Control-flow Resets.} When the system repeatedly reboots, the
%resultant resetting of control flow to the program's entry point {\em obstructs
%forward progress}. 
%
%{\noindent \bf Coping Strategies:} The programmer may attempt to cope with
%control resets by either making execution fully idempotent or by manually
%tracking an execution's progress and manually resuming to those points after
%reboots.  Pervasive idempotence requires computation to be free of side effects
%({\em e.g.}, I/O, stores to non-volatile memory) and ensuring
%side-effect-freedom is difficult.  Manually tracking progress requires
%programmers to write code that periodically records which statements have been
%executed.  Tracking code must record digests of progress to non-volatile
%storage and that code must itself be robust to reboots.  On a reboot, a
%programmer must then resume execution at some preserved point.  Manually
%resuming and recovering state increases control-flow complexity, which makes it
%more difficult to understand the program and verify its correctness.
%
%\paragraph{Invalid Volatile State.} By default, invalidating volatile
%program state requires a program to recompute all values after a reboot.  If a
%programmer adds code to manually resume a partial execution after a
%reboot, state invalidations can lead to {\em undefined semantics} if the
%resumed execution accesses invalid volatile state.  
%
%{\noindent  \bf Coping Strategies:} The programmer may attempt to cope with
%state invalidations and prevent undefined semantics by preserving some data in
%non-volatile storage as the program executes.  The programmer must decide which
%data to preserve so that no invalid state is accessed after a reboot and they
%must preserve it at points in the program where execution may resume.  Deciding
%which data to preserve is difficult because, from one program point, complex
%control flow can obscure which data will be important at other, future program
%points. As with manually tracking progress, code to preserve state is
%complicated by the need to be robust to reboots.
%
%\paragraph{Unusable Non-volatile State.} 
%
%Data in non-volatile storage is preserved across reboots, but may become {\em
%unusable} when execution resumes.   When execution resumes after a reboot, the
%environment in which the program is executing may have changed.  Non-volatile
%data stored in the environment before the reboot may be subject to implicit
%correctness constraints that no longer hold in the new environment.  
%
%{\bf TODO: better example?} Figure~\ref{fig:unusablenvstate} illustrates how
%unusable non-volatile state can be problematic.  The figure shows a simple
%program that takes temperature readings and identifies anomalies -- a
%simple harvested-energy application.  Sensor data are initially stored in
%volatile memory, but the rolling average value, used to detect anomalies, is
%preserved in non-volatile state.  The programmer has included code to manually
%recover execution to the anomaly detection routine (not listed).
%
%The execution shown gathers temperature readings, but suffers a power failure
%and reboots.  Execution then resumes to the anomaly detection loop, but the
%temperature of the execution environment has changed. The readings after the
%reboot, though normal, differ from the stored rolling average.  The difference
%leads to the system flagging a false anomaly, which is incorrect.  There is an
%implicit correctness constraint, imposed by the environment: stored rolling
%average values remain relevant over short time spans, but should not be used
%for anomaly detection over larger time spans.  That constraint means that when
%power fails only briefly, non-volatile data remain usable, but longer power
%failures render data unusable. 
%
%
%{\noindent \bf Coping Strategies:} The programmer may attempt to cope with
%unusable data in non-volatile storage by manually associating metadata with
%non-volatile data.  Metadata can explicitly record otherwise implicit,
%application-specific data usability constraints, like the temporal constraint
%on the usability of the rolling average temperature in
%Figure~\ref{fig:unusablenvstate}.  These constraints are not necessarily
%temporal.  Instead, any environmental change ({\em e.g.}, changes in sensor or
%device state, location changes, change of user) can affect the usability of data across reboots.
%The programmer can check metadata as the program executes to ensure that
%non-volatile data remain usable.  These metadata checks are fundamentally
%application specific and programmers must implement them {\em ad hoc} at each
%use of the associated data.  At each such check, the programmer must reason
%about when failures might occur and how the environment might change.  That
%reasoning is difficult and adds complexity to
%the program's control flow.
%
%
%\subsection{Continuous checkpointing helps, but is insufficient.}
%
%One strategy to guard against power failures is to checkpoint state at
%specified time intervals or at statically determined program points (e.g., when
%a function returns)~\cite{mementos}.  We call this strategy \term{continuous
%checkpointing}.
%
%A run-time mechanism periodically collects non-volatile checkpoints of
%execution context---registers, the stack, and global variables.  After a
%reboot, the most recent complete checkpoint is reloaded and execution continues
%from that context.  Continuous checkpointing partially addresses the three main
%challenges posed by reboots.  
%
%\paragraph{Unusable non-volatile state.}
%
%Continuous checkpointing does not help with the unusable non-volatile state
%problem.  The application-specific correctness constraints on the usability of
%non-volatile data are orthogonal to checkpointing.
%
%\paragraph{Control-flow resets.}
%Continuous checkpointing helps address the control-flow reset problem by
%resuming after a reboot to a checkpointed execution context.  An important
%advantage of continuous checkpointing is that it can collect checkpoints at any
%point in the execution, without requiring the programmer to explicitly identify
%these points.
%
%%\xxx[BL]{Problem of note not mentioned: dealing with mixed heap, stack, and non-volatile state.  See comment in tex file for snippet illustrating problem}
%
%
%%nonvolatile or heap int x;
%%
%%void foo(){
%%
%%  x = 17;
%%
%%  for(...){
%%
%%    y = x; //Read X
%%
%%    x = y * 1000; //Read X, Write X
%%
%%    y = another_value(x); //If this func experiences a power failure....
%%
%%  } 
%%
%%}
%%
%%
%%x = 10
%%
%%for(...){ //checkpoint here
%%
%%  y = x;  //y <- 10
%%
%%  oldx = x
%%  
%%  x = y * 1000; // x <- 10000
%%
%%  y = another_value(x) //FAILS
%%
%%  y = x; //IF NV: y <- 100000 --a never seen value!!!  IF HEAP: y <- garbage
%%
%%  x = oldx
%%
%%  y = another_value(x) // computes new value using 100000/garbage, not 10000
%%
%%}
%%
%%//The program produces the wrong value for y at the end.
%%
%%x = 10
%%
%%for(...){ //checkpoint here
%%
%%  y = x;  //y <- x.initialvalue 
%%
%%  oldx = x //oldx <- x.initialvalue
%%  
%%  x = y * 1000; // x <- 10000
%%
%%  y = x; //IF NV: y <- 100000 --a never seen value!!!  IF HEAP: y <- garbage
%%
%%  x = oldx
%%
%%}
%
%
%
%%\xxx{aroma of stop-the-world GC-ish problem---sometimes the program may pause
%%to checkpoint when the programmer does not expect it}
%
%The main drawback of continuous checkpointing is that after a reboot, the
%program's semantics are ambiguous.  The program may resume at any statement
%between the point of the failure and the previous checkpoint. 
%
%%\xxx[br]{I am not
%%sure what you mean by the previous two sentences.  The point of storing
%%checkpoints that capture program context is that you can resume from precisely
%%that context, including the PC, registers, and so on.  It \emph{is} true,
%%however, that the \emph{environment} may have gotten out of sync with the
%%execution context\dots}  
%
%%\xxx[bl]{The main issue is that a failure-then-checkpoint can jump to a
%%non-deterministic point earlier in the execution.  At a given point in the
%%execution, the recovery point for the most recent checkpoint is a
%%non-deterministic point because it depends on the energy availability of a
%%particular execution.  Non-deterministic semantics are ambiguous -- varying
%%from one execution to the next.  However, they are not always problematic.  If
%%the code is idempotent -- e.g., accesses only stack/reg/glob and does no I/O or
%%NV stuff, all is well.   When code is non-idempotent -- e.g., accesses NV state
%%or does I/O -- or mixes heap and other accesses, the semantics are ambiguous or
%%wrong.  If I/O, then the I/O is repeated, devices might get weird, users might
%%get peeved, data might get wrong.  If NV state, later values might persist and
%%be used earlier in the code, where the checkpoint recovers.  The NV problem is
%%addressed separately later in this section.  If heap vars, those vars will have
%%no values on recovery, causing wrong computation. }  
%
%Non-idempotent code may (or may not!) be re-executed, which is problematic.  At
%a cost in complexity, a programmer may deal with non-idempotent code in a
%continuous checkpointing system by manually tracking and guarding the execution
%of idempotent operations.  The ambiguous semantics of continuous checkpoints
%also makes static analysis difficult or impossible.  Resuming a checkpoint is
%effectively a non-linear flow of control, from the point where power failed, to
%the point where the checkpoint resumed.  The failure and resume points are not
%statically identifiable, so each statement must be treated as both a potential
%control-flow source and target.  Such pervasive non-linear control flow renders
%many static analyses ineffective.  
%
%\paragraph{Invalid volatile state.}
%Continuous checkpointing helps address the invalid volatile state problem by
%preserving execution context -- including register, stack, and global state --
%across reboots.  The main benefit of preserving the execution context is that
%data in the checkpointed execution context are valid, so the semantics of
%accesses to those data after a reboot are defined. 
%
%A main drawback of continuous checkpointing is that it uniformly captures
%register, stack, and global state.  This drawback manifests in two ways.
%First, data stored in heap variables is lost.  After a reboot, accesses to heap
%variables have undefined semantics, unless the programmer reasons about how
%checkpoints are collected and recomputes those variables.   Second,
%checkpointing costs energy proportional to the amount of data checkpointed and
%all data in the execution context may not be necessary.  Continuous
%checkpointing may waste energy preserving data that is not used after a reboot,
%like local variables that are popped at a function return.


\section{DINO: Programmable Robustness to Power Failures} 

\subsection{A Power-Failure-Tolerant Programming Model}
\xxx[bl]{be sure to define "failure-prone" and "failure-robust" earlier}

DINO is a programming and execution model that makes failure-prone systems
failure-robust.  The DINO programming model simplifies writing programs for
failure-prone devices by providing well-defined, intuitive program semantics in
the presence of power failures and structured recovery mechanisms.  The DINO
execution model implements the DINO programming model, ensuring that
executions exhibit predictable program behavior and supporting recovery.

\subsubsection{Programming Model}
We assume a C-like base language for programming failure-prone devices.  This
assumption is reasonable because embedded applications are often developed in
C-like languages and, additionally, our results are likely to generalize to other
imperative base languages like C\#/.Net or Java.  

DINO adds two main features to the programming model.  The first is {\em
task-based execution}.  A task is a region of code demarcated by the
programmer.  The language semantics for tasks are that a task executes
atomically, even if it is interrupted by a power failure.  The programmer
defines tasks by inserting {\em task boundaries} into the program.  All
instructions in a program execute within a task, and boundaries are
``two-sided'': each boundary marks both the end of the task comprising the
code before it and the beginning of the task comprising the code after it.
Two-sided boundaries imply that tasks do not nest.  Each task boundary is
associated with a globally unique identifier.
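The sketch below shows how a programmer might demarcate tasks.  The
\lil{DINO_task_boundary()} name is hypothetical and stands in for whatever
boundary syntax the implementation provides; here it is a no-op placeholder
for ``checkpoint volatile state here.''

\begin{lstlisting}
#include <assert.h>

#define DINO_task_boundary() ((void)0)  /* placeholder */

int nv_sum = 0;  /* non-volatile */
int nv_len = 0;  /* non-volatile */

void log_sample(int s) {
  DINO_task_boundary();  /* ends prior task, begins task 1 */
  nv_sum += s;           /* task 1: update the running sum */
  DINO_task_boundary();  /* ends task 1, begins task 2 */
  nv_len += 1;           /* task 2: update the sample count */
  DINO_task_boundary();  /* ends task 2 */
}

int main(void) {
  log_sample(5);
  log_sample(7);
  assert(nv_sum == 12 && nv_len == 2);
  return 0;
}
\end{lstlisting}

If power fails inside task 1, recovery resumes at the first boundary and, under
the task-atomicity guarantee, the update to \lil{nv_sum} is applied exactly
once.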

The second feature DINO adds to the programming model is {\em programmable
recovery blocks}.  Each DINO task is associated with a recovery block.  If a
task fails to execute atomically -- {\em e.g.}, if power fails -- the task's
recovery block executes immediately after the failure, before any other
code.  Effectively, there is a control-flow edge from every instruction in a
task to the start of the task's recovery block, and these edges are followed
only when a failure occurs.  The code in a recovery block does three things.
First, it executes the task's {\em persistent data recovery handler} to return
persistent data to a usable state.  Second, it executes the task's {\em
environment recovery handler} to return the program to a state that is
compatible with its execution environment.  Third, it redirects control flow
to the {\em recovery target}.  The three functions of recovery blocks are
discussed in more detail in Section~\ref{sec:recoveryblocks}.

There are some restrictions on the code in a recovery routine.  The values of
the control registers (stack pointer, frame pointer, and instruction register)
must be the same when the recovery routine completes as when it starts.
Recovery routines do not execute in the context of their associated task, but
rather in their own context.  For a recovery routine to access variables named
in the scope of its task (like locally scoped variables), those variables must
be declared at the task boundary.  These declarations allow names to be bound
to out-of-scope variables when the recovery routine executes.

The combination of task-atomic execution and application-specific recovery
allows us to provide the following strong, language-level guarantee: With
appropriate recovery routines, programs that adhere to our programming model
are in a consistent execution state at all task boundaries.  The next section
describes the DINO execution model that implements our programming model,
providing that guarantee.

\subsection{Execution Model}

The DINO execution model supports task-based execution and programmable
recovery.

To support task-based execution, DINO uses standard checkpointing support
(like~\cite{mementos}) to checkpoint program state at task boundaries.  The
checkpoint preserves registers, stack variables, and global variables in
non-volatile storage.  Note that there is only one valid checkpoint at a time,
and checkpointing occurs only at task boundaries.

%Unlike prior work, DINO also checkpoints heap variables used in
%a task.  By checkpointing relevant heap variables, DINO addresses the invalid
%volatile state problem. 

DINO also provides execution support for programmable recovery blocks.  When
execution encounters a task boundary and takes a checkpoint, DINO records the
identifier of the executing task in a non-volatile {\em Task ID register}.
When a failure occurs, the device reboots.  On reboot, rather than starting at
{\tt main()}, DINO executes a {\em startup handler} that is statically inserted
before {\tt main()}.  The startup handler first disables device interrupts.
Next, it reloads the checkpoint, except for the instruction, stack, and frame
pointer registers.  The startup handler then checks the Task ID register to
determine which task's recovery block should execute: it uses the Task ID to
look up the recovery routine for the failed task in a table of recovery
routines keyed by Task ID, and jumps to that routine.  Before the
programmer-defined recovery code executes, DINO first creates named bindings
to variables from the checkpoint, using the names specified for those
variables in the recovery routine.  The recovery routine then executes.  When
the recovery routine finishes, DINO restores the stack, frame, and instruction
registers, resuming the program's execution.
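The reboot path can be sketched as follows (a software simulation with
illustrative names; interrupt disabling, the checkpoint reload, and register
restoration appear only as comments):

\begin{lstlisting}
#include <assert.h>

#define NUM_TASKS 2

int nv_task_id = 1;      /* non-volatile Task ID register */
int recovered_task = -1;

void task0_recover(void) { recovered_task = 0; }
void task1_recover(void) { recovered_task = 1; }

/* recovery routines keyed by Task ID */
void (*recovery_table[NUM_TASKS])(void) =
    { task0_recover, task1_recover };

void startup_handler(void) {
  /* 1. disable interrupts (elided)
     2. reload checkpointed registers/stack/globals (elided) */
  recovery_table[nv_task_id]();  /* 3. run recovery routine */
  /* 4. restore stack, frame, and instruction registers,
        resuming the interrupted task (elided) */
}

int main(void) {
  startup_handler();
  assert(recovered_task == 1);
  return 0;
}
\end{lstlisting}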


\subsection{Recovery Support}

As discussed in Section~\ref{sec:idea:programmingmodel}, recovery routines do
three things: restore persistent data to a usable state, restore the execution
to a state compatible with the post-failure execution environment, and resume
execution at the recovery target. 

\subsubsection{Making Persistent Data Usable} 
A persistent data recovery handler ensures that data stored in persistent
storage is usable after a reboot.  Recall from
Section~\ref{sec:background:unusableNV} that restoring a checkpoint may render
persistent data unusable because the data were written by computation that was
interrupted and rolled back.  If an update to a data structure in persistent
storage is interrupted, the data structure is left inconsistent and unusable.
Updates to persistent storage by rolled-back code are not undone, leaving
values in persistent store that were written by code that may never execute.
Automatically correcting data-structure inconsistency is difficult because
recovery is typically application- and data-structure-specific.

DINO's persistent data recovery handlers let programmers write the appropriate
checking and recovery code.  Unlike in systems without persistent-data
recovery support, programmers using DINO need not check persistent data
structures for consistency at every access.  Instead, all data-structure
consistency code is located in one place per task, and DINO's execution model
ensures that it executes only, and always, when necessary.
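A minimal sketch of such a handler appears below.  It is not DINO's API: the
log structure, the {\tt VALID} marker, and the handler's name are all
hypothetical, and the ``persistent'' log lives in ordinary memory here, where
on a real device it would reside in FRAM or flash.  A failure mid-append can
leave the entry count inconsistent with the committed entries; the handler
repairs the structure once, at recovery time, instead of on every access.

\begin{lstlisting}
#include <stdio.h>

#define LOG_CAP 8
#define VALID   0xA5

struct persistent_log {
  unsigned count;                /* number of entries claimed         */
  unsigned char valid[LOG_CAP];  /* VALID once an entry fully written */
  int data[LOG_CAP];
};

/* Persistent-data recovery handler: truncate the count to the longest
   prefix of fully written entries. */
static void recover_log(struct persistent_log *log) {
  unsigned n = 0;
  while (n < log->count && n < LOG_CAP && log->valid[n] == VALID)
    n++;
  log->count = n;
}

int main(void) {
  /* A failure interrupted the third append: count says 3, but only two
     entries were marked valid before power was lost. */
  struct persistent_log log = { 3, { VALID, VALID, 0 }, { 10, 20, 0 } };
  recover_log(&log);
  printf("usable entries: %u\n", log.count);  /* prints 2 */
  return 0;
}
\end{lstlisting}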

\subsubsection{Reconciling with the Environment}
A recovery routine must also reconcile the resumed execution with an
environment that may have changed during the outage: sensor context such as an
accelerometer reading or an ambient temperature may no longer match the values
the checkpointed execution observed.  DINO therefore gives recovery routines
access to the data in the checkpoint, to persistent storage, and to arbitrary
code, so that programmers can check for such changes and compensate for them
before execution resumes.

\subsubsection{Setting the Recovery Target}
By default, a recovery block's recovery target is the beginning of its
associated task.  The programmer can instead specify a different recovery
target elsewhere in the program when re-executing the entire task is
unnecessary or undesirable.

Recovery routines need not contain purpose-built code: both environment
reconciliation and persistent-data recovery handlers are empty by default, and
a programmer may also use them for auxiliary logic such as failure and
recovery logging.  In the absence of consistency requirements, environment
checks, or custom recovery code, a recovery block is a no-op and execution
simply resumes at the top of the task.


Non-idempotent operations require care under re-execution.  DINO's approach is
to buffer such operations until the end of a task, preserving the illusion
that tasks execute atomically.


\subsection{Discussion}

\subsubsection{How DINO Addresses the Unusable Non-Volatile State Problem}

The problem arises when a task has non-volatile live-outs (NVLs): locations in
non-volatile memory that the task writes and that later code reads.  If a
failure interrupts the task after some NVLs have been written, those locations
hold inconsistent values when the task re-executes, and the results are
undefined.  DINO addresses this problem by imposing a few simple requirements
on NVL writes within tasks.

\begin{enumerate}
\item NVLs are written at most once per task.
\item NVL writes are moved into the checkpointing code so that they execute
atomically with the taking of the checkpoint.
\item Our implementation double-buffers each task's NVLs in a shadow memory
and switches between the two versions of those variables at the same moment
the checkpoint buffer switches ({\em i.e.}, one switch commits both).  The
compiler moves NVL writes into the checkpointing code.  During recovery, the
checkpoint base address determines which version of each NVL to use; a
volatile pointer is bound to the correct version of each NVL before recovery
finishes and execution resumes.
\end{enumerate}
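The double-buffering scheme can be illustrated with the following sketch.  It
is a simulation, not DINO's implementation: persistence is modeled with
ordinary globals, the single NVL and all names are hypothetical, and the
compiler's relocation of NVL writes is shown as a hand-written function.

\begin{lstlisting}
#include <stdio.h>

static int nvl_version[2] = { 0, 0 };  /* two copies of one NVL        */
static int active = 0;                 /* flips with checkpoint buffer */

/* The compiler moves NVL writes here: only the inactive copy is
   written while the task runs, so the committed copy stays intact. */
static void write_nvl(int new_value) {
  nvl_version[1 - active] = new_value;
}

/* One atomic switch commits the checkpoint and the NVLs together. */
static void checkpoint_commit(void) {
  active = 1 - active;
}

int main(void) {
  nvl_version[active] = 42;  /* last committed value                  */
  write_nvl(99);             /* task writes its NVL...                */
  /* A failure here rolls back: recovery reads the active (old) copy. */
  printf("after failure: %d\n", nvl_version[active]);  /* prints 42 */
  checkpoint_commit();       /* ...the task completes on re-execution */
  printf("after commit: %d\n", nvl_version[active]);   /* prints 99 */
  return 0;
}
\end{lstlisting}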

The tradeoff is that programmers give up fine-grained control over where
non-volatile state is written, control they would have if they wrote their own
checkpointing code.  In exchange, DINO guarantees that the state of the
execution at checkpoint boundaries is consistent and recovers correctly; it
provides no guarantees between task boundaries.

\subsubsection{Why Not Buffer and Commit?}
DINO uses eager, in-place memory updates within tasks and provides recovery
semantics for ensuring state is correct when execution resumes after a failure.  An alternative to this
design is to buffer memory updates using copy-on-write and provide commit
semantics at the end of each task. There are several reasons we chose not to use 
buffer and commit in DINO.

First, buffer and commit requires additional work proportional to the number of
locations written in a task.  Like DINO, a buffer-and-commit system checkpoints
execution context at task boundaries.  In addition, program writes go to a copy
buffer during a task, and when the task completes the system must copy the
buffered data to the written program locations, incurring the full time and
energy cost of a second write operation.

Second, buffer and commit consumes additional working memory during a task to
hold written copies and preserve initial values.

Third, buffer and commit ({\em without} recovery semantics) requires tasks to
be idempotent as they may be executed repeatedly before eventually committing.
Between re-executions and after commit, such a system would be unable to
address non-idempotent effects, like unusable non-volatile variables and 
environmental effects.  

The need for idempotence is a central reason we avoided buffer and commit in
our design, because making tasks fully idempotent is often difficult.  Recent
work~\cite{memcachedtm} showed that I/O ({\em e.g.}, sensors) and library code
({\em e.g.}, the C standard library) in particular add complexity to making
regions of code idempotent.

Prior work~\cite{memcachedtm} found that I/O often makes it difficult to
express idempotent computations.  Unless interactions with the external
environment can be repeated arbitrarily and remain correct, code regions
including I/O cannot be made idempotent.  DINO's recovery mechanism supports
application-specific ({\em e.g.}, sensor-specific) recovery after
non-idempotent I/O occurs.  DINO does not try to prevent irrevocable
actions from being performed multiple times. However, unlike a system that relies on
task idempotence, DINO recovery blocks let a program ``clean up''
between repeated executions of irrevocable actions ({\em e.g.}, to recalibrate
a sensor, to restart a protocol state machine, {\em etc}.).

Prior work~\cite{memcachedtm} also showed that, like I/O, library code can pose
a barrier to idempotence. The key problem with libraries is that a programmer
may not know whether a library function is idempotent because libraries lack
idempotence specifications.  In the absence of a specification, the programmer
has three options. The first option is to  conservatively assume the function
is not idempotent, which is always correct, but prohibits including those
functions in tasks that may be re-executed due to failures.  The second option
is to study the library implementation, eliminate non-idempotent operations,
and verify that it is safe to use in tasks.  This option is onerous and becomes
impossible when library source code is unavailable.  The third option is to
unsafely use potentially non-idempotent library functions in code regions that
should be idempotent, which is tolerable in some applications, but unsafe in
general.

A buffer-and-commit design has higher data-copying overheads than DINO, would
require programmers to ensure task idempotence, and even then would still
require a DINO-like recovery mechanism to deal with non-idempotent I/O
effects.  DINO avoids the additional overhead and the difficulty of making
tasks idempotent, pushing the complexity of both into recovery routines.


{\bf \em \noindent Simpler checkpoint control flow.}  Checkpointing at task
boundaries addresses the control-flow reset problem.  Because checkpoints are
taken only at statically specified code points, all possible non-linear
control flow due to checkpoint restoration is known statically.  Tasks reduce
the complexity of the control-flow graph by reducing the number of potential
control-flow targets that checkpoint restoration gives each code point.
Figure~\ref{fig:controlflowdino} compares DINO's control-flow complexity to
the control-flow complexity of continuous checkpointing.  Restoration targets
are {\em explicit} in the program ({\em i.e.}, task boundaries) rather than
implicit ({\em i.e.}, any return statement or back-edge target), as with
continuous checkpointing, which makes reasoning about checkpoint-restoration
control flow simpler.

{\bf \em \noindent Addresses unusable persistent state.}  This work is, to our
knowledge, the first to address the problem posed by state that persists
across checkpoints but is left unusable by a failure.  DINO's recovery
handlers give programmers a single place to put the checks and repairs that
make such state usable again.

\input{apps}

\input{impl}

\input{eval}

\input{related}

\section{Conclusions \& Future Work}

\bibliographystyle{abbrv}
\bibliography{harvsim}



\end{document}

