\section{SAT-based Temporally Expressive Planning}\label{sec:encoding}

In this section, we formulate temporal planning using a SAT-based
approach~\cite{Kautz99}. Our overall procedure is shown in
Algorithm~\ref{overall}. Each iteration of the procedure increases
the time span by a fixed step size.  A set of partial-order
variables for the goals indicates that the goal state is
achieved at time $t$~\cite{Ray2008}. A modified SAT solver solves
the instance by assuming one set of the goal variables to be true.
As such, we solve the problem for multiple time spans in each
iteration.

Note that in \cite{Ray2008}, a planning graph is first generated to
help estimate the step size. Although there exists earlier research
on applying planning graphs to temporal planning, all existing works
that we are aware of either have limited
expressiveness~\cite{Smith99} or are unable to optimize the time
span~\cite{Long2003}. These two shortcomings may lead to an
overestimation of the step size even in the first iteration of
planning.  Therefore, in this work we use a predefined constant
for the step size. Estimating a better step size is an interesting
open problem and will be part of our future work.

\begin{algorithm}[ht]
\label{overall} \linesnumbered \caption{ SAT-based Temporally
Expressive Planning (STEP)}

\KwIn{A temporally expressive planning problem}

\KwOut{A solution plan}

transform durative actions into simple ones\;


$N\leftarrow 0$\;

$Z \leftarrow $ max number of time spans\;

\Repeat{a solution is found or $N>Z$}{
    $N\leftarrow N + 1$\;
    encode the problem\;
    solve the encoded SAT instance\;
}

\eIf{a solution found}{
    decode the solution and return\;
}{
    return with no solution\;
}
\end{algorithm}

\subsection{Transforming durative actions}

First, each durative action $o$ is converted into two simple
actions plus one propositional fact, written as
$\Psi(o)=(o_{\vdash},o_{\dashv},f^o)$. The two simple actions
represent the starting and ending events of $o$.  The fact $f^o$,
when true, indicates that $o$ is executing. We denote the set of
all such $f^o$ as $F^o=\{ f^o \mid o\in O\}$. We further denote by
$pre(o)$, $add(o)$, and $del(o)$ the sets of preconditions,
add-effects, and del-effects of a simple action $o$,
respectively.

We transform a planning problem $(I,F,O,G,A)$ into
$(I,F^s,O^s,G,A)$. Here, $F^s = F\cup F^o$, and $O^s =
\{o_{\vdash},o_{\dashv}\mid o \in O \} \cup \{\mathit{Noop}_f \mid
f\in F^s\}$, where $\mathit{Noop}_f$ is the no-op action that
preserves fact $f$.
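As a minimal illustration of this transformation (the dictionary-based
action representation and names such as \texttt{exec-} are assumptions of
this sketch, not our implementation), $\Psi(o)$ can be computed as follows:

```python
# Hypothetical sketch of the transformation Psi(o) = (o_start, o_end, f^o).
# The dict-based action representation and the "exec-" naming are
# assumptions for illustration only.

def transform_durative(action):
    """Split a durative action into start/end simple actions plus an
    'executing' fact f_o that is true exactly while the action runs."""
    f_o = "exec-" + action["name"]
    o_start = {
        "name": action["name"] + "-start",
        "pre": list(action["pre_start"]),
        "add": list(action["add_start"]) + [f_o],  # starting asserts f_o
        "del": list(action["del_start"]),
    }
    o_end = {
        "name": action["name"] + "-end",
        "pre": list(action["pre_end"]) + [f_o],    # may only end while executing
        "add": list(action["add_end"]),
        "del": list(action["del_end"]) + [f_o],    # ending retracts f_o
    }
    return o_start, o_end, f_o
```

Here the starting action asserts $f^o$ and the ending action requires and
retracts it, so $f^o$ holds exactly while $o$ is executing.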



The idea of transforming durative actions was proposed
in~\cite{Long2003}. It has several advantages; for example, some
techniques from classical planning can be applied without
sacrificing completeness.


Given the above problem representation, it is necessary to encode
action mutual-exclusion (mutex) constraints to ensure the
correctness of solutions. Several algorithms have been proposed to
detect mutexes between durative actions in temporal
planning~\cite{Smith99}. Here we compute the required action mutexes
for all transformed actions $o \in O^s$ and use them in the
encoding.


\subsection{Encoding in each iteration}

We extend the encoding of propositional planning in planning graph
to temporal planning using the above transformation. Given a time
span $N$ and a problem instance $(I,F^s,O^s,G,A)$, we define the
following variables for the encoding.
\begin{enumerate}
\item action variables $U_{o,t}$, $0\le t \le N, o\in O^s$.
\item fact variables $V_{f,t}$, $0\le t \le N, f\in F^s$.
\end{enumerate}
We also need the following clauses for the encoding.
\begin{enumerate}
\item Initial state (for all $f\in I$):
$V_{f,0}$


\item Preconditions (for all $o\in O^s$, $0\le t \le N$):\\
$U_{o,t}\to \bigwedge_{f\in pre(o)}V_{f,t}$

\item Add-effects (for all $f\in F^s$, $0\le t < N$):\\
$V_{f,t+1}\to \bigvee_{o:\, f\in add(o)} U_{o,t}$

\item Delete-effects (for all $o\in O^s$, $0\le t < N$):\\
$U_{o,t}\to \bigwedge_{f\in del(o)} \neg V_{f,t+1}$

\item Durative action information (for all $o\in O$, $0\le t \le
N-\rho+1$, where $\rho$ is the duration of $o$ in time steps):\\
$ U_{o_{\vdash},t } \leftrightarrow U_{o_{\dashv},t+\rho-1} $\\
$ U_{o_{\vdash},t} \rightarrow \bigwedge_{t<t'<t+\rho-1} ( V_{ f^o,
t'} \wedge \bigwedge_{f \in o_{\leftrightarrow}} V_{f,t'} )$

\item Axioms (for each $a\in A$,$0\le t\le N$): \\
    $\bigwedge_{f\in pre(a)}(V_{f,t})\rightarrow \bigwedge_{f'\in eff(a)}(V_{f',t})$

\item Action mutex (for all mutex action pairs $(o_1,o_2)$, $0\le t\le N$): \\
    $U_{o_1,t}\rightarrow \neg U_{o_2,t}$

\item Fact mutex (for all mutex fact pairs $(f_1,f_2)$, $0\le t\le N$): \\
    $V_{f_1,t}\rightarrow \neg V_{f_2,t}$
\end{enumerate}
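To make the encoding concrete, the following sketch emits the precondition
and add-effect clauses (items 2 and 3 above) as DIMACS-style lists of signed
integers; the variable-numbering scheme and data layout are assumptions of
this sketch, and effect clauses are generated for $0\le t<N$ so that no
variable index exceeds level $N$.

```python
# Illustrative CNF emission for the precondition and add-effect clauses.
# Variable numbering (sequential ids) and the dict-based problem layout
# are assumptions of this sketch, not the paper's actual encoder.

def encode(facts, actions, N):
    """Return (clauses, var): clauses as lists of signed ints, and the
    mapping from (kind, name, t) to a DIMACS variable id."""
    var = {}
    def v(kind, name, t):
        return var.setdefault((kind, name, t), len(var) + 1)

    clauses = []
    for t in range(N):
        # Preconditions: U_{o,t} -> V_{f,t} for each f in pre(o)
        for o in actions:
            for f in o["pre"]:
                clauses.append([-v("U", o["name"], t), v("V", f, t)])
        # Add-effects: V_{f,t+1} -> OR of U_{o,t} over actions with f in add(o)
        for f in facts:
            adders = [v("U", o["name"], t) for o in actions if f in o["add"]]
            clauses.append([-v("V", f, t + 1)] + adders)
    return clauses, var
```

The remaining clause groups (initial state, delete-effects, mutexes) follow
the same pattern of implications turned into disjunctions of signed literals.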






\subsubsection{Expressiveness of parallelism}

Our approach is not only expressive enough to handle temporally
expressive semantics, but also capable of handling some other
aspects of parallelism in temporal planning. According to the
analysis in \cite{Rintanen07:AAAI}, whether a temporal planning
problem can be compiled into a classical planning problem in
polynomial time is determined by whether self-overlapping is
allowed.  Our approach supports some cases of self-overlapping.

For a given action $o$ and time $t$, we have variables
$U_{o_{\vdash},t}$ and $U_{o_{\dashv},t}$ representing the starting
and ending actions of $o$, respectively.  Suppose action $o$ has two
instances, starting at times $t$ and $t'$ ($t<t'$), respectively.
For the starting action $o_{\vdash}$, we have distinct variables
$U_{o_{\vdash},t}$ and $U_{o_{\vdash},t'}$ indicating the different
starting times of the two instances. The fact $f^o$, along with all
related invariant conditions, will be enforced to be true from $t$
to $t'+\rho$. Thus, the invariant conditions of the two action
instances do not exclude each other's existence.

However, the current encoding cannot handle the case in which the
two instances share the same starting time or the same ending time.
This is because, for each simple action $o$ and each time point $t$,
we have only one binary variable to indicate whether $o$ is executed
at $t$.











\section{Optimizing Action Cost Using SAT Constraints}


Many complete SAT solvers are based on the DPLL
algorithm~\cite{Davis62}~\cite{zhang:CAV-02}, along with other
techniques such as clause learning~\cite{Marques96} and Boolean
constraint propagation~\cite{Moskewicz:DAC-01}. We first briefly
review the DPLL algorithm and then propose our algorithm, which
optimizes an objective subject to SAT constraints.


\subsection{The DPLL algorithm}

%***********************************************************************
% *******************               main procedure of DPLL
%***********************************************************************
\begin{algorithm}[t]
\label{dpll} \linesnumbered \caption{ DPLL($S$) }

\KwIn{ SAT problem $S$ }

\KwOut{ a satisfiable solution }

{initialize the solver\;}

\While{ $true$ }{

    $conflict \leftarrow$ propagate()\;

    \If{$conflict$}{

        $learnt \leftarrow$ analyze($conflict$)\;
        add $learnt$ to the clause database\;

        \eIf{top-level conflict found}{
            return UNSAT\;
        }{
           backtrack()\;
       }
    }\Else{

        \eIf{all variables are assigned}{
            return SAT\;
        }{
            decide()\;
        }
    }
}
\end{algorithm}


The DPLL algorithm is shown in Algorithm~\ref{dpll}, which largely
follows the implementation of the MiniSat solver~\cite{Een:TAST-04}.
DPLL() is a conflict-driven procedure. It repeatedly calls
propagate()~\cite{Een:TAST-04}, which propagates the first literal
$p$ in the propagation queue and returns a conflict if there is one
(Line 3). If no conflict occurs and all variables are assigned, a
solution is found and the procedure returns SAT (Line 13).

If no conflict occurs but there are unassigned variables, it calls
decide() to select an unassigned variable $p$, assign it to
$true$, and insert it into the propagation queue (Line 15).
Consequently, the clauses containing variable $p$ will also be
propagated, leading to more variable assignments. Each literal has a
decision level; newly assigned literals have the same decision
level as $p$. Starting from zero, the decision level is increased by
one each time decide() is called.

Once a conflict occurs, the procedure analyze() analyzes the
conflict to obtain a learned clause~\cite{Een:TAST-04}, and
backtrack() is called to cancel the assignments that led to the
conflict (Line 10). The backtrack() procedure keeps undoing
variable assignments until exactly one of the variables in the
learned clause becomes unassigned.



\subsection{The DPLL-OPT algorithm}

Unlike the standard DPLL algorithm, which stops whenever a solution
is found, DPLL-OPT searches the whole space to optimize the
following objective.

Given a SAT instance for makespan $k$, the associated
SAT-constrained optimization (SCO) problem is to minimize the
objective:
$$ cost(V) = \sum_{\forall x\in{V_a}}{c(x)v(x)},$$
subject to the SAT clauses. Here, $c(x)$ is a positive cost
associated with the action represented by $x$; $v(x)=1$ if $x$ is
\emph{true} and $v(x)=0$ otherwise.

More generally, for a partial assignment in which some of the
variables are free, we can still calculate $cost(V)$ by excluding
the unassigned variables from the summation.
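A minimal sketch of this objective over a (partial) assignment; the
dictionary-based representation is an assumption of this sketch:

```python
# Sketch of cost(V) for a (partial) assignment: variables assigned true
# contribute their action cost; false and free (None) variables contribute 0.
# The dict-based representation is an assumption of this sketch.

def cost(assignment, action_cost):
    """assignment maps each action variable to True, False, or None (free)."""
    return sum(action_cost[x] for x, val in assignment.items() if val is True)
```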

We remark that this problem differs from the MAX-SAT problem, whose
goal is to minimize the number of unsatisfied clauses, and from
weighted MAX-SAT, which assigns weights to unsatisfied clauses.

%\vspace{+0.1in}

Algorithm~\ref{modifiedsolve} shows the DPLL-OPT algorithm, which
largely follows the DPLL framework but includes three changes.
First, we load the learned information produced by the last call to
DPLL-OPT(), which can speed up the search (Line 2). Second, each
time a satisfying solution is found, we do not terminate the search
but add a pruning clause to prevent the search from yielding
previously found solutions (Line 22). Third, we employ a pruning
strategy that fathoms a search node whenever the sum of the
lower-bound estimate of the future cost and the current cost exceeds
the cost of the incumbent solution.


%***********************************************************************
% *******************  main procedure of DPLL-OPT   ************
%***********************************************************************
\begin{algorithm}[t]
\label{modifiedsolve} \linesnumbered \caption{ DPLL-OPT($S$) }

\KwIn{SAT problem $S$}

\KwOut{a solution with minimum $cost$ }

initialize the $h$ cost\;

load learnt clauses\;

$\tau \leftarrow \infty$ \;

$num \leftarrow 0$\;

\While{ $true$ }{
    $conflict \leftarrow$ propagate()\;

    \eIf{$conflict$}{
        $learnt \leftarrow$ analyze($conflict$)\;
        \eIf{$conflict$ is of top-level}{
            return $num > 0$ ? SAT:UNSAT\;
        }{
            add $learnt$ to the clause database\;
            backtrack()\;
        }
    }{
        let $cost(V)$ be the cost of the current partial assignment $V$\;

        $h \leftarrow $cost\_propagate()\;

        \eIf{$cost(V)+h \geq \tau$}{
            add a pruning clause\;
            backtrack()\;
        }{

            \eIf{all variables are assigned}{
                add a pruning clause \;
                backtrack()\;
            }{
                decide()\;
            }
        }
    }
}
\end{algorithm}


\subsubsection{Londex-based $h$ function}

Londex constraints~\cite{Chen:IJCAI-07}~\cite{Chen:AIJ-09}, a recent
work gives lower bounds on the time steps between facts and actions
can give significant improvement for reducing planning time.
Unfortunately, the original londex does not consider action cost.
Here, we define a new \textbf{Fact londex cost} (resp.
\textbf{Action londex cost}), denoted as $\beta(f,g)$ (resp.
$\beta(a,b)$) for two facts $f$ and $g$ (resp. actions $a$ and $b$)
in the same DTG, which represents the minimal londex cost between
facts $f$ and $g$ (resp. actions $a$ and $b$).
%and \textbf{Action
%londex cost}, denoted as $\beta(a,b)$ for two actions $a$ and $b$.

A SAS+~\cite{Backstrom&Nebel95}~\cite{jonsson&Bm98} problem
consists of a number of multi-valued variables, each of which
induces a DTG~\cite{helmert:jair-2006}. A DTG is a directed graph in
which each vertex is a value of the variable (corresponding to a
fact in STRIPS), and there is an edge between two facts if some
action transitions one fact to the other. There may be multiple
edges between two facts. We assign each edge a weight equal to the
cost of the corresponding action. Given two facts $f$ and $g$ that
are both vertices of a DTG $G$, we compute the shortest path between
them in $G$.

\begin{defn}
\label{def:fact-distance} \em (\textbf{Fact distance}). The fact
distance from a fact $f$ to another fact $g$ in a DTG $G$, denoted
$\delta(f,g)$, is the length of the shortest path from $f$ to $g$
in $G$.
\end{defn}

\begin{defn}
\label{def:fact-londex} \em (\textbf{Fact londex cost}). The londex
cost from a fact $f$ to another fact $g$ in a DTG $G$, denoted
$\beta(f,g)$, is the cost of the shortest path from $f$ to $g$ in
$G$. Note that $\beta(f,g) = 0$ if $f$ and $g$ are not in the same
DTG.
\end{defn}
The key property is that, if $f$ is made true at any time step, then
we need to add at least $\beta(f,g)$ to the total cost of the plan
to make $g$ true at a later step.
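Both quantities can be computed by standard shortest-path search on the
DTG. The sketch below (the edge-list representation and the intermediate
fact name are illustrative assumptions) computes $\delta$ by breadth-first
search over transitions and $\beta$ by Dijkstra's algorithm over action
costs:

```python
import heapq
from collections import deque

# Illustrative computation of the fact distance delta (fewest transitions)
# and the fact londex cost beta (cheapest total action cost) in one DTG.
# A DTG is given as a list of (from_fact, to_fact, action_cost) edges;
# this representation is an assumption of the sketch.

def fact_distance(edges, src, dst):
    """delta(src, dst): number of transitions on a shortest path (BFS)."""
    adj = {}
    for u, v, _cost in edges:
        adj.setdefault(u, []).append(v)
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return float("inf")

def fact_londex(edges, src, dst):
    """beta(src, dst): cost of the cheapest path (Dijkstra)."""
    adj = {}
    for u, v, cost in edges:
        adj.setdefault(u, []).append((v, cost))
    pq, seen = [(0, src)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            return d
        for v, c in adj.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (d + c, v))
    return float("inf")
```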

\begin{ex}{\em \label{example:fact-londex}
Consider Example~\ref{example:blocksworld};
Figure~\ref{fig:blocksworld-dtg} illustrates part of the DTG of
block B. The shortest path from fact ``ON-TABLE B" to fact ``ON B A"
has two transitions, ``pick-up B" (with action cost 2) and ``stack B
A" (with action cost 2); thus $\delta$(ON-TABLE\ B, ON\ B\ A) $=2$
and $\beta$(ON-TABLE\ B, ON\ B\ A) $=4$. }
\end{ex}

We can similarly obtain the action londex cost, denoted
$\beta(a,b)$, for two actions $a$ and $b$. For simplicity, we say
that an action $a$ is associated with a fact $f$ if $f$ appears in
$pre(a)$, $add(a)$, or $del(a)$.

\begin{defn}
\label{def:action-londex} \em (\textbf{Action londex cost}). Over
all fact pairs $(f,g)$ with which actions $a$ and $b$ are
associated, we define the londex cost from action $a$ to action $b$
as follows:
\[ \beta(a,b) = \max \left  \{
\begin{array}{llll}
    \beta(f,g) - c_b,       & \mbox{ if $~f \in add(a)$, $~g \in add(b)$ }  \\
    \beta(f,g),             & \mbox{ if $~f \in add(a)$, $~g \in pre(b)$} \\
    \beta(f,g) - c_a - c_b, & \mbox{ if $~f \in pre(a)$, $~g \in add(b)$ } \\
    \beta(f,g) - c_a,       & \mbox{ if $~f \in pre(a)$, $~g \in pre(b)$}\\
\end{array} \right. \]


\end{defn}

For each action pair, we enumerate all related fact pairs $(f,g)$
and take the maximum over the above cases as $\beta(a,b)$.
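This enumeration can be sketched as follows (the dictionary representation
of actions and the lookup table for $\beta(f,g)$ are assumptions of this
sketch, and a trivial lower bound of 0 is used when no case applies):

```python
# Sketch of the action londex cost: enumerate associated fact pairs and
# take the maximum over the four cases of the definition. beta_fact maps
# (f, g) -> fact londex cost, defaulting to 0 (facts not in the same DTG).

def action_londex(a, b, beta_fact, c_a, c_b):
    bf = lambda f, g: beta_fact.get((f, g), 0)
    candidates = [0]  # trivial lower bound if no fact pair applies
    for f in a["add"]:
        for g in b["add"]:
            candidates.append(bf(f, g) - c_b)
        for g in b["pre"]:
            candidates.append(bf(f, g))
    for f in a["pre"]:
        for g in b["add"]:
            candidates.append(bf(f, g) - c_a - c_b)
        for g in b["pre"]:
            candidates.append(bf(f, g) - c_a)
    return max(candidates)
```

On the blocksworld data of the examples (ON-TABLE B in the precondition of
pick-up B, ON B A in the precondition of unstack B A, both costs 2), the
fourth case yields $4 - 2 = 2$, matching the worked example.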


The correctness of the action londex cost is easy to establish.
Consider, for example, the case $f\in add(a)$ and $g\in add(b)$. If
$a$ is executed at time $t(a)$, then $f$ is valid at time $t(a)+1$.
Since the fact distance and londex cost from $f$ to $g$ are
$\delta(f,g)$ and $\beta(f,g)$, respectively, $g$ cannot be true
until time $t(a)+1+\delta(f,g)$, and making $g$ true requires at
least $\beta(f,g)$ additional cost. Since $g$ is an add-effect of
$b$ and $\beta(f,g)$ already includes the cost $c_b$ of $b$, the
cost incurred between executing $a$ and making $g$ true is at least
$\beta(f,g) - c_b$. The other cases can be shown similarly.

\begin{ex}{\em \label{example:action-londex}
In Example~\ref{example:blocksworld}, consider the two actions
``pick-up B" and ``unstack B A": fact ``ON-TABLE B" is in
$pre(pick-up\ B)$ and fact ``ON B A" is in $pre(unstack\ B\ A)$ (the
fourth case in the definition of the action londex cost).
Example~\ref{example:fact-londex} shows that $\beta$(ON-TABLE\ B,
ON\ B\ A) $=4$; after considering all other associated fact pairs,
we can conclude that $\beta(pick-up\ B, unstack\ B\ A)=\beta(
ON-TABLE\ B, ON\ B\ A)-c_{pick-up\ B}=2$. }
\end{ex}


In DPLL-OPT, in order to estimate a lower bound on the cost of
reaching a goal state from a partial assignment of the variables
during search, we dynamically maintain an estimation function $h(x)$
for each variable $x \in V_a\cup V_f \cup V_g$.

At each decision point during search, the function $h(x)$ is
maintained as a lower bound of the following quantity: the minimum
action cost of any plan in which variable $x$ is $true$ and the
current partial assignment is kept.


\begin{defn}
\label{def:h-fact} \em
 For each fact variable $x_{t,f}\in{V_f}$, we have:
\begin{eqnarray*}
 h(x_{t,f}) &=&  \max \bigg\{ \min\limits_{\{a |
   f\in{add(a)\}}} h(x_{t-1,a}),
   \max\limits_{\{g | g\in{ldx(f)}, x_{t',g} = true\}}^{t'=t-\delta(g,f)} {h(x_{t',g}) +
  \beta(g,f)}
    \bigg\}
\end{eqnarray*}
\end{defn}
where $ldx(f)$ is the set of facts in the same DTG as $f$.
Intuitively, we consider all the actions at level $t-1$ that may
achieve $f$ at level $t$. We also consider the londex cost: if any
fact $g \in ldx(f)$ at a previous level $t'$ is set to true, then
at least $\beta(g,f)$ cost is required to transit from $g$ to $f$.


\begin{defn}
\label{def:h-action} \em
 For each action variable $x_{t,a}\in{V_a}$, we have:
\begin{eqnarray*}
h(x_{t,a}) &=& c_{a} + \max \bigg\{
\max\limits_{\forall{f}\in{pre(a)}}h(x_{t,f}),
    \max\limits_{\{b \mid
    b\in{ldx(a)}, x_{t',b} = true\}}^{t'=t-\delta(b,a)} { h(x_{t',b}) + \beta(b,a)}  \bigg\}
\end{eqnarray*}
\end{defn}
where $ldx(a)$ is the set of actions related to $a$ by londex.
Intuitively, we consider all the preconditions of action $a$, as
well as all those actions that are related to $a$ by londex
constraints and are set to true at a previous level.



\begin{defn}
\label{def:h-goal} \em For each goal variable $x_{t,g}\in{V_g}$, we
have:
\begin{eqnarray*}
   h(x_{t,g}) &=&  \max\limits_{\forall{f}\in{G}}h(x_{t,f})
\end{eqnarray*}
\end{defn}
where $h(x_{t,g})$ represents the cost of achieving all goal facts
at time step $t$, which equals the maximal cost over all goal facts
$x_{t,f}$, $\forall{f\in{G}}$.




%***********************************************************************
%                     procedure Cost_Init()
%***********************************************************************
\begin{algorithm}[t]
\label{costinit} \linesnumbered\caption{cost\_init()}

\KwIn{$P$, $SAT$-$SCAN_k$}

\KwOut{$h$}

\For{all $x_{0,f}\in{V_f}$}{
    \eIf{$f\in{I}$}{
        $h(x_{0,f}) \leftarrow 0$\;
    }{
        $h(x_{0,f}) \leftarrow {\infty}$\;
    }
}

\For{t=0 to k-1}{
    \For{all $x_{t,a}\in{V_a}$}{
      compute  $h(x_{t,a})$ using \textbf{Definition ~\ref{def:h-action}}\;
    }

    \For{all $x_{t+1,f}\in{V_f}$}{
       compute  $h(x_{t+1,f})$ using \textbf{Definition ~\ref{def:h-fact}}\;
    }
}

$h \leftarrow \max\limits_{\forall{f}\in{G}}h(x_{k,f})$\;

\end{algorithm}


To initialize the cost function $h(x)$, given the initial state $I$,
at level 0 we set $h(x_{0,f})=0$ if $f\in{I}$ and
$h(x_{0,f})={\infty}$ otherwise. Then we construct the initial
values for variables from level 0 to $k$ following
Definitions~\ref{def:h-fact} and~\ref{def:h-action} (see Algorithm
\ref{costinit}).
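The initialization can be sketched as a dynamic program over levels
$0,\dots,k$. Since no variable is assigned $true$ at initialization, the
londex terms in Definitions~\ref{def:h-fact} and~\ref{def:h-action} vanish,
and the no-op actions reduce to carrying a fact's cost across levels at no
charge (the dict-based layout below is an assumption of this sketch):

```python
INF = float("inf")

# Sketch of cost_init(): dynamic programming over levels 0..k. With no
# variable assigned true yet, h(action) is its cost plus its most
# expensive precondition, and h(fact) is the cheapest achiever at the
# previous level (or the fact's own previous cost, via the noop action).

def cost_init(I, facts, actions, goals, k):
    h_f = {(0, f): (0 if f in I else INF) for f in facts}
    h_a = {}
    for t in range(k):
        for a in actions:
            pre_cost = max((h_f[(t, f)] for f in a["pre"]), default=0)
            h_a[(t, a["name"])] = a["cost"] + pre_cost
        for f in facts:
            adders = [h_a[(t, a["name"])] for a in actions if f in a["add"]]
            keep = h_f[(t, f)]  # the noop for f carries it forward at cost 0
            h_f[(t + 1, f)] = min(adders + [keep])
    return max(h_f[(k, g)] for g in goals)
```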



%***********************************************************************
%                     procedure Cost_Propagate()
%***********************************************************************
\begin{algorithm}[t]
\label{costprop} \linesnumbered\caption{cost\_propagate()}

\KwIn{$P$, $SAT$-$SCAN_k$}

\KwOut{$h$}

initialize $U$ as a priority queue indexed by $t$\;

\While{ $U\neq{\emptyset}$}{
    get $x$ from $U$, $U \leftarrow U\backslash\{x\}$\;
    \If{$x=x_{t,a}\in{V_a}$}{

        \lIf{$v(x_{t,a})$=false}{
            $newcost \leftarrow \infty$\;
        }\Else{
            compute $newcost$ using \textbf{Definition ~\ref{def:h-action}}\;

            \If{$v(x_{t,a})$=true}{
                $newcost \leftarrow newcost - c_a$\;
            }
        }
        \If{$newcost\neq{h(x_{t,a})}$}{
            $h(x_{t,a}) \leftarrow newcost$\;
            \For {all $f\in{add(a)}$}{
                $U \leftarrow U + \{x_{t+1,f}\}$\;
                }

            \If{$v(x_{t,a})$=true}{
                \lFor {\emph{\textbf{all}} $b \in{ldx(a)}$ and $t'$ = $t+ \delta(a,b)$}{
                        $U \leftarrow U + \{x_{t',b}\}$ \;
                   }
                }


        }
    }\lElse{
        \If{$x=x_{t,f}\in{V_f}$}{
            \lIf{$v(x_{t,f})$=false}{
                $newcost \leftarrow \infty$\;
            }\lElse{
                compute $newcost$ using \textbf{Definition ~\ref{def:h-fact}}\;

            }
            \If{$newcost\neq{h(x_{t,f})}$}{
                $h(x_{t,f}) \leftarrow newcost$\;
                \For {all $a$ such that $f\in{pre(a)}$}{
                        $U \leftarrow U + \{x_{t,a}\}$\;
                    }

                \If{$v(x_{t,f})$=true}{
                    \lFor {\emph{\textbf{all}} $g \in{ldx(f)}$ and $t'$=$t+\delta(f,g)$}{
                        $U \leftarrow U + \{x_{t',g}\}$ \;

                    }
                }
           }
        }
    }
}

$h \leftarrow \max\limits_{\forall{f}\in{G}}h(x_{k,f})$

\end{algorithm}

Each time the solver finishes propagate() without causing a
conflict, the $h$ value is updated by Algorithm~\ref{costprop}. The
set $U$ initially includes the variables that have been assigned a
value since the last call to cost\_propagate() (Line 1). While $U$
is not empty, we dequeue a variable $x$ from $U$ (Line 3),
re-calculate its $h$ value, and insert the variables whose $h$
values may be affected by $x$ into the queue. We repeat this process
until $U$ is empty. The final value $h$ is the maximum of the costs
of the goal facts at level $k$ (Line 25). It requires at least $h$
additional cost to reach the goal from the current partial
assignment; $h$ is used in Line 17 of DPLL-OPT() for pruning.


\vspace{0.02in}
\subsubsection{Blocking clauses}


Each time a solution is found, we update the incumbent value $\tau$
with its cost and add a blocking clause to the clause database so
that the visited solution will not be found again. Given a SAT-SCAN
problem $SAT$-$SCAN_{k}=(k,V_a,V_f,V_g)$, suppose the current
assignment is $V$; we define $\Phi$ to be the set of variables that
have been selected by decide() ($\Phi \subseteq V$). We synthesize a
blocking clause $M$ as $M =\bigvee_{x\in \Phi}{literal(x)}$, where
$$ literal(x) = \left\{
\begin{array}{ll}
 \overline{x},     & \mbox{ $v(x) = true$  }  \\
           x,      & \mbox{ $v(x)= false$ } \\
\end{array} \right.$$

The blocking clause ensures that any future solution found by the
search will differ from the current solution. For example, if a
solution to a 5-variable SAT-SCAN problem is
$V=\{x_1,x_2,x_3,x_4,x_5\}|_{v(x)=(true,false,true,true,false)}$ and
$\Phi=\{x_2,x_4\}$, we add the blocking clause ${x_2}\vee
\overline{x_4}$.
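The construction of $literal(x)$ over the decision variables $\Phi$ can be
sketched as follows (representing a literal as a pair of a variable name
and its polarity is an assumption of this sketch):

```python
# Sketch of the blocking-clause construction: for each decision variable
# in Phi, emit the literal that negates its value in the solution.
# Propagated (non-decision) variables are left out of the clause.

def blocking_clause(assignment, phi):
    """assignment maps var -> bool; phi is the set of decision variables.
    Returns the clause as a set of (var, polarity) pairs, where polarity
    True means the positive literal x and False means the negation of x."""
    return {(x, not assignment[x]) for x in phi}
```

On the 5-variable example above, with $\Phi=\{x_2,x_4\}$, this produces the
positive literal $x_2$ and the negated literal $\overline{x_4}$.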


%***********************************************************************
%                     procedure solver_addBlockClause()
%***********************************************************************
\begin{algorithm}[t]
\label{algorithm:addBlockingClause}
\linesnumbered\caption{add\_blocking\_clause($V$) }

\KwIn{a solution $V$}

%\KwOut{solution}

$num$++\;

$\tau \leftarrow cost(V)$\;

update the incumbent solution\;

create a blocking clause $M$\;

add $M$ to the clause database\;

backtrack()\;

\end{algorithm}



Algorithm~\ref{algorithm:addBlockingClause} gives the details of
add\_blocking\_clause(), which is invoked when the solver finds a
new solution. It starts by increasing the counter of the number of
solutions, $num$ (Line 1), and updating the minimum cost $\tau$ with
the cost of the new solution (Line 2). Then, the solution plan is
recorded (Line 3). After that, we create a blocking clause (Line 4)
and add it to the clause database (Line 5) to ensure that the solver
will never generate the same solution again.
Finally, the procedure undoes assignments until precisely one of
the literals of the blocking clause becomes unassigned (Line 6).


\vspace*{0.02in}
\subsubsection{Pruning clauses}

Once a solution is found, we update the threshold $\tau$ with its
cost. In the subsequent search, if the sum of the cost of the
current partial assignment $V$ and the bound estimate $h$ reaches
$\tau$ ($cost(V)+h \ge \tau$), we consider it a dead end, since no
further assignments can lower the cost. In this case, we call
pruning() to stop propagating and backtrack. This pruning technique
is crucial for speeding up the search; without it, we found that
enumerating all solutions to a given SAT instance is prohibitively
expensive.

Given a SAT-SCAN problem $SAT$-$SCAN_{k}=(k,V_a,V_f,V_g)$, suppose
the current assignment is $V$; we define $\Phi$ to be the set of
variables that have been selected by decide() ($\Phi \subseteq V$).
We synthesize a pruning clause $M$ as $M =\bigvee_{x\in
\Phi}{literal(x)}$, where $$ literal(x) = \left\{
\begin{array}{ll}
 \overline{x},     & \mbox{ $v(x) = true$  }  \\
           x,      & \mbox{ $v(x)= false$ } \\
          false    & \mbox{ $v(x) = free$ } \\
\end{array} \right.$$

For example, suppose we have a partial assignment of a 5-variable
SAT-SCAN problem
$V=\{x_1,x_2,x_3,x_4,x_5\}_{v(x)=(true,false,free,false,true)}$ and
the set $\Phi=\{x_1,x_4\}$; we then add the pruning clause
$\overline{x_1}\vee{x_4}$.

Algorithm~\ref{algorithm:pruning} gives the details of the
pruning() procedure. It starts by creating a pruning clause from the
partial assignment $V$ (Line 1) and adding it to the clause database
(Line 2). Finally, we backtrack to undo the assignments (Line 3),
since the current node is a dead end given the upper bound $\tau$.


%***********************************************************************
% *******************      procedure pruning()      *************
%***********************************************************************
\begin{algorithm}[t]
\label{algorithm:pruning} \linesnumbered \caption{pruning($V$)}

\KwIn{A partial assignment $V$}

%\KwOut{}

create a pruning clause $M$\;

add clause $M$ to the clause database\;

backtrack()\;

\end{algorithm}




\vspace{0.02in}
\subsubsection{DPLL-OPT Heuristic}
The heuristic of SAT-SCAN was developed from
MiniSat~\cite{Een:TAST-04}~\cite{zhang:CAV-02}, whose branching
heuristic is an improved version of VSIDS (Variable State
Independent Decaying Sum)~\cite{Moskewicz:DAC-01}. VSIDS is quite
competitive with other branching heuristics in the number of
branches needed to solve a problem. In cost-optimal planning,
however, the SAT instance attaches a cost to each action variable,
which MiniSat does not consider. We thus add action cost as an
important factor in our heuristic mechanism.

Weak dependencies~\cite{Arbelaez:SAC-09} are a simplified form of
functional dependencies between variables, and these relations can
be used to rank the importance of each variable. More precisely,
each time a variable $y$ gets instantiated as a result of the
instantiation of $x$, a \textbf{weak dependency $(x,y)$} is
recorded. The weight of $x$ is then raised, so that the variable may
be selected with higher priority later.
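As an illustrative sketch (the dictionary-based bookkeeping and function name are ours, not the cited implementation's), recording a weak dependency amounts to crediting the causing variable whenever propagation assigns another one:

```python
from collections import defaultdict

def record_weak_dependency(weights, x, y, increment=1.0):
    """When propagate() instantiates y as a consequence of the
    instantiation of x, record the weak dependency (x, y) by
    raising the weight of x, so that x is preferred by later
    branching decisions."""
    weights[x] += increment
    return (x, y)

weights = defaultdict(float)
# Hypothetical trace: x1 forces x3 and x5 through unit propagation.
record_weak_dependency(weights, 'x1', 'x3')
record_weak_dependency(weights, 'x1', 'x5')
assert weights['x1'] == 2.0
```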

Integrating action cost with VSIDS and weak dependencies, we have
the following synthetic heuristic for DPLL-OPT:

\begin{enumerate}
\item Initialize a priority increment $\delta_{p}$ to 1.

\item Each variable $x$ has a priority value $p(x)$, initialized
as follows:

$$ p(x) = \left\{
\begin{array}{rl}
    c_a,     & \mbox{ if $x=x_{t,a}\in{V_a}$  } \\
    0,      & \mbox{ $otherwise$ } \\
\end{array} \right.$$

\item In function decide(), with a constant probability select an
uninstantiated variable $x$ at random; otherwise, select the
uninstantiated variable with the highest priority value.

\item Whenever a variable $y$ gets instantiated as a result of the
instantiation of $x$ (a \textbf{weak dependency $(x,y)$}) in
function propagate(), increase the priority value of $x$ as follows:

$$ p(x) = \left\{
\begin{array}{rl}
    p(x)+c_a*\delta_p,     & \mbox{ $x=x_{t,a}\in{V_a}$  } \\
    p(x)+\delta_p,      & \mbox{ $otherwise$ } \\
\end{array} \right.$$

\item Whenever a learnt clause is generated by analyze() in
DPLL-OPT, increase the priority values $p(x)$ as follows:

$$ p(x) = \left\{
\begin{array}{rl}
    p(x)+c_a*\delta_p,     & \mbox{ $x=x_{t,a}\in{V_a}$  } \\
    p(x)+\delta_p,      & \mbox{ $otherwise$ } \\
\end{array} \right.$$
for each variable $x$ in the new learnt clause. Afterwards, multiply
$\delta_p$ by a constant slightly greater than 1.

\item Periodically divide all priority values by a large constant
and then reset $\delta_p$ to 1, as implemented in MiniSat.

\end{enumerate}
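The steps above can be sketched as a small class; the names (\texttt{Heuristic}, \texttt{bump}, \texttt{decide\_pick}), the concrete decay factor 1.05, and the rescaling constant are illustrative assumptions, not MiniSat's actual API or tuned parameters.

```python
import random

class Heuristic:
    """Cost-aware VSIDS-style priority scheme (sketch)."""

    def __init__(self, action_costs):
        # Steps 1-2: delta_p starts at 1; action variables start at
        # their action cost c_a, all other variables at 0.
        self.delta_p = 1.0
        self.cost = dict(action_costs)          # c_a for action vars
        self.p = {x: c for x, c in action_costs.items()}

    def priority(self, x):
        return self.p.get(x, 0.0)

    def bump(self, x):
        # Steps 4-5: raise p(x) by c_a * delta_p for an action
        # variable, and by delta_p for any other variable.
        self.p[x] = self.priority(x) + self.cost.get(x, 1.0) * self.delta_p

    def on_learnt(self, clause_vars):
        # Step 5: bump every variable in the new learnt clause,
        # then grow delta_p slightly so later bumps outweigh
        # earlier ones (an implicit decay of old activity).
        for x in clause_vars:
            self.bump(x)
        self.delta_p *= 1.05    # "slightly greater than 1"

    def rescale(self):
        # Step 6: periodically divide all priorities by a large
        # constant and reset delta_p, as in MiniSat.
        for x in self.p:
            self.p[x] /= 1e100
        self.delta_p = 1.0

    def decide_pick(self, free_vars, eps=0.02, rng=random):
        # Step 3: random pick with a small constant probability,
        # otherwise the free variable of highest priority.
        if rng.random() < eps:
            return rng.choice(list(free_vars))
        return max(free_vars, key=self.priority)

# Action variable a1 with cost 5 outranks a zero-cost fluent variable,
# and a learnt-clause bump raises it by c_a * delta_p = 5.
h = Heuristic({'a1': 5.0})
h.on_learnt(['a1', 'f1'])
assert h.priority('a1') == 10.0 and h.priority('f1') == 1.0
```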

The main idea of this synthetic heuristic is that two kinds of
variables should be branched on with higher priority. The first kind
are variables with higher action costs: branching on them earlier
raises $cost(V)$ sooner and makes a violation of the cost constraint
more likely, thus leading to earlier backtracking and a speedup of
the search. The second kind are variables that are more likely to
``propagate'' a large number of other variables, which is exactly
the relationship that weak dependencies capture. Our experiments on
a large set of problems show that this heuristic leads to a good
speedup of the solver.

