\section{A Branch-and-Bound Algorithm for Solving MinCost SAT}
\label{sec:opt}

\nop{
%***********************************************************************
% *******************               main procedure of DPLL
%***********************************************************************
\begin{algorithm}[t]
\LinesNumbered
\caption{ DPLL($\Phi$) }
\label{dpll}

\KwIn{ SAT problem $\Phi$ }

\KwOut{ a satisfiable solution or UNSAT}

{initialize the solver\;}

\While{ $true$ }{

    $conflict \leftarrow$ propagate()\;

    \If{$conflict$}{

        $learnt \leftarrow$ analyze($conflict$)\;
        add $learnt$ to the clause database\;

        \eIf{top-level conflict found}{
            return UNSAT\;
        }{
           backtrack()\;
       }
    }\Else{

        \eIf{all variables are assigned}{
            return SAT\;
        }{
            decide()\;
%            assume a variable $x$ to be $true$\;
        }
    }
}
\end{algorithm}
}

In this section, we develop a branch-and-bound based
DPLL~(BB-DPLL) algorithm for optimally solving MinCost SAT problems.
Building on the standard branch-and-bound procedure, we introduce two
key planning-specific techniques: a cost bounding mechanism based on
solving relaxed planning problems and a variable branching scheme
based on action costs. Together, these two techniques significantly
improve solving efficiency.


%\subsection{The DPLL algorithm}
Here we give an overview of the BB-DPLL procedure, which integrates two popular schemes: the DPLL procedure and branch-and-bound search.
Many complete SAT solvers are based on the DPLL algorithm~\cite{Davis62,zhang:CAV-02}, along with other techniques
such as clause learning~\cite{Marques96} and constraint propagation~\cite{Moskewicz:DAC-01}.
Although branch-and-bound has been studied and applied to SAT solving~\cite{Planes03,Fu06,Larrosa09}, planning-specific
bounding and variable ordering techniques have not been extensively studied before in a SAT solver.
%Branch-and-bound has been extensively studied and applied to SAT solving~\cite{Planes03,Fu06,Larrosa09}.

\nop{The DPLL procedure performs a
systematic depth-first search on the space of variable assignments.
At each search node, a new variable is assigned and constraint
propagation is executed. If a conflict occurs during the search, the
procedure performs clause learning and backtracking~\cite{Marques96}.
A description of the DPLL procedure is shown in
Algorithm~\ref{dpll}.
}

The BB-DPLL procedure is shown in Algorithm~\ref{bb-dpll}.
%, which was used in the implementation of the MiniSAT solver~\cite{Een:TAST-04}.
The algorithm maintains a propagation queue that contains all literals
pending propagation and also serves as a representation of the
current assignment. In the procedure, a variable is \emph{free} if
it has not been assigned a value. Initially, all variables are free.

The algorithm repeatedly propagates the literals in the propagation
queue and returns a conflict if one arises (Line 5). Once a
conflict occurs, the procedure analyze() examines the conflict to
generate a learned clause~\cite{Een:TAST-04} (Line 7); after that, it calls
backtrack() to undo assignments until exactly one of the literals
in the learned clause becomes unassigned (Line 12). If no conflict
occurs, it calls the cost\_propagate() procedure to estimate a lower bound for the current partial assignment (Line 15).
It prunes a search node if the lower bound of its cost exceeds $\tau$, the cost of the
incumbent (currently best) solution (Lines 17-18); otherwise, it calls decide() to select a free variable, assign it to
$true$ or $false$, and insert it into the propagation queue (Line 25).
Then a new iteration takes place.

Each time a satisfying solution is found, i.e., when no free variable remains (Line 20), the algorithm updates the incumbent
solution, including the solution count $num$ and the threshold $\tau$, and then backtracks~(Lines 21-23).
BB-DPLL keeps searching until every satisfying solution is either visited or pruned, in order to find the one that minimizes $cost()$, the objective function of the MinCost SAT problem.
The procedure stops when a top-level conflict is found (Lines 8-9).
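As a concrete illustration, the control flow above can be sketched in Python. This is a simplified model, not the solver's implementation: it uses plain recursive search without clause learning or unit propagation, the basic bound $h(\psi)=0$, and an illustrative instance encoding (clauses as lists of signed integers, a cost charged when a variable is set true).

```python
# Simplified sketch of BB-DPLL: recursive search with branch-and-bound
# pruning (g(psi) >= tau), no clause learning or unit propagation.
# Clauses are lists of signed ints; costs[v] is the cost of setting v true.

def solve(clauses, costs, n):
    """Return the minimum cost of a satisfying assignment, or None (UNSAT)."""
    tau = [float("inf")]                       # incumbent cost

    def violated(assign):
        # a clause conflicts once every literal in it is assigned false
        return any(all(abs(l) in assign and assign[abs(l)] != (l > 0)
                       for l in c) for c in clauses)

    def dfs(assign, g):
        if violated(assign) or g >= tau[0]:    # conflict, or bound pruning
            return
        if len(assign) == n:                   # all assigned, none violated: SAT
            tau[0] = g                         # update the incumbent tau
            return
        x = len(assign) + 1                    # naive fixed variable order
        dfs({**assign, x: False}, g)           # cheap (false) branch first
        dfs({**assign, x: True}, g + costs[x])

    dfs({}, 0)
    return None if tau[0] == float("inf") else tau[0]
```

For instance, `solve([[1, 2], [-1, 3]], {1: 5, 2: 2, 3: 1}, 3)` searches the whole assignment space and returns the cost of the cheapest model.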

\nop{
In the first case, it returns SAT that indicates that a solution has been found,
while in the second case it returns UNSAT that means that no
solution exists for the SAT problem.
}

\nop{ Consequently, those clauses related to variable $x$ will also
be propagated, leading to more variable assignments. Each literal
has a decision level. Starting from zero, the decision level is
increased by one each time decide() is called. Those literals newly
assigned via propagation have the same decision level as $x$'s.}

\nop{
To optimally solve a MinCost SAT problem, we adopt a
branch-and-bound algorithm based on the DPLL procedure, denoted as BB-DPLL.
}

\nop{
%Instead of exiting at the first solution as DPLL procedure does,

To reduce the time complexity of such a complete search, we
use a  to prune the search space and a
variable branching scheme to guide the search. In the following
we will describe these two schemes in depth.
}

%Algorithm~\ref{bb-dpll} shows the BB-DPLL algorithm, which largely follows the DPLL framework except for two changes.
\nop{
Second, if the constraint propagation
does not cause a conflict, it calls a cost\_propagate() procedure to
estimate the lower bound of current assignment and prunes a search
node if the lower bound of its cost exceeds $\tau$, the cost of the
incumbent (currently best) solution (Line 15-18).
}




%\subsection{The BB-DPLL algorithm for solving MinCost SAT}

%***********************************************************************
% *******************  main procedure of BB-DPLL   ************
%***********************************************************************
\begin{algorithm}[t]
\LinesNumbered
\caption{ BB-DPLL($\Phi^c$) }
\label{bb-dpll}

\KwIn{MinCost SAT problem $\Phi^c$}

\KwOut{a solution with minimum $cost$ }

cost\_init() \;

$\tau \leftarrow \infty$ \;

$num \leftarrow 0$ \;

\While{ $true$ }{
    $conflict \leftarrow$ propagate()\;

    \eIf{$conflict$}{
        $learnt \leftarrow$ analyze($conflict$)\;
        \eIf{$conflict$ is of top-level}{
            return $num > 0$ ? SAT:UNSAT\;
        }{
            add $learnt$ to the clause database\;
            backtrack()\;
        }
    }{
          cost\_propagate() \;

        $g(\psi) \leftarrow $cost($\psi$) \;

    %    $h(\psi) \leftarrow $cost\_propagate() \;



        \eIf{$g(\psi)+h(\psi) \geq \tau$}{
            %pruning() \;
            backtrack()\;
        }{

            \eIf{all variables are assigned}{
                $num$++ \;
                $\tau \leftarrow cost(\psi)$ \;
                backtrack()\;
            }{
                decide()\;
            }
        }
    }
}
\end{algorithm}
%*************    End of algorithm   ************************************
In the following, we describe the details of two key techniques, a
lower bounding scheme and a variable branching scheme.

\subsection{Lower bounding based on relaxed planning}
\label{sec.relax}


The lower bounding function is one of the most important components
of a branch-and-bound solver. Given a partial variable assignment
$\psi$, we can compute a lower bound on the cost of any solution
extending this partial assignment. A typical lower bounding function
is $f(\psi)=g(\psi)+h(\psi)$, where $g(\psi)$ is the total cost of the action
variables already assigned true in $\psi$, and $h(\psi)$ is a lower bound on the
pending cost that will be incurred by the variables unassigned in $\psi$.
In a basic BB-DPLL algorithm, we simply set $h(\psi)$ to zero,
in which case the lower bound is exactly $g(\psi)$.

However, this basic scheme produces a loose lower bound. Our lower
bounding function is based on the idea of combining the max-heuristic
rule with the additive/sum-heuristic rule when facts are
additive (independent). These heuristics have been extensively studied
in state-space search planners (such as HSP-r~\cite{Bonet01} and
AltAlt~\cite{Nguyen02}) and SAT solvers (such as
MinCostChaff~\cite{Fu06} and DPLL$_{BB}$~\cite{Larrosa09}). Our
lower bounding function differs from previous work in two ways.
First, we implement an admissible bounding function that
integrates the max-heuristic and sum-heuristic rules, whereas HSP-r and
AltAlt use either the admissible max-heuristic or the inadmissible
sum-heuristic to guide action expansion in state-space
search. Second, our lower bounding function is customized for CSTE
planning based on the relaxed planning
graph~\cite{Blum97,Hoffmann01}, while MinCostChaff and DPLL$_{BB}$
implement a generic sum-heuristic, which is less effective on our
problems, as we will show in the experimental results.


%a sum-heuristic as a lower bounding used for branch-and-bound
%pruning by computing independent clauses sets.

We first construct a relaxed planning graph and
compute the lower bound cost $h(x)$ of each variable $x$ in the
graph. Then, we compute $h(\psi)$ for a partial assignment based on the $h(x)$ values.
We also prove that this lower bounding function is admissible; specifically, we show that $h(\psi)$ is always a lower bound on the pending cost of any $\psi$.
A simple relaxed planning graph is shown in Figure~\ref{fig:plangraph} as a running example for our discussion.


\begin{figure}[tp]
 \centering
\scalebox{0.7}{\includegraphics{./figure/plangraph-c.eps}}
\parbox{5in}{
\caption{\label{fig:plangraph}\small  A relaxed planning graph for a
simple example with 4 actions and 7 facts. For simplicity, no-ops
are represented by dots and some action nodes in levels 1 and 2 are
ignored. $\mu(a_1, a_2,a_3,a_4)=(10,10,15,5)$.}}
\end{figure}

%Qiang 3.15
\subsubsection{Theory}
%\vspace{0.1in}\noindent\textbf{Theory.}~
We first define two notions, the contribution set and the additive set, which are useful for
defining an accurate bounding function.
Then, we define a lower bounding function $h(x)$ for each variable $x$ and a function $h(\psi)$ for any partial assignment $\psi$.
After that, we prove that $h(\psi)$ is always a lower bound on the pending cost of any $\psi$.


For each variable $x \in V$, the contribution set $cont(x)$ (formally defined below) is
%the set of all actions that are relevant to reaching the assignment $v_{\psi}(x)=1$.
the set of all possible actions in any solution plan
that reaches the assignment $v_{\psi}(x)=1$ from the initial state $I$.


%CSTE problem $\Pi=(I,F,O,G)$, the
\begin{defn}\label{def:cont}
\em Given a problem $\Pi^s=(I,F^s,O^s,G)$ transformed from a CSTE problem $\Pi=(I,F,O,G)$, and the corresponding MinCost SAT problem $\Phi^c=(V,C,\mu)$ with
makespan $N$, the contribution set \textbf{cont}($x$) of variable $x$ is
defined as:
\begin{itemize}
 \item \label{def:f-cont}  if $x=x_{f,t}\in{V}(0 \le t \le N)$:
\[ cont(x_{f,t}) =\left \{
\begin{array}{cl}
\bigcup\limits_{\{a | f\in{add(a)}\}} cont(x_{a,t-1}), & \mbox{ $t > 0$}\\
\emptyset, & \mbox{ $t = 0$}\\
\end{array} \right. \]

\item \label{def:o-cont}
 if $x=x_{a,t}\in{V}(0 \le t < N)$:
\[ cont(x_{a,t}) =\left \{
\begin{array}{ll}
\bigcup\limits_{f \in{pre(a)}} cont(x_{f,t}),& \mbox{if~a~is~a~no-op~action}\\
\bigcup\limits_{f \in{pre(a)}} cont(x_{f,t}) \ \cup\ \{a\},&
\mbox{ otherwise} \\
\end{array} \right. \]


\end{itemize}
\end{defn}

The above definition gives rise to an efficient algorithm for
computing $cont(x)$ for every variable $x \in V$ in a
preprocessing phase, by proceeding level by level from $t=0$.
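Such a level-by-level preprocessing pass might be sketched as follows. The variable encoding (tuples `(kind, name, t)`), the action-list format, and the choice of modelling no-ops as fact persistence are our own illustrative assumptions, not the paper's implementation.

```python
# Sketch of the preprocessing pass computing cont(x) per Definition (cont),
# bottom-up by level. actions: list of (name, pre_facts, add_facts);
# no-op actions are modelled implicitly as fact persistence.

def contribution_sets(init, actions, N):
    cont = {("f", f, 0): set() for f in init}          # cont(x_{f,0}) = {}
    for t in range(N):
        for name, pre, add in actions:
            if all(("f", f, t) in cont for f in pre):  # applicable at level t
                s = {name}                             # non-no-op adds itself
                for f in pre:
                    s |= cont[("f", f, t)]
                cont[("a", name, t)] = s
        candidates = {f for _, _, add in actions for f in add} | {
            f for (kind, f, lvl) in cont if kind == "f" and lvl == t}
        for f in candidates:
            s, reachable = set(), False
            if ("f", f, t) in cont:                    # no-op: fact persists
                s |= cont[("f", f, t)]
                reachable = True
            for name, pre, add in actions:             # union over achievers
                if f in add and ("a", name, t) in cont:
                    s |= cont[("a", name, t)]
                    reachable = True
            if reachable:
                cont[("f", f, t + 1)] = s
    return cont
```

On the running example of Figure~\ref{fig:plangraph} (assuming, for illustration, that the initial facts support $a_1$ and $a_2$), this reproduces $cont(x_{f_4,2})=\{a_1\}$ and $cont(x_{f_5,2})=\{a_2\}$.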

\begin{defn}
\label{def:indepset} \em A variable set $X=\{x_{1,t_1},x_{2,t_2},...,x_{n,t_n}\}$ is
an \textbf{\emph{additive set}}, denoted as \textbf{adt}(X), if
and only if for each variable pair $(x_{i,t_i},x_{j,t_j})$, $x_{i,t_i},x_{j,t_j} \in X$ and
$i \neq j$, it satisfies:
$$cont(x_{i,t_i})\cap cont(x_{j,t_j})=\emptyset.$$
\end{defn}


Since the contribution set of a variable $x$ contains all actions in
any possible plan that reaches $x$ from the initial state, for an
additive set $X=\{x_{1,t_1},x_{2,t_2},...,x_{n,t_n}\}$, there is no common action
in any two plans reaching $x_{i,t_i}$ and $x_{j,t_j}$, respectively. In other words,
given an additive set $X$ and two variables $x_{i,t_i}, x_{j,t_j}
\in X (i \neq j)$, if there exists an action $a$ as a common
action in two plans reaching $x_{i,t_i}$ and $x_{j,t_j}$, then $a\in cont(x_{i,t_i})
\cap cont(x_{j,t_j})$, which contradicts the definition of the additive set.
For example, in Figure~\ref{fig:plangraph},
$cont(x_{{f_4},2}) \cap cont(x_{{f_5},2})=\{a_1\} \cap \{a_2\} = \emptyset$, thus
$\{x_{{f_4},2}, x_{{f_5},2}\}$ is an additive set.
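The disjointness test in Definition~\ref{def:indepset} is direct to implement; a minimal Python sketch (the contribution sets passed in below are those of the running example, everything else is illustrative):

```python
from itertools import combinations

def is_additive(cont_sets):
    """Definition (additive set): all pairwise contribution sets disjoint."""
    return all(a.isdisjoint(b) for a, b in combinations(cont_sets, 2))
```

For instance, `is_additive([{"a1"}, {"a2"}])` holds for $\{x_{f_4,2}, x_{f_5,2}\}$, while a pair whose contribution sets share an action, such as $\{a_1,a_2,a_3\}$ and $\{a_1,a_2,a_4\}$, is not additive.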

At each decision point (corresponding to a partial assignment $\psi$) during the search, for each
variable $x$ (whether assigned or unassigned in $\psi$), $h(x)$ is maintained as a
lower bound on the following quantity: the minimum total action cost
of any solution plan that 1) reaches the assignment $v_{\psi}(x)=1$ from the
initial state $I$, and 2) is consistent with the partial assignment
$\psi$.

\begin{defn}
\label{def:h-x}\em Given a partial assignment $\psi$, for each variable $x\in V$, the lower
bounding function $h(x)$ is defined as:
\begin{itemize}
\item \label{def:h-neg}if $v_{\psi}(x)=0$, then $h(x)=\infty$;

\item if  $v_{\psi}(x)=1$ or $x$ is unassigned, then:
\label{def:h-fact}
\[h(x_{f,t}) = \left \{
\begin{array}{cl}
\min\limits_{\{a |f\in{add(a)\}}} h(x_{a,t-1}), & \mbox{$t > 0$}\\
0, & \mbox{$t = 0$ and $f\in I$}\\
\infty, & \mbox{$t=0$ and $f \notin I$}\\
\end{array} \right.\]
\label{def:h-action}
\[ h(x_{a,t}) = \mu(x_{a,t})\alpha(x_{a,t}) +
\left \{
\begin{array}{cl}
    \sum\limits_{{f}\in{pre(a)}}h(x_{f,t}),       & \mbox{ if \emph{adt}($\{x_{f,t}| f \in pre(a)$\})}  \\
    \max\limits_{{f}\in{pre(a)}}h(x_{f,t}),          & \mbox{otherwise}\\
\end{array} \right. \]
\end{itemize}
where $\alpha(x_{a,t})=0$ if $v_{\psi}(x_{a,t})=1$, otherwise
$\alpha(x_{a,t})=1$.
\end{defn}

For a variable $x$ assigned false, since no solution
plan satisfying $\psi$ can reach $v_{\psi}(x)=1$, we have $h(x)=\infty$. The
lower bound of a fact variable $x_{f,t}$ that is not assigned false is the
minimum $h$ value over the action variables $\{x_{a,t-1}|f\in
add(a)\}$. A necessary condition for $x_{a,t}$ to be true is that
all of $a$'s precondition variables are true. Thus, a lower bound
for $h(x_{a,t})$ is the maximum of the $h$ values of $a$'s
precondition variables. Further, if $a$'s precondition set is
\emph{additive}, the lower bound can be improved by summing up
the $h$ values of $a$'s precondition variables. The factor $\alpha(x)$ ensures that
the variable cost $\mu(x)$ is counted only once, in either $h(x)$ or $g(\psi)$.

Based on the $h$ values for variables, we now define the
\textbf{lower  bounding function} $h(\psi)$ for any partial assignment
$\psi$. $h(\psi)$ is computed as
\begin{align}\label{def:h-goal}
h(\psi) = \left \{
\begin{array}{cl}
    \sum\limits_{{f}\in{G}}h(x_{f,N}),       & \mbox{ if \emph{adt}($\{x_{f,N} |f \in{G}\}$)}  \\
    \max\limits_{{f}\in{G}}h(x_{f,N}),          & \mbox{otherwise}\\
\end{array} \right.
\end{align}

For example, in Figure~\ref{fig:plangraph}, since
\emph{adt}($\{x_{{f_4},2}, x_{{f_5},2}\}$) holds,
$h(x_{{a_3},2})=\mu(x_{{a_3},2})\alpha(x_{a_3,2})+h(x_{f_4,2})+h(x_{f_5,2})$
and
$h(x_{{a_4},2})=\mu(x_{{a_4},2})\alpha(x_{a_4,2})+h(x_{f_4,2})+h(x_{f_5,2})$.
At the beginning of the search, $\alpha(x_{a,t})=1$ for all $x_{a,t}\in
V$. Then, $h(x_{a_3,2})=35$ and $h(x_{a_4,2})=25$, and
$h(\psi)=\max\{h(x_{f_6,3}), h(x_{f_7,3})\}=\max\{h(x_{a_3,2}),
h(x_{a_4,2})\}=35$. Note that without considering additive sets we would get a worse lower bound:
in that case, $h(x_{a_3,2})=25$, $h(x_{a_4,2})=15$, and $h(\psi)=25$.
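The arithmetic of this example can be checked with a few lines of Python. This is a hand-coded walk through Definition~\ref{def:h-x} and Equation~(\ref{def:h-goal}) for the fixed graph of Figure~\ref{fig:plangraph} with all variables unassigned (so $\alpha=1$ everywhere); the assumption that $a_1$ and $a_2$'s preconditions are initial facts is ours.

```python
# Worked example: h-values of Fig. plangraph, mu(a1,a2,a3,a4) = (10,10,15,5),
# all variables unassigned (alpha = 1). Switch `additive` to compare the
# sum-rule against the plain max-rule for the preconditions {f4, f5}.

def h_example(additive):
    # level-2 fact variables: each achieved by one level-1 action whose
    # preconditions are initial facts (h = 0), so h(fact) = mu(achiever)
    h_f4, h_f5 = 10, 10                      # via a1 and a2, mu = 10 each
    combine = sum if additive else max       # adt({f4, f5}) -> sum rule
    h_a3 = 15 + combine([h_f4, h_f5])        # mu(a3) = 15
    h_a4 = 5 + combine([h_f4, h_f5])         # mu(a4) = 5
    h_f6, h_f7 = h_a3, h_a4                  # single achiever each
    # goals {f6, f7} share a1, a2 in their contribution sets: not additive,
    # so h(psi) uses the max rule
    return max(h_f6, h_f7)
```

With the additive rule the bound is 35; without it, only 25, matching the text.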


Now, we prove that $h(\psi)$ is a lower bound on the actual
minimum pending cost of any partial assignment $\psi$. Given a partial
assignment $\psi$ and any variable $x \in V$, we use $\mu^p(x)$ to
denote the \textbf{pending cost} (the total cost of only those
action variables that are unassigned in $\psi$) of any solution plan
that assigns $v_{\psi}(x)=1$ and agrees with $\psi$. If
$v_{\psi}(x)=1$ cannot be reached by any solution within the predefined makespan bound that is consistent with $\psi$,
$\mu^p(x)$ is unbounded, denoted as $\mu^p(x)=\infty$; otherwise, $\mu^p(x)$ is bounded.

\begin{lemma}\em
\label{lemm:h-x} Given a partial assignment $\psi$, for each fact
variable $x_{f,t} \in V~(0 \leq t \leq N)$ such that $\mu^p(x_{f,t})$
is bounded, we have $\mu^p(x_{f,t}) \geq h(x_{f,t})$.
\end{lemma}
\begin{proof} See \ref{appendix:proofs}.
\end{proof}



\nop{ Intuitively, the pending cost of a partial assignment $\psi$ is
no less than the maximum estimation value of all goal variables
$(x_{f,N} \in{G})$. If goal variables set is \emph{additive}, we
can improve it to the summation of lower bounding value of all goal
fact variables. }

For any solution plan $p$ reaching all goal variables
$\{x_{f,N}|f\in G\}$ from the initial state $I$ and satisfying the
current partial assignment $\psi$, we denote the \textbf{pending cost}
of the plan as $\mu^p(\psi)$, which is the total cost of all the
actions that are not assigned in $\psi$. The \textbf{minimum pending
cost} for any partial assignment $\psi$, denoted as $h^r(\psi)$, is the
minimum $\mu^p(\psi)$ over all possible solution plans that are
consistent with $\psi$, i.e. $h^r(\psi) = \min_{p} \mu^p(\psi)$.

%CSTE problem $\Pi=(I,F,O,G)$, the corresponding
\begin{thm}\em
\label{the:h-v} Given a problem $\Pi^s=(I,F^s,O^s,G)$ transformed from a CSTE problem $\Pi=(I,F,O,G)$, and the corresponding MinCost SAT problem
$\Phi^c=(V,C,\mu)$, for any partial assignment $\psi$ of $\Phi^c$, we
have $h(\psi)\leq h^r(\psi)$.
\end{thm}
\begin{proof}
 See \ref{appendix:proofs}.
\end{proof}

Theorem~\ref{the:h-v} shows that $h(\psi)$ is always less than or equal to the
actual minimum pending cost of any partial assignment $\psi$. Hence,
during the search, we can use $g(\psi) + h(\psi)$ as a lower bound of the
total cost of any solution plan satisfying $\psi$.

%***********************************************************************
%                     procedure Cost_Init()
%***********************************************************************
\begin{algorithm}[t]
\LinesNumbered
\caption{cost\_init()}
\label{costinit}

\KwIn{$\Pi^s=(I,F^s,O^s,G)$, $\Phi^c=(V,C,\mu)$, $N$}

%\KwOut{$h(\psi)$}

\For{all $x_{f,0}\in{V}$}{
    set $h(x_{f,0})=0$ if $f \in I$ and $h(x_{f,0})=\infty$ otherwise \;  }

\For{$t=0$ to $N-1$}{
    \For{all $x_{a,t}\in{V}$}{
      compute $h(x_{a,t})$ using \textbf{Definition~\ref{def:h-action}}\;
    }

    \For{all $x_{f,t+1}\in{V}$}{
       compute $h(x_{f,t+1})$ using \textbf{Definition~\ref{def:h-fact}}\;
    }
}

%compute  $h(\psi)$ using \textbf{formula~(\ref{def:h-goal})}\;

\end{algorithm}


%***********************************************************************
%                     procedure Cost_Propagate()
%***********************************************************************
\begin{algorithm}[t]
\LinesNumbered
\caption{cost\_propagate()}
\label{costprop}

\KwIn{$\Pi^s=(I,F^s,O^s,G)$, $\Phi^c=(V,C,\mu)$}

%\KwOut{$h(\psi)$}

initialize $U$ as a priority queue sorted by $t$, containing the variables assigned in the last round of propagation\;

\While{ $U\neq{\emptyset}$}{
    get $x$ from $U$, $U \leftarrow U\backslash\{x\}$\;
    \eIf{$x=x_{a,t}\in{V}$}{
        \eIf{$v_{\psi}(x_{a,t})=false$}{
            $newcost \leftarrow \infty$\;
        }{
            compute $newcost$ using \textbf{Definition~\ref{def:h-action}}\;
        }
        \If{$newcost\neq{h(x_{a,t})}$}{
            $h(x_{a,t}) \leftarrow newcost$\;
            \For{all $f\in{add(a)}$}{
                $U \leftarrow U \cup \{x_{f,t+1}\}$\;
            }
        }
    }{
        \eIf{$v_{\psi}(x_{f,t})=false$}{
            $newcost \leftarrow \infty$\;
        }{
            compute $newcost$ using \textbf{Definition~\ref{def:h-fact}}\;
        }
        \If{$newcost\neq{h(x_{f,t})}$}{
            $h(x_{f,t}) \leftarrow newcost$\;
            \For{all $a$ such that $f\in{pre(a)}$}{
                $U \leftarrow U \cup \{x_{a,t}\}$\;
            }
        }
    }
}

%update $h(\psi)$ for affected $\psi$ using \textbf{formula~(\ref{def:h-goal})}\;

\end{algorithm}

\subsubsection{Implementation}
%\vspace{0.1in}
%\noindent\textbf{Implementation.}~
The algorithms for initializing and maintaining the $h(x)$ values for all $x \in V$ are shown in
Algorithms~\ref{costinit} and~\ref{costprop}, respectively. To initialize the cost
function $h(x)$, we first set $h(x_{f,0})=0$ if $f\in{I}$ and
$h(x_{f,0})={\infty}$ otherwise. Then, we compute the values for
variables from level 0 to $N$ following
Definition~\ref{def:h-x}.
%In Example~\ref{ex:rlxpg}, $h(x_{{a_3},2},
%x_{{a_4},2},x_{{f_6},3},x_{{f_7},3})$ is initialized as $(35, 25,
%35, 25)$ and $h(\psi)=35$.

In our implementation, for each action variable $x_{a,t}$, we pre-compute whether its precondition variables form an additive set.
We also pre-compute whether the goal variables form an additive set.
Then we decide whether to use $\max$ or $\sum$ based on Definition~\ref{def:h-x} and Equation~(\ref{def:h-goal}).
%Our results show that the time of
%computing additive sets can be ignored.

\nop{In Example~\ref{ex:rlxpg}, $x_{{a_3},2}$ and $x_{{a_4},2}$ have
an additive precondition set \emph{adt($\{x_{{f_4},2},
x_{{f_5},2}\}$)}}


Algorithm~\ref{costprop} updates the $h$ values whenever no
conflict occurs during the search. It uses a priority queue $U$ to store all
variables whose $h$ values need to be updated after a constraint propagation.
Since the variables in $U$ are ordered by the time step $t$, they are
updated in increasing order of $t$.
When the $h(x)$ values are properly maintained, $h(\psi)$ for
any partial assignment $\psi$ can be computed easily using Equation~(\ref{def:h-goal}).
The updated $h(\psi)$ is used in Line 17 of the BB-DPLL() procedure.
For the example in Figure~\ref{fig:plangraph}, if $x_{a_1,1}$ is assigned a value, then the
$h$ values of $x_{f_4,2}$, $x_{a_3,2}$, $x_{a_4,2}$, $x_{f_6,3}$, and
$x_{f_7,3}$ will be updated.
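The update loop can be sketched with Python's `heapq`. The callback interface (`recompute`, `successors`, `level`) is our own abstraction of the recomputation rules and dependency edges of Algorithm~\ref{costprop}, not the solver's actual data structures.

```python
# Sketch of cost_propagate(): a min-heap ordered by level t, so h values
# are re-evaluated in increasing t and each change is pushed to dependents.
import heapq

def cost_propagate(seeds, recompute, successors, level, h):
    """seeds: variables whose assignments just changed."""
    heap = [(level(x), x) for x in seeds]
    heapq.heapify(heap)
    queued = set(seeds)
    while heap:
        _, x = heapq.heappop(heap)
        queued.discard(x)
        new = recompute(x)                 # Definition h-fact / h-action
        if new != h[x]:                    # value changed: propagate upward
            h[x] = new
            for y in successors(x):
                if y not in queued:        # avoid duplicate queue entries
                    queued.add(y)
                    heapq.heappush(heap, (level(y), y))
    return h
```

Because dependents sit at higher levels than the variables they depend on, each variable's new value is in place before anything above it is recomputed.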

\nop{ Algorithm~\ref{costprop} updates the $h$ values each time if
no conflict occurs. The priority queue $U$ initially includes the
variables that is assigned a value in last round of propagation.
While $U$ is not empty, we dequeue a variable $x$ from $U$~(Line 3),
re-calculate its $h(x)$ value, and insert the variables whose $h$
value may be affected by $x$ into $U$. We repeat this process until
$U$ is empty. The process stops when $U$ is empty. }

%lower bounding function $h(\psi)$ is computed as
%formula~(\ref{def:h-goal})~(Line 19). $h(\psi)$ is used in Line 17 of
%BB-DPLL() for pruning.

%Note that all of our development are based on the simple temporal planning problem aa


\subsection{Action cost based variable branching}
\label{sec.branch}

In our basic BB-DPLL procedure, the variable branching scheme is the same as that in
MiniSat~\cite{zhang:CAV-02,Een:TAST-04}, an improved version of VSIDS (Variable State Independent Decaying Sum)~\cite{Moskewicz:DAC-01}.
VSIDS works as follows:

\begin{enumerate}
%\begin{itemize}
\item Each variable $x$ has a priority value $p(x)$, initialized
to 0.

\item $\delta_{p}$ is a priority increment that is initialized to 1.

\item In decide(), with a constant probability $P_0$, randomly select an unassigned variable $x$,  and with probability $1-P_0$, select the
unassigned variable with the highest priority value. Assign the selected variable to $true$.

\item Whenever a learnt clause is generated by analyze() in BB-DPLL,
for each variable $x$ in the new learnt clause, we update the
priority value $p(x)$ as follows:
$$ p(x) = p(x)+\delta_p.$$
After that, multiply $\delta_p$ by a constant $\theta>1$.

\item Periodically divide all priority values by a large constant $\gamma$
and reset $\delta_p$ to 1.
%\end{itemize}
\end{enumerate}

In MiniSat, $P_0=0.02$, $\theta=1.2$, and $\gamma=100$.

VSIDS is competitive with other variable branching
heuristics for SAT solving~\cite{Een:TAST-04}. We present a
branching scheme that is customized for CSTE planning and more
effective than VSIDS on these problems.

MinCost SAT problems differ from SAT problems in that they have an
optimization goal of minimizing the total variable costs. Hence, the
variable branching mechanism can be improved by considering the
variable costs. We integrate action costs into the VSIDS branching scheme.
\nop{
In addition, we enhance it by evaluating weak
dependencies~\cite{Arbelaez:SAC-09}, a simplified form of functional
dependencies between variables, which ranks how important each
individual variable is. More precisely, each time a variable $y$
gets assigned as a result of the assignment of $x$, a weak
dependency $(x,y)$ is recorded. The weight of $x$ is consequently
raised so that it obtains a higher priority.
}

%Integrating action costs and weak dependency into the VSIDS heuristic, we have the following variable branching rule for BB-DPLL:
Integrating action costs into the VSIDS heuristic, we have the following variable branching rule for BB-DPLL:
\begin{enumerate}
\item Each variable $x$ has a priority value $p(x)$. Initialize
$p(x)$ as follows:
$$ p(x) = \left\{
\begin{array}{cl}
    \mu(a),     & \mbox{ if $x=x_{a,t}\in{V}$  } \\
    0,      & \mbox{ $otherwise$ } \\
\end{array} \right.$$

\item $\delta_{p}$ is a priority increment that is initialized to 1.


\item In decide(), with a constant probability $P_0$, randomly select an unassigned variable $x$,  and with probability $1-P_0$, select the
unassigned variable with the highest priority value. Assign the selected variable to $false$. %(We use $P_0=0.02$)

\nop{
\item Whenever a variable $y$ gets assigned as a result of the
assignment of $x$ in function
propagate(), increase the priority value of $x$ as follows:
$$ p(x) = \left\{
\begin{array}{cccl}
    p(x)&+&\mu(a)\delta_p,     & \mbox{if $x=x_{a,t}\in{V}$  } \\
    p(x)&+&\delta_p,      & \mbox{ otherwise } \\
\end{array} \right.$$
}

\item Whenever a learnt clause is generated by analyze() in BB-DPLL, for each variable $x$ in the new learnt clause,
increase the priority value $p(x)$ as follows:
$$ p(x) = \left\{
\begin{array}{cccl}
    p(x)&+&\mu(a)\delta_p,     & \mbox{if $x=x_{a,t}\in{V}$  } \\
    p(x)&+&\delta_p,      & \mbox{otherwise} \\
\end{array} \right.$$
After that, multiply $\delta_p$ by a constant $\theta>1$. %(We use $\theta =1.2$)

\item Periodically divide all priority values by a large constant $\gamma$
%(we use $\gamma = 100$)
  and reset $\delta_p$ to 1.
\end{enumerate}

In our implementation, we set $P_0=0.02$, $\theta=1.2$, and $\gamma=100$.
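The cost-weighted branching rule above might be sketched as a small priority manager. The class interface is our own framing; the fallback to a bump of $\delta_p$ for zero-cost (fact) variables follows the "otherwise" case of rule 4, and the parameter values follow the text.

```python
# Sketch of the cost-weighted VSIDS rule: priorities start at mu(a),
# learnt-clause bumps are scaled by the action cost, and decide() picks
# the highest-priority free variable and assigns it false.
import random

class CostVSIDS:
    def __init__(self, action_cost):        # action_cost: var -> mu(a), 0 for facts
        self.p = dict(action_cost)          # rule 1: p(x) initialized to mu(a)
        self.cost = dict(action_cost)
        self.delta = 1.0                    # rule 2: priority increment
        self.theta, self.gamma, self.P0 = 1.2, 100.0, 0.02

    def on_learnt(self, clause_vars):       # rule 4: cost-scaled bump
        for x in clause_vars:
            # mu(a)*delta for action variables, plain delta otherwise
            self.p[x] += (self.cost[x] or 1) * self.delta
        self.delta *= self.theta

    def rescale(self):                      # rule 5: periodic decay
        for x in self.p:
            self.p[x] /= self.gamma
        self.delta = 1.0

    def decide(self, free_vars, rng=random):  # rule 3: returns (var, False)
        if rng.random() < self.P0:
            return rng.choice(sorted(free_vars)), False
        return max(free_vars, key=lambda x: self.p[x]), False
```

With this rule, a high-cost action variable is selected early and set false, steering the search toward the low-$g(\psi)$ region first.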

Compared to the VSIDS heuristic, our BB-DPLL heuristic gives
higher priority to action variables with higher costs.
Branching on these high-cost action variables earlier and assigning them
\emph{false} has two advantages: 1) it results in a search space with a lower
existing cost $g(\psi)$, which is more likely to lead to solution
plans with lower costs; 2) it results in higher $h(\psi)$ values and
more potential violations of the bounding constraints, leading to
earlier backtracking and stronger pruning.

\nop{
The second type of variables with high priority are those variables that are more likely to trigger effective constraint
propagation and result in large numbers of variable assignments.
Weak dependency aims exactly at capturing this relationship; and the
experimental results in~\cite{Arbelaez:SAC-09} show that it usually leads to
substantial speedup.
}
