


\section{Optimizing Total Costs by Solving MinCost SAT Formula}
\label{sec:opt}

In this section, we first briefly review the DPLL algorithm and then
propose our branch-and-bound based DPLL (BB-DPLL) algorithm for
solving the MinCost SAT problem. On top of a typical
branch-and-bound algorithm, we propose two key techniques that
greatly improve efficiency: a lower bounding function based on
relaxed temporal planning and the BB-DPLL heuristic. Further, to
guarantee the correctness of the BB-DPLL algorithm, we present two
additional procedures: pruning() and add\_blocking\_clause().


\subsection{The DPLL algorithm}

%***********************************************************************
% *******************               main procedure of DPLL
%***********************************************************************
\begin{algorithm}[t]
\label{dpll} \linesnumbered \caption{ DPLL($S$) }

\KwIn{ SAT problem $S$ }

\KwOut{ a satisfiable solution }

{initialize the solver\;}

\While{ $true$ }{

    $conflict \leftarrow$ propagate()\;

    \If{$conflict$}{

        $learnt \leftarrow$ analyze($conflict$)\;
        add $learnt$ to the clause database\;

        \eIf{top-level conflict found}{
            return UNSAT\;
        }{
           backtrack()\;
       }
    }\Else{

        \eIf{all variables are assigned}{
            return SAT\;
        }{
            decide()\;
%            assume a variable $x$ to be $true$\;
        }
    }
}
\end{algorithm}

Many complete SAT solvers are based on the DPLL
algorithm~\cite{zhang:CAV-02,Davis62}, along with other
techniques such as clause learning~\cite{Marques96} and Boolean
constraint propagation~\cite{Moskewicz:DAC-01}.


The DPLL algorithm is shown in Algorithm~\ref{dpll}, which largely
follows the implementation of the MiniSat solver~\cite{Een:TAST-04}.
DPLL() is a conflict-driven procedure. It repeatedly calls
propagate()~\cite{Een:TAST-04}, which propagates the first literal
$p$ in the propagation queue and returns a conflict if one arises
(Line 3). If no conflict occurs and all variables are assigned, a
solution is found and SAT is returned (Line 13).

If no conflict occurs but there are unassigned variables, it calls
decide() to select an unassigned variable $p$, assign it $true$ or
$false$, and insert it into the propagation queue (Line 15).
Consequently, the clauses involving variable $p$ are also
propagated, leading to further variable assignments. Each literal
has a decision level, and newly assigned literals share the decision
level of $p$. Starting from zero, the decision level is increased by
one each time decide() is called.

Once a conflict occurs, the procedure analyze() analyzes the
conflict to derive a learned clause~\cite{Een:TAST-04}, and
backtrack() is called to cancel the assignments that led to the
conflict (Line 10). The backtrack() procedure keeps undoing variable
assignments until exactly one of the variables in the learned clause
becomes unassigned.
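For concreteness, the loop above can be sketched as follows. This is a minimal, illustrative Python rendering under simplifying assumptions: clause learning, non-chronological backtracking, and watched literals are omitted, and the names mirror the pseudocode rather than MiniSat's actual API. Clauses are lists of non-zero integers, with a negative integer denoting a negated variable.

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None (UNSAT)."""
    if assignment is None:
        assignment = {}

    def value(lit):
        v = assignment.get(abs(lit))
        if v is None:
            return None
        return v if lit > 0 else not v

    # propagate(): unit propagation until fixpoint, detecting conflicts.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            vals = [value(l) for l in clause]
            if any(v is True for v in vals):
                continue                      # clause already satisfied
            unassigned = [l for l, v in zip(clause, vals) if v is None]
            if not unassigned:
                return None                   # conflict: every literal false
            if len(unassigned) == 1:          # unit clause: literal is forced
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True

    free = {abs(l) for c in clauses for l in c} - assignment.keys()
    if not free:
        return dict(assignment)               # all variables assigned: SAT

    # decide(): branch on an unassigned variable, trying both polarities.
    var = min(free)
    for val in (True, False):
        result = dpll(clauses, {**assignment, var: val})
        if result is not None:
            return result
    return None                               # both branches failed
```

For example, `dpll([[1, 2], [-1, 2], [-2, 3]])` returns a model setting $x_2$ and $x_3$ true, while `dpll([[1], [-1]])` returns None.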



\subsection{MinCost SAT problem and BB-DPLL algorithm}


We approach the MinCost SAT problem by combining a general
branch-and-bound algorithm with planning-specific techniques. The
branch-and-bound search prunes the search space using a lower
bounding function, usually defined as $f=g+h$, where $g$ is the cost
already incurred at the current search node and $h$ is the minimum
possible cost from the current node to a goal node. Our BB-DPLL is a
branch-and-bound based DPLL algorithm. Unlike the standard DPLL
algorithm, which stops as soon as a solution is found, BB-DPLL
searches the whole space to solve the following MinCost SAT problem.


%\begin{defn} {\em (Temporal MinCost SAT Problem)} Given a temporal planning problem
%$\Pi_{T}=(I,F,O,G,A)$ , the corresponding simple temporal planning
%problem $\Pi_{T^s}=(I,F^s,O^s,G,A)$ and its SAT encoding problem
%$S=(V,C)$ with time span $N$, find a variable assignment satisfying
%$C$ and minimize the objective:
%$$ cost(V) = \sum_{\forall x\in{V}}{c(x)v(x)},$$
%Here, $v(x)=1$ if $x$ is \emph{true} and $v(x)=0$ otherwise in the
%assignment. $c(x)$ is non-negative cost associated with variable
%$x$.
%\[ c(x) = \left  \{
%\begin{array}{ll}
%    c(o),       & \mbox{ if $x=x_{o_{\vdash},t },\ o \in O \ and\ o_{\vdash} \in O^s $ }  \\
%    0,          & \mbox{otherwise}\\
%\end{array} \right. \]
%\end{defn}

%Note that each action $o$ in $\Pi_{T}$ is translated into two
%actions $o_{\vdash}$ and $o_{\dashv}$. Thus we assign
%$c(o_{\vdash})=c(o)$ while $c(o_{\dashv})=0$.

%Q.lv  the definition of cost(V): "$v(x)=0$ otherwise" already include the free variables.
%More generally, for a partial assignment where some of the variables
%are $free$, we can still calculate $g=cost(v)$ by excluding the
%unassigned variables in the summation.

We remark that the MinCost SAT problem considered here differs from
the MAX-SAT problem, whose goal is to minimize the number of
unsatisfied clauses, and from weighted MAX-SAT, which minimizes the
total weight of the unsatisfied clauses.

%\vspace{+0.1in}

\vspace{0.02in}
\subsubsection{Base BB-DPLL}

Algorithm~\ref{modifiedsolve} shows the branch-and-bound
DPLL (BB-DPLL) algorithm, which largely follows the DPLL framework
except for two changes. First, each time a satisfying solution is
found, we do not terminate the search but add a blocking clause to
prevent the search from yielding previously found solutions (Line
21). Second, we employ a pruning strategy that prunes a search node
whenever the sum of the lower-bound estimate of the future cost and
the current cost exceeds the cost of the incumbent solution (Line
17).


%***********************************************************************
% *******************  main procedure of BB-DPLL   ************
%***********************************************************************
\begin{algorithm}[t]
\label{modifiedsolve} \linesnumbered \caption{ BB-DPLL($S$) }

\KwIn{MinCost SAT problem $S$}

\KwOut{a solution with minimum $cost$ }

initialize the $h$ cost \;

$\tau \leftarrow \infty$ \;

$num \leftarrow 0$ \;

\While{ $true$ }{
    $conflict \leftarrow$ propagate()\;

    \eIf{$conflict$}{
        $learnt \leftarrow$ analyze($conflict$)\;
        \eIf{$conflict$ is of top-level}{
            return $num > 0$ ? SAT:UNSAT\;
        }{
            add $learnt$ to the clause database\;
            backtrack()\;
        }
    }{
        $g \leftarrow $cost($V$) \;

        $h \leftarrow $cost\_propagate() \;

        \eIf{$g+h \geq \tau$}{
            pruning() \;
            backtrack()\;
        }{

            \eIf{all variables are assigned}{
                add\_blocking\_clause() \;
                backtrack()\;
            }{
                decide()\;
            }
        }
    }
}
\end{algorithm}


%\vspace{0.02in}
%\subsubsubsection{Blocking clauses}
\paragraph{Blocking clause}
Each time a solution is found, we update the incumbent value $\tau$
with its cost and add a blocking clause to the clause database
so that the visited solution will not be found again. Given a
MinCost SAT problem $S=(V,C)$ and a current assignment $V'$, we
define $\Phi$ to be the set of variables that have been selected by
decide() ($\Phi \subseteq V'$). We synthesize a blocking clause $M$
as $M =\bigvee_{x\in \Phi}{literal(x)}$, where $$
literal(x) = \left\{
\begin{array}{ll}
 \overline{x},     & \mbox{ $v(x) = true$  }  \\
           x,      & \mbox{ $v(x)= false$ } \\
        %   false    & \mbox{ $v_i = free$ } \\
\end{array} \right.$$

The blocking clause ensures that any future solution found by the
search differs from the current one. For example, if a solution to a
5-variable SAT-SCAN problem is
$V=\{x_1,x_2,x_3,x_4,x_5\}|_{v(x)=(true,false,true,true,false)}$ and
$\Phi=\{x_2,x_4\}$, we add the blocking clause ${x_2}\vee
\overline{x_4}$.
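As a concrete illustration, the synthesis of a blocking clause over the decision variables $\Phi$ can be sketched in Python as follows (a hypothetical sketch with illustrative data structures; the actual solver works on MiniSat's internal clause representation). Positive integers denote $x$, negative integers denote $\overline{x}$.

```python
def blocking_clause(assignment, phi):
    """Build a blocking clause from the decided variables.

    assignment: dict variable -> bool (the found solution)
    phi: the set of variables chosen by decide()
    """
    clause = []
    for x in sorted(phi):
        if assignment[x]:            # v(x) = true  -> add literal not-x
            clause.append(-x)
        else:                        # v(x) = false -> add literal x
            clause.append(x)
    return clause
```

For the example above, variables $x_1,\dots,x_5$ assigned $(true,false,true,true,false)$ with $\Phi=\{x_2,x_4\}$ yield the clause $[x_2, \overline{x_4}]$.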


%***********************************************************************
%                     procedure solver_addBlockClause()
%***********************************************************************
\begin{algorithm}[t]
\label{algorithm:addBlockingClause}
\linesnumbered\caption{add\_blocking\_clause($V'$) }

\KwIn{a solution $V'$}

%\KwOut{solution}

$num$++\;

$\tau \leftarrow cost(V')$\;

update the incumbent solution\;

create a blocking clause $M$\;

add $M$ to the clause database\;

backtrack()\;

\end{algorithm}



Algorithm~\ref{algorithm:addBlockingClause} gives the details of
add\_blocking\_clause(), which is invoked when the solver finds a
new solution. It starts by incrementing the solution counter $num$
(Line 1) and updating the minimum cost $\tau$ with the cost of the
new solution (Line 2). Then, the solution plan is recorded (Line 3).
After that, we create a blocking clause (Line 4) and add it to the
clause database (Line 5), ensuring that the solver never generates
the same solution again.
% The clause will provide
%an additional constraint that enhances constraint propagation. Using
%the blocking clause, a learnt clause will be produced by analyze()
%(Line 6) and added to the clause database (Line 5).
%We can do this because the current solution violates the blocking
%clause and thus can be treated as a conflict.
Finally, the procedure undoes assignments until precisely one of
the literals of the blocking clause becomes unassigned (Line 6).


%\vspace*{0.02in}
%\subsubsubsection{Pruning clauses}
\paragraph{Pruning clause}
Once a solution is found, we update the threshold $\tau$ with its
cost value. In the subsequent search, if the sum of the cost of the
current partial assignment $V'$ and the bound estimate $h$ reaches
$\tau$ ($g+h \ge \tau$), we consider the node a deadend, since no
further assignments could make the cost any lower. In this case, we
call pruning() to stop propagating and backtrack. This pruning
technique is crucial for speeding up the search; without it, we
found that enumerating all the solutions of a given SAT instance is
prohibitively expensive.

Given a MinCost SAT problem $S=(V,C)$ and a current partial
assignment $V'$, we define $\Phi$ to be the set of variables that
have been selected by decide() ($\Phi \subseteq V'$). We synthesize
a pruning clause $M$ as $M =\bigvee_{x\in \Phi}{literal(x)}$,
where $$ literal(x) = \left\{
\begin{array}{ll}
 \overline{x},     & \mbox{ $v(x) = true$  }  \\
           x,      & \mbox{ $v(x)= false$ } \\
          false,   & \mbox{ $v(x) = free$ } \\
\end{array} \right.$$

For example, suppose we have a partial assignment of a 5-variable
SAT-SCAN problem:
$V=\{x_1,x_2,x_3,x_4,x_5\}_{v(x)=(true,false,free,false,true)}$ and
the set $\Phi=\{x_1,x_4\}$; we then add the pruning clause
$\overline{x_1}\vee{x_4}$.

Algorithm~\ref{algorithm:pruning} gives the details of the pruning()
procedure. It first creates a pruning clause from the partial
assignment $V'$ (Line 1) and adds it to the clause database (Line
2). Finally, it backtracks to undo the assignments (Line 3), since
the node is a deadend with respect to the upper bound $\tau$.


%***********************************************************************
% *******************      procedure pruning()      *************
%***********************************************************************
\begin{algorithm}[t]
\label{algorithm:pruning} \linesnumbered \caption{pruning($V'$)}

\KwIn{a partial assignment $V'$}

%\KwOut{}

create a pruning clause $M$\;

add clause $M$ to the clause database\;

backtrack()\;

\end{algorithm}

\paragraph{Lower bounding function}
In BB-DPLL, in order to estimate a lower bound on the cost of
reaching a goal state from a partial assignment of variables during
search, we dynamically maintain $g$ and $h$. Since $g$ is the sum of
the costs incurred by the current partial assignment, we define $g$ as:
$$ g = \sum_{\forall x\in{V}}{c(x)v(x)},$$
where $c(x)$ is the cost of variable $x$, $v(x)=1$ if $x$ is true, and
$v(x)=0$ if $x$ is false or unassigned. In the base BB-DPLL algorithm,
we simply set $h$ to always be $0$.
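Computing $g$ for a partial assignment is then a straightforward sum over the variables currently assigned $true$; a minimal sketch (with illustrative data structures, not the solver's actual ones):

```python
def g_cost(cost, assignment):
    """g = sum of c(x) over variables assigned true.

    False and unassigned variables contribute 0, per the definition above.
    cost: dict variable -> non-negative cost
    assignment: dict variable -> bool; unassigned variables are absent
    """
    return sum(c for x, c in cost.items() if assignment.get(x) is True)
```

For instance, with costs $\{x_1{:}5, x_2{:}3, x_3{:}7\}$ and $x_1$ true, $x_3$ false, $x_2$ free, we get $g=5$.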


\paragraph{Heuristic}
The heuristic of base BB-DPLL is the same as that of
MiniSat~\cite{Een:TAST-04,zhang:CAV-02}, whose SAT heuristic is an
improved version of VSIDS (Variable State Independent Decaying
Sum)~\cite{Moskewicz:DAC-01}. VSIDS is quite competitive with other
branching heuristics in terms of the number of branches needed to
solve a problem.

The synthetic heuristic of MiniSat uses the following rules:

\begin{enumerate}
\item Each variable $x$ has a priority value $p(x)$, initialized to 0.

\item In function decide(), with a constant probability select an
uninstantiated variable $x$ at random; if this fails, select the
uninstantiated variable with the highest priority value. Assign the
selected variable to be true.

\item Whenever a learnt clause is generated by analyze() in BB-DPLL,
increase the priority value
$$ p(x) = p(x)+\delta_p$$
for each variable $x$ in the new learnt clause. After that, multiply
$\delta_p$ by a constant slightly greater than 1. $\delta_{p}$ is a
priority increment initialized to 1.

\item Periodically divide all priority values by a large constant and
then reset $\delta_p$ to 1, as implemented in MiniSat.
\end{enumerate}
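The four rules above can be sketched as follows. This is an illustrative Python rendering; the decay constant, rescaling threshold, and random-selection frequency are assumptions, not MiniSat's exact values.

```python
import random

class VsidsHeuristic:
    """Sketch of VSIDS-style priority bookkeeping (rules 1-4)."""

    def __init__(self, variables):
        self.p = {x: 0.0 for x in variables}   # rule 1: priorities start at 0
        self.delta = 1.0                       # priority increment delta_p
        self.rescale_limit = 1e100             # assumed rescaling threshold

    def on_learnt_clause(self, learnt_vars):
        # Rule 3: bump every variable in the new learnt clause,
        # then grow the increment slightly.
        for x in learnt_vars:
            self.p[x] += self.delta
        self.delta *= 1.05
        # Rule 4: periodically rescale to keep the numbers in range.
        if max(self.p.values()) > self.rescale_limit:
            for x in self.p:
                self.p[x] /= self.rescale_limit
            self.delta = 1.0

    def decide(self, unassigned, random_freq=0.02):
        # Rule 2: with small probability pick a random unassigned variable,
        # otherwise the one with the highest priority; branch true.
        if not unassigned:
            return None
        if random.random() < random_freq:
            return random.choice(sorted(unassigned)), True
        return max(unassigned, key=lambda x: self.p[x]), True
```

The effect is that variables appearing in recent conflicts accumulate exponentially larger bumps and are branched on first.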


\subsubsection{Relaxed temporal planning graph based lower bound function}

The lower bounding function is one of the most important components
of a branch-and-bound solver. To guarantee that the search finds an
optimal solution, the lower bound must never overestimate the true
cost. Moreover, the efficiency of computing the bound is a key
performance factor, since the lower bounding function is invoked
very frequently.

To compute $h$, we first dynamically maintain an estimation function
$h(x)$ for each variable $x \in V$. At each decision point during
search, $h(x)$ is maintained as a lower bound on the following
quantity: the minimum action cost of any plan in which variable $x$
is $true$ and the current partial assignment is preserved.

%Before giving the definition of estimation function $h(x)$, we
%define a contribution set of variable in $S$.
Given a simple temporal planning problem $\Pi^s=(I,F^s,O^s,G,A)$
and a MinCost SAT problem $S=(V,C,c)$ with time span $N$, we define
the contribution set $cont(x)$ and the estimation function $h(x)$ of
a variable $x$ as follows:

\begin{defn}
\label{def:f-cont} \em
 For each fact variable $x_{f,t}\in{V}(0 \le t \le N)$, we have:
\begin{eqnarray*}
cont(x_{f,t})&=&\bigcup\limits_{\{o | f\in{add(o)}\}}
cont(x_{o,t-1}).
\end{eqnarray*}
\end{defn}

Note that $cont(x_{f,0})=\emptyset,\ \forall f \in F^s$.

\begin{defn}
\label{def:o-cont} \em
 For each action variable $x_{o,t}\in{V}(0 \le t < N)$, we have:
\begin{eqnarray*}
cont(x_{o,t})&=&\bigcup\limits_{\forall f \in{pre(o)}} cont(x_{f,t})
\ \bigcup\ \{o\}.
\end{eqnarray*}
\end{defn}

The contribution set of a variable $x$ is the set of all actions
that appear on some path from the initial variables
$\{x_{f,0} \mid f \in I\}$ to $x$.

\begin{lema}
\label{lema:cont} \em All variables in a variable set
$X=\{x_1,x_2,...,x_n\}$ are \emph{totally independent} iff every
variable pair $x_i,x_j \in X$ ($i \neq j$) satisfies:
$$cont(x_i)\bigcap cont(x_j)=\emptyset.$$
\end{lema}
That variables $\{x_1,x_2,...,x_n\}$ are \emph{totally independent}
means that no action is shared between any two paths from the
initial state to any two variables $x_i$ and $x_j$.
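The contribution sets and the independence test of Lemma~\ref{lema:cont} can be computed level by level over the layered encoding. The following Python sketch illustrates one way to do so, under illustrative data structures (an `actions` map from action name to its precondition and add sets); frame/noop actions, if present in the encoding, are assumed to be included in that map.

```python
def contribution_sets(actions, init_facts, horizon):
    """Compute cont() for fact and action variables, level by level.

    actions: dict name -> (preconditions, add_effects), both sets of facts
    init_facts: set of facts true initially (cont at t=0 is empty)
    """
    cont_fact = {(f, 0): set() for f in init_facts}
    cont_act = {}
    for t in range(horizon):
        for o, (pre, add) in actions.items():
            if all((f, t) in cont_fact for f in pre):
                # cont(x_{o,t}) = union of precondition conts, plus {o}
                cont_act[(o, t)] = set().union(
                    *(cont_fact[(f, t)] for f in pre)) | {o}
        for o, (pre, add) in actions.items():
            if (o, t) in cont_act:
                for f in add:
                    # cont(x_{f,t+1}) = union over the achievers' conts
                    cont_fact.setdefault((f, t + 1), set()).update(
                        cont_act[(o, t)])
    return cont_fact, cont_act

def totally_independent(conts):
    """Lemma: variables are totally independent iff their contribution
    sets are pairwise disjoint."""
    conts = list(conts)
    return all(conts[i].isdisjoint(conts[j])
               for i in range(len(conts)) for j in range(i + 1, len(conts)))
```

For instance, two facts each achievable only by its own distinct action have disjoint contribution sets and are totally independent.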

\begin{defn}
\label{def:h-fact} \em
 For each fact variable $x_{f,t}\in{V}(0 \le t \le N)$, we have:
\begin{eqnarray*}
 h(x_{f,t}) &=&   \min\limits_{\{o |
   f\in{add(o)\}}} h(x_{o,t-1}).
\end{eqnarray*}
\end{defn}
The lower bound of a fact variable $x_{f,t}$ is the minimum
estimation value over the actions that add the fact.

\begin{defn}
\label{def:h-action} \em
 For each action variable $x_{o,t}\in{V}(0 \le t < N)$, we have:
\[ h(x_{o,t}) = c(o) + \left \{
\begin{array}{ll}
    \sum\limits_{\forall{f}\in{pre(o)}}h(x_{f,t}),       & \mbox{ if all precondition fact variables of $x_{o,t}$ are totally independent }  \\
    \max\limits_{\forall{f}\in{pre(o)}}h(x_{f,t}),          & \mbox{otherwise}\\
\end{array} \right. \]
\end{defn}
Intuitively, we consider all the preconditions of action $o$. A
necessary condition for the action variable $x_{o,t}$ to be true is
that all of its preconditions are true. Thus, the lower bound of
$x_{o,t}$ is at least the maximum estimation value over its
precondition facts. Further, if all of its precondition facts are
totally independent, the lower bound of the action variable can be
improved to the sum of the estimation values of its precondition
facts.

Definitions~\ref{def:h-fact} and~\ref{def:h-action} define the
estimation function for unassigned variables. If a variable is
assigned false during search, $h(x)$ is set to $\infty$. If an
action variable $x_{o,t}$ is assigned $true$, $c(o)$ is subtracted
from $h(x_{o,t})$.

\begin{defn}
\label{def:h-goal} \em Based on $h(x_{f,t})$, the estimated bound of
the current assignment $V'$ is defined as:
\[ h = \left \{
\begin{array}{ll}
    \sum\limits_{\forall{f}\in{G}}h(x_{f,N}),       & \mbox{ if all fact variables $\{x_{f,N} |f \in{G}\}$ are totally independent}  \\
    \max\limits_{\forall{f}\in{G}}h(x_{f,N}),          & \mbox{otherwise}\\
\end{array} \right. \]
\end{defn}

The true bound of the current assignment $V'$ is no less than the
maximum estimation value over all goal fact variables $x_{f,N}$
($f \in{G}$). If all goal fact variables are totally independent, it
can be improved to the sum of the estimation values of all goal fact
variables.

Since the contribution set of each variable in $S$ can be
pre-computed (computed only once), we can decide in advance whether
to use ``max'' or ``sum'' in Definition~\ref{def:h-action} for each
action variable and in Definition~\ref{def:h-goal} for the
estimation bound of the current assignment. The extra time for
computing the contribution sets is negligible.


 \nop{ $h(x_{o,t})$, $\forall {x_{o,t}\in{V}}$ is lower bound of
the cost to achieve action $a$ at time step $t$, which equal to the
sum of the maximal cost of satisfying the preconditions of the
action and the cost of this actions. $h(x_{t,v})_{x_{t,v}\in{V}}$
responses the cost of achieving the fact $v$ at $t$ time step, which
equal to minimal $h(x_{t-1,a})$ of actions which additions includes
fact $v$. $h(x_{t,g})$ responses the cost of achieving all goal fact
at $t$ time step, which equal to maximal cost of all goal facts
$x_{{f,t}, \forall{f\in{G}}}$. }



%***********************************************************************
%                     procedure Cost_Init()
%***********************************************************************
\begin{algorithm}[t]
\label{costinit} \linesnumbered\caption{cost\_init()}

\KwIn{$\Pi_{T^s}$, $S$, $N$}

\KwOut{$h$}

\For{all $x_{f,0}\in{V}$}{
    \eIf{$f\in{I}$}{
        $h(x_{f,0}) \leftarrow 0$\;
    }{
        $h(x_{f,0}) \leftarrow {\infty}$\;
    }
}

\For{t=0 to N-1}{
    \For{all $x_{o,t}\in{V}$}{
      compute  $h(x_{o,t})$ using \textbf{Definition ~\ref{def:h-action}}\;
    }

    \For{all $x_{f,t+1}\in{V}$}{
       compute  $h(x_{f,t+1})$ using \textbf{Definition ~\ref{def:h-fact}}\;
    }
}

compute  $h$ using \textbf{Definition ~\ref{def:h-goal}}\;

\end{algorithm}


To initialize the cost function $h(x)$, given the initial state $I$,
at level 0 we set $h(x_{f,0})=0$ if $f\in{I}$ and
$h(x_{f,0})={\infty}$ otherwise. Then we construct the initial
values for variables from level 0 to $N$ following
Definitions~\ref{def:h-fact} and~\ref{def:h-action} (see
Algorithm~\ref{costinit}).
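A compact sketch of this initialization follows, in illustrative Python. The `independent` map is an assumption standing in for the pre-computed contribution-set test: it records where ``sum'' may replace ``max'' per Definitions~\ref{def:h-action} and~\ref{def:h-goal}. Frame/noop actions, if the encoding has them, are assumed to be part of `actions`.

```python
INF = float("inf")

def cost_init(actions, cost, init_facts, goal_facts, horizon, independent):
    """Bottom-up computation of h over the layered encoding.

    actions: dict name -> (preconditions, add_effects)
    cost: dict action name -> c(o)
    independent: map flagging totally-independent support sets
                 (keys (o, t) per action variable, and "goal")
    """
    h_fact, h_act = {}, {}
    all_facts = set(init_facts) | set(goal_facts) | {
        f for pre, add in actions.values() for f in pre | add}
    for f in all_facts:                     # level 0
        h_fact[(f, 0)] = 0 if f in init_facts else INF
    for t in range(horizon):
        for o, (pre, add) in actions.items():
            hs = [h_fact.get((f, t), INF) for f in pre]
            # Definition (action var): sum if preconditions are totally
            # independent, max otherwise, plus the action's own cost.
            agg = sum(hs) if independent.get((o, t)) else max(hs, default=0)
            h_act[(o, t)] = cost[o] + agg
        for f in all_facts:
            # Definition (fact var): min over the actions adding f.
            achievers = [h_act[(o, t)] for o, (pre, add) in actions.items()
                         if f in add]
            h_fact[(f, t + 1)] = min(achievers, default=INF)
    goal_hs = [h_fact.get((f, horizon), INF) for f in goal_facts]
    return sum(goal_hs) if independent.get("goal") else max(goal_hs, default=0)
```

For a single action of cost 3 achieving the only goal fact within one step, this returns $h=3$.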

\nop{ From level 0 to goal level $k$, the cost of executing an
action $a$ at level t($h(x_{o,t})$) can be initialized as the
potential cost of reaching the most expensive precondition of $a$ at
level t-1 plus action cost $c_a$(Line 8), and the cost of an fact at
level t($h(x_{f,t})$) can be initialized as the potential cost of
reaching the minimal action cost in level t-1 which add the
fact(Line 10). Finally, the overall h value for BB-DPLL produce is
computed as the maximal cost of reaching goal state in level k. }


%***********************************************************************
%                     procedure Cost_Propagate()
%***********************************************************************
\begin{algorithm}[t]
\label{costprop} \linesnumbered\caption{cost\_propagate()}

\KwIn{$\Pi_{T^s}$, $S$, $N$}

\KwOut{$h$}

initialize $U$ as a priority queue indexed by $t$\;

\While{ $U\neq{\emptyset}$}{
    get $x$ from $U$, $U \leftarrow U\backslash\{x\}$\;
    \If{$x=x_{o,t}\in{V}$}{

        \lIf{$v(x_{o,t})$=false}{
            $newcost \leftarrow \infty$\;
        }\Else{
        %Q.lv
            %$newcost \leftarrow \max\limits_{{f}\in{pre(a)}}h(x_{f,t})$\;
            compute $newcost$ using \textbf{Definition ~\ref{def:h-action}}\;

            \If{$v(x_{o,t})$=true}{
                $newcost \leftarrow newcost - c(o)$\;
            }
        }
        \If{$newcost\neq{h(x_{o,t})}$}{
        %Q.lv
            $h(x_{o,t}) \leftarrow newcost$\;
            \For {all $f\in{add(o)}$}{
                $U \leftarrow U + \{x_{f,t+1}\}$\;
                }
        }
    }\lElse{
        \If{$x=x_{f,t}\in{V}$}{
            \lIf{$v(x_{f,t})$=false}{
                $newcost \leftarrow \infty$\;
            }\lElse{
                compute $newcost$ using \textbf{Definition ~\ref{def:h-fact}}\;
            }
            \If{$newcost\neq{h(x_{f,t})}$}{
            %Q.lv
                $h(x_{f,t}) \leftarrow newcost$\;
                \For {all $o$ such that $f\in{pre(o)}$}{
                        $U \leftarrow U + \{x_{o,t}\}$\;
                }
            }
        }
    }
}

compute  $h$ using \textbf{Definition ~\ref{def:h-goal}}\;

\end{algorithm}

Each time the solver finishes propagate() without causing a
conflict, the $h$ value is updated by Algorithm~\ref{costprop}. The
priority queue $U$ initially contains the variables that have been
assigned a value since the last call to cost\_propagate() (Line 1).
Since the variables in $U$ are ordered by time level $t$,
double-computing $h(x)$ is avoided. While $U$ is not empty, we
dequeue a variable $x$ from $U$ (Line 3), re-calculate its $h$
value, and insert into the queue the variables whose $h$ values may
be affected by $x$. \nop{If $x$ is a fact variable $x_{f,t}$ in $V$,
we compute the new cost of $x$. If the new cost is not equal to
$h(x)$, we should update $h(x)$ as the new cost and add the all fact
variables in addition of action $a$ which may need to be updated
later(Line 11); 2) If $x$ is fact variable $x_{f,t}$ in $V$, we
compute the new cost of $x$(Line 13 and 14). If the new cost is not
equal to $h(x)$, we should update $h(x)$ as the new cost(Line 17)
and add the all action variables that the fact $f$ is in the
precondition of these actions(Line 18).} We repeat this process
until $U$ is empty. The final value $h$ is then computed following
Definition~\ref{def:h-goal} (Line 21); at least $h$ additional cost
is required to reach the goal from the current partial assignment.
$h$ is used in Line 17 of BB-DPLL() for pruning.


\vspace{0.02in}
\subsubsection{BB-DPLL Heuristic}
In cost-optimal planning, the SAT instance assigns a cost to each
action variable, which MiniSat does not take into account. We
therefore add action cost as an important factor in our BB-DPLL
heuristic.

Weak dependencies~\cite{Arbelaez:SAC-09} are a simplified form of
functional dependencies between variables. These relations can be
used to rank the importance of each variable. More precisely, each
time a variable $y$ gets instantiated as a result of the
instantiation of $x$, a \textbf{weak dependency $(x,y)$} is
recorded. The weight of $x$ is then raised, so the variable may be
selected with higher priority later.

Integrating action costs with the MiniSat heuristic and weak
dependencies, we obtain the following synthetic heuristic rules for
BB-DPLL:

\begin{enumerate}
\item Each variable $x$ has a priority value $p(x)$. Initialize
$p(x)$ as follows:

$$ p(x) = \left\{
\begin{array}{rl}
    c(o),     & \mbox{ if $x=x_{o,t}\in{V}$  } \\
    0,      & \mbox{ $otherwise$ } \\
\end{array} \right.$$

\item In function decide(), with a constant probability select an
uninstantiated variable $x$ at random; if this fails, select the
uninstantiated variable with the highest priority value. Assign the
selected variable to be false.

\item Whenever a variable $y$ gets instantiated as a result of the
instantiation of $x$ (\textbf{weak dependency $(x,y)$}) in function
propagate(), increase the priority value of $x$ as follows:

$$ p(x) = \left\{
\begin{array}{rl}
    p(x)+c(o)\cdot\delta_p,     & \mbox{ $x=x_{o,t}\in{V}$  } \\
    p(x)+\delta_p,      & \mbox{ $otherwise$ } \\
\end{array} \right.$$

\item Whenever a learnt clause is generated by analyze() in BB-DPLL,
increase the priority values $p(x)$ as follows:

$$ p(x) = \left\{
\begin{array}{rl}
    p(x)+c(o)\cdot\delta_p,     & \mbox{ $x=x_{o,t}\in{V}$  } \\
    p(x)+\delta_p,      & \mbox{ $otherwise$ } \\
\end{array} \right.$$
for each variable $x$ in the new learnt clause. After that, multiply
$\delta_p$ by a constant slightly greater than 1. $\delta_{p}$ is a
priority increment initialized to 1.

\item Periodically divide all priority values by a large constant
and then reset $\delta_p$ to 1, as implemented in MiniSat.

\end{enumerate}
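The cost-aware rules can be sketched as follows, again in illustrative Python; the constants are assumptions, and the random-selection step of rule 2 as well as the periodic rescaling of rule 5 are omitted for brevity.

```python
class BBDpllHeuristic:
    """Sketch of the BB-DPLL priority rules: cost-weighted bumps,
    weak-dependency rewards, and false-first branching."""

    def __init__(self, variables, action_cost):
        # Rule 1: action variables start at their action's cost, others at 0.
        self.cost = dict(action_cost)        # var -> c(o); absent = not an action
        self.p = {x: self.cost.get(x, 0.0) for x in variables}
        self.delta = 1.0                     # priority increment delta_p

    def bump(self, x):
        # Rules 3-4: the increment is scaled by c(o) for action variables.
        c = self.cost.get(x, 0.0)
        self.p[x] += (c if c > 0 else 1.0) * self.delta

    def on_weak_dependency(self, x, y):
        # Rule 3: y was forced by the instantiation of x -> reward x.
        self.bump(x)

    def on_learnt_clause(self, learnt_vars):
        # Rule 4: bump every variable in the learnt clause, grow the increment.
        for x in learnt_vars:
            self.bump(x)
        self.delta *= 1.05

    def decide(self, unassigned):
        # Rule 2 (simplified): highest-priority unassigned variable,
        # branched FALSE first, so expensive actions are excluded early.
        if not unassigned:
            return None
        return max(unassigned, key=lambda x: self.p[x]), False
```

Branching false on a high-cost action variable both keeps $g$ low and, once the action is ruled out, tends to raise $h$, which is exactly the behavior argued for below.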

Compared with the MiniSat heuristic, our BB-DPLL heuristic tends to
branch on two kinds of variables with higher priority. The first
kind is variables with higher costs, and branching on them earlier
has two advantages. First, since higher-cost action variables have
higher priority, branching on them earlier (assigning them false)
steers the search toward the lower-$g$ part of the space, which may
lead to solutions with lower total action cost. Second, it raises
$h$ and makes a violation of the cost constraint more likely, thus
leading to earlier backtracking and a speedup of the search. The
second kind is variables that are likely to ``propagate'' a large
number of other variables, which is exactly the relationship that
weak dependencies capture; our experiments on a large set of
problems show that this can lead to a good speedup of the solver.

\subsection{Experimental Results}
We implemented our MinCost SAT solver (MC-SAT) using BB-DPLL on top
of the MiniSat solver. We studied the performance of MC-SAT and
compared the efficiency of different techniques: the base BB-DPLL
algorithm (base), BB-DPLL with the lower bounding function (+h),
BB-DPLL with the heuristic function (+heu), and with both (all). We
ran our experiments in four domains: P2P, driverlog, matchlift, and
matchlift-variant. The results are shown in
Figures~\ref{fig:cost-1} and~\ref{fig:time-1}.

Figure~\ref{fig:cost-1} shows the minimum total action costs of the
solutions found by the different strategies. ``no-opt" denotes the
total cost of the solution found by MiniSat. In
Figure~\ref{fig:cost-1}, we can see that the base BB-DPLL algorithm
(base) finds solutions with much lower total costs than ``no-opt".
With the lower bounding function or the BB-DPLL heuristic, MC-SAT
obtains solutions with even lower total costs than the base BB-DPLL
algorithm, which indicates that these techniques help prune the
search space and guide the search to the minimum total cost solution
earlier. In the matchlift and matchlift-variant domains, the minimum
total cost solution is fairly easy to find, so all strategies of
MC-SAT find the same minimum solution.

Figure~\ref{fig:time-1} shows the search times of all solutions,
which reflect the search progress. The x-axis is the time at which a
solution is found, and the y-axis is the total action cost of that
solution. In order to compare the solution costs of different
instances in one graph, we normalize the distribution of solution
costs to $[0,1]$: ``0" represents the minimum total cost solution
and ``1" the maximum total cost solution of each instance. In each
sub-figure of Figure~\ref{fig:time-1}, a point is a solution and a
line connecting points represents a search progression. Comparing
the search progress of the different strategies, we see that with
the BB-DPLL heuristic (+heu), the first solution found by MC-SAT has
a much lower total cost than with base BB-DPLL, and the time to find
the optimal solution is also shorter. With the lower bounding
function, MC-SAT finds lower total cost solutions than base BB-DPLL
even though it spends extra time computing the lower bound. Using
both the BB-DPLL heuristic and the lower bounding function gives the
best results: the minimum total cost solution and the least search
time.

Based on the above analysis, we can see that both the lower bounding
function and the BB-DPLL heuristic bring important improvements to
the MC-SAT solver. Furthermore, the BB-DPLL heuristic plays a key
role in the branch-and-bound search of MC-SAT.

The detailed cost and time results for all domains are presented in
Tables~\ref{tb:p2p}, \ref{tb:driverlog}, \ref{tb:matchlift},
and~\ref{tb:matchliftv} in Appendix~\ref{appendix:cost-time}.

\begin{figure}[htp]
  %\centering
\begin{center}
    \subfigure{\includegraphics[scale=0.42]{./figure/p2p-cost-c.eps}}
    \hspace{2mm}
    \subfigure{\includegraphics[scale=0.42]{./figure/driverlog-cost-c.eps}}
    \vspace{2mm}
    \subfigure{\includegraphics[scale=0.42]{./figure/matchlift-cost-c.eps}}
    \hspace{2mm}
    \subfigure{\includegraphics[scale=0.42]{./figure/matchliftv-cost-c.eps}}
\caption{\label{fig:cost-1}  Comparison of the total action costs with different techniques.}
\end{center}
\end{figure}

\begin{figure}[htp]
  %\centering
\begin{center}
    \subfigure{\includegraphics[scale=0.42]{./figure/p2p-time-c.eps}}
    \hspace{2.5mm}
    \subfigure{\includegraphics[scale=0.42]{./figure/driverlog-time-c.eps}}
    \vspace{2mm}
    \subfigure{\includegraphics[scale=0.42]{./figure/matchlift-time-c.eps}}
    \hspace{2.5mm}
    \subfigure{\includegraphics[scale=0.42]{./figure/matchliftv-time-c.eps}}
\caption{\label{fig:time-1}  Comparison of the times of all solutions with different techniques.}
\end{center}
\end{figure}
