\chapter{Solution Assessment}
\label{ch:Assessment}


Solution Assessment, as mentioned earlier, plays a critical role in the success of our framework. A good and informative assessment supports designers at design time, as well as users at runtime, when deciding on an appropriate configuration for their STS. For automatically self-reconfigurable STS, Solution Assessment becomes even more important because the STS has to decide by itself which configuration should be applied.

There are two types of assessment: \emph{qualitative} and \emph{quantitative}. While the former suits only humans, the latter is applicable to automatic self-reconfiguration. In our framework, we mainly support the latter kind of assessment; however, it would not be difficult to extend the framework to accept qualitative assessment as well.

From a technical point of view, quantitative evaluators fall into two groups: \emph{early evaluators} and \emph{late evaluators}. The former are used by the AI planner to drive the planning process: only solutions that improve on the previous one, in the sense of a particular evaluator, are considered. This kind of evaluator has to be hard-coded in the PDDL script. The late evaluators, by contrast, are Java classes rather than hard-coded script; they are used to evaluate generated solutions.

The early evaluators can access the properties of organizational objects (actors, actors' capabilities and goals) through functions encoded in the PDDL script. As discussed in chapter \ref{ch:ODM}, each object's property descriptor has a corresponding PDDL predicate, which is used to encode the property values. The early evaluators need to be integrated into the preconditions and effects of PDDL actions. Therefore, property names must be kept consistent between the PDDL script and the ODM.

The late evaluators are further divided into two categories: \emph{simple evaluators} and \emph{simulating-based evaluators}. A \emph{simple evaluator} takes the input solution, performs analysis on it, and computes the final measurement. During this analysis, the evaluator can access the ODM to retrieve information associated with goals or actors, e.g., the \emph{effort} or \emph{time} an actor has to pay to accomplish a task. A \emph{simulating-based evaluator}, on the other hand, simulates the input solution and analyzes the simulation process to calculate the assessment value. A simple example of this kind of evaluator is the \emph{Execution Time} evaluator, which simulates a given solution and measures the time needed to complete the whole solution. Notice that the execution time of a solution is different from its length: a longer solution might complete earlier than a shorter one, because a solution's actions can be performed simultaneously. When the precondition of an action is satisfied, it is immediately executed without waiting for prior actions in the solution.

To summarize, our framework supports three different types of evaluators:
\begin{itemize}
    \item \emph{Early evaluators} which are hard-coded in the PDDL script,
    \item \emph{Simple evaluators} which are scalar functions taking a solution and computing the measurement,
    \item \emph{Simulating-based evaluators} which are also scalar functions, but take as input the simulation result returned by the event-based simulation of a given solution.
\end{itemize}

The design of this framework accepts runtime addition of late evaluators. Extra evaluators are imported into the framework by registering them with the \emph{Evaluator Registry}. While registering an evaluator, users can declare the parameters used by this evaluator. Each parameter has a name, a data type and a default value; the default value can be changed later. Currently, four primitive parameter types are accepted: \emph{Integer, String, Date time} and \emph{Boolean}. In addition, the \emph{composite parameter} type allows users to declare structural values.
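As a rough illustration, the registry and its parameter declarations could look like the following Java sketch. Apart from the \emph{Evaluator Registry} itself and the parameter model described above, all class and method names are assumptions, not the framework's actual API:

```java
import java.util.HashMap;
import java.util.Map;

class EvaluatorEntry {
    final String name;
    final Map<String, Object> defaults = new HashMap<>();

    EvaluatorEntry(String name) { this.name = name; }

    // Declare a parameter: a name plus a default value whose runtime type
    // plays the role of the declared data type (Integer, String, Boolean, ...).
    EvaluatorEntry declare(String param, Object defaultValue) {
        defaults.put(param, defaultValue);
        return this;
    }
}

class EvaluatorRegistry {
    private final Map<String, EvaluatorEntry> entries = new HashMap<>();

    void register(EvaluatorEntry e) { entries.put(e.name, e); }

    EvaluatorEntry get(String name) { return entries.get(name); }
}
```

A registration would then read, e.g., `reg.register(new EvaluatorEntry("BCQ").declare("Threshold", 5))`, with the default value changeable later through the entry.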

A simple evaluator must implement the interface \mono{IEvaluator} (figure \ref{inf:IEvaluator}). This interface provides two methods: one for computing an assessment value and one for combining two assessment values. In the method \mono{evaluate()}, we can access other services provided by the framework (context), the ODM (model), the parameters of this evaluator declared in its registry entry (entry), and the list of actions to be evaluated (actions). This method returns an \mono{EvaluatorResult} which contains the assessment value as well as all values necessary to recalculate it.

\begin{figure}
    \centering
    \fbox{
    \begin{algo}
        \Interface IEvaluator
            \State \function\ evaluate(context: IJasimContext, model:ODM, entry: EvaluatorEntry, actions: list of PddlAction): EvaluatorResult;
            \State \function\ combine(res1, res2: EvaluatorResult): EvaluatorResult;
        \EndInterface
    \end{algo}}
    \caption{IEvaluator interface.}
    \label{inf:IEvaluator}
\end{figure}

The second method of this interface combines two separate results into one. It is extremely useful for simulating-based evaluators: during event-based simulation, if an event happens and causes the simulator to re-plan, the original solution is not completely executed and is substituted by a new one. Therefore, to assess the original solution, we should evaluate both the executed part of the original solution and the substitution.
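A minimal Java sketch of one plausible \mono{combine()} strategy is shown below. The internals of \mono{EvaluatorResult} and the weighted-average policy are illustrative assumptions; a real evaluator chooses the combination that fits its metric (an execution-time evaluator, for instance, would sum durations instead):

```java
class EvaluatorResult {
    final double value;   // the assessment value
    final double weight;  // how much of the solution it covers, e.g. #actions

    EvaluatorResult(double value, double weight) {
        this.value = value;
        this.weight = weight;
    }

    // Combine the result of the executed part of the original solution with
    // the result of the substituting solution, as a weighted average.
    static EvaluatorResult combine(EvaluatorResult r1, EvaluatorResult r2) {
        double w = r1.weight + r2.weight;
        double v = (r1.value * r1.weight + r2.value * r2.weight) / w;
        return new EvaluatorResult(v, w);
    }
}
```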

\section{Simulation Engine}
The simulating-based evaluators carry out their assessment by analyzing the execution of a given solution. This is done thanks to the \emph{simulation engine}. The simulation engine supports two kinds of simulation: \emph{plain simulation} and \emph{event-based simulation}. The \emph{plain simulation} takes a solution and simulates it in its entirety, while the \emph{event-based simulation} also takes into account events which may happen during the simulation process. The effects of an event may invalidate the solution being simulated, in which case the solution needs to be re-planned from the point where it gets corrupted. Simulating a solution in this manner shows the resilience of the original solution with regard to a given set of events. This assessment perspective is important since these events can happen in the real world, and the selected solution should be good enough to resist environmental changes. The simulation engine provides two simulators, called the \emph{plain simulator} and the \emph{event-based simulator}, to support plain simulation and event-based simulation, respectively. These simulators are described in the following sections.

\subsection{Plain Simulator}
The objective of the simulation is to create an environment in which all actions of a solution are executed as they would be in the real world. That is, each action requires a period of time to accomplish and consumes some resources. At any given time, an actor can only perform one action, but different actors can carry out actions in parallel unless there are dependencies among their jobs. A good simulator should have an internal optimizer so that as many actions as possible are performed at the same moment, accomplishing the solution in the shortest time.

Each action has a particular execution time. The unit of time is not important in the simulation; it can thus be a minute, an hour, a day or whatever. Instead, we emphasize the relative durations, e.g., goal $\goal_1$ takes five times longer than goal $\goal_2$ to be accomplished. Table \ref{tbl:simul_actions} shows the execution time for the supported actions in a solution. In particular, the duration of the \mono{SATISFIES} action depends on which goal is satisfied as well as on who is satisfying it, while the other actions are assumed to be accomplished in constant time. Unsupported actions are simply dropped by the simulator.

\begin{table} [h]
    \centering
    \begin{tabular}{|c|c|}
        \hline
        % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
        \textbf{Action} & \textbf{Exec. Time} \\ \hline
        \mono{AND/OR\_DECOMPOSE} & 1 \\
        \mono{PASSES} & 1 \\
        \mono{SATISFIES} & varies\\
        \hline
    \end{tabular}
    \caption{Supported actions by the solution simulator and their execution time.}
    \label{tbl:simul_actions}
\end{table}
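The duration lookup of table \ref{tbl:simul_actions} can be sketched in Java as follows. The concrete \mono{SATISFIES} durations here are illustrative stand-ins for the actor/goal-dependent values stored in the ODM:

```java
import java.util.Map;

class ActionDurations {
    // Stand-in for the per-capability duration lookup; these values are
    // purely illustrative, not taken from the framework.
    static final Map<String, Integer> SATISFY_TIME =
            Map.of("Alice:Shutdown_electricity", 5);

    static int duration(String functor, String actor, String goal) {
        switch (functor) {
            case "AND_DECOMPOSE":
            case "OR_DECOMPOSE":
            case "PASSES":
                return 1;               // constant time, as in the table
            case "SATISFIES":
                // varies with who satisfies which goal
                return SATISFY_TIME.getOrDefault(actor + ":" + goal, 1);
            default:
                return 0;               // unsupported actions are dropped
        }
    }
}
```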

The heart of the simulator is the algorithm illustrated in figure \ref{algo:simul}. This algorithm is based on the idea of the JASON simulator \cite{BORD-HUBN-WOOL-07-AS}, in which the simulation time is divided into slots. In each timeslot, the actions whose preconditions are satisfied are executed. The simulation stops when no more actions can be executed.

\begin{figure}
    \fbox{
    \begin{algo}
        \State \textkeyword{function} simulate(actions: list of PddlAction)
        \Var
            \State \var{clock}: integer; \var{terminated}: boolean;
            \State \var{activeActions}: list of PddlAction;
        \EndVar
        \Begin
            \State initializeAgentsList(\var{actions}); \label{simul:1}
            \State \var{clock} = 0;  \label{simul:2}
            \State \var{terminated} = \false; \label{simul:3}
            \While{terminated = \false} \label{simul:while}
                \State \var{activeActions} = getListOfActiveActions(); \label{simul:getActActions}
                \If {\var{activeActions} \keyword{is not} empty}
                    \State \var{slot} = create new time slot for the current clock;
                    \ForEach{\var{action}: Action \textkeyword{in} \var{activeActions}} \label{simul:foreach}
                        \State \var{slot}.add(\var{action});
                        \State performAction(\var{action}, \var{clock});
                    \EndFor
                \Else
                    \State \var{terminated} = \true;
                \EndIf
            \EndWhile
            \State \Return \var{clock};
        \End
    \end{algo}}
    \caption{Solution simulation algorithm.}
    \label{algo:simul}
\end{figure}

The simulation algorithm starts with a call to \mono{InitializeAgentsList()} (line \ref{simul:1}). This procedure scans through the solution to build the list of actors and their corresponding actions. It also assigns the initial goals to these actors based on the initial requests. When a goal is assigned to an actor, its corresponding actions are put in the action queue of this actor. In lines \ref{simul:2} and \ref{simul:3}, the algorithm resets the virtual clock to 0 and sets the termination condition to false. The virtual clock is used to measure the duration of actions, as mentioned above. The simulation then loops until the termination condition is met (the While loop at line \ref{simul:while}). In this loop, the simulator builds a list of active actions (line \ref{simul:getActActions}). In each clock cycle, only one action is active per actor. The active action of an actor is selected from its action queue and must be in ready-to-execute status, meaning that the execution precondition of the action is satisfied. For the actions \mono{AND\_DECOMPOSE} and \mono{OR\_DECOMPOSE}, the precondition is that the goal to be decomposed is the active goal of the appropriate actor. For the action \mono{PASSES}, the precondition is that the delegating actor is available and free; each actor has its own goal queue keeping the list of goals to be fulfilled. For the action \mono{SATISFIES}, the precondition is more complex: the actor who will satisfy the goal must be available and free, all prerequisite goals must be fulfilled, and all resources required by the goal must be available and ready to use.

If the active action list is not empty, the simulator creates a new time slot, which holds a list of actions and the actors in charge of performing them in a specific clock cycle. After being added to the time slot, each active action starts executing (line \ref{simul:foreach}). Based on the action's type, the simulator behaves as follows:

\begin{itemize}
    \item \mono{AND/OR\_DECOMPOSE}: the simulator looks up the ODM for the sub goals and then replaces the top goal by its sub goals in the actor's goal queue.
    \item \mono{PASSES}: the simulator removes the goal from the goal queue of the delegating actor and adds it to the delegatee's goal queue.
    \item \mono{SATISFIES}: the simulator does nothing, assuming the actor is working to achieve the goal.
\end{itemize}
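The three behaviors above can be sketched in Java as follows. Goal queues and the ODM sub-goal lookup are simplified stand-ins, not the framework's actual data structures:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ActionDispatch {
    final Map<String, Deque<String>> goalQueues = new HashMap<>();
    // Stand-in for the ODM lookup of sub-goals (an assumption of this sketch).
    final Map<String, List<String>> subGoals = new HashMap<>();

    Deque<String> queue(String actor) {
        return goalQueues.computeIfAbsent(actor, a -> new ArrayDeque<>());
    }

    void perform(String functor, String actor, String goal, String delegatee) {
        switch (functor) {
            case "AND_DECOMPOSE":
            case "OR_DECOMPOSE":
                // replace the top goal by its sub-goals in the actor's queue
                queue(actor).remove(goal);
                subGoals.getOrDefault(goal, List.of())
                        .forEach(g -> queue(actor).add(g));
                break;
            case "PASSES":
                // move the goal from the delegator's to the delegatee's queue
                queue(actor).remove(goal);
                queue(delegatee).add(goal);
                break;
            case "SATISFIES":
                // nothing to do: the actor is assumed to be working on the goal
                break;
        }
    }
}
```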


\subsection{Simulation event} \label{sec:simulation-event}
As discussed, a simulation event has three parts: a \emph{precondition}, a \emph{post-condition} and \emph{parameters}. The event \emph{precondition} expresses when the event happens, and its effects are described in the \emph{post-condition}. The \emph{parameters} are values used by the reactions composing the \emph{post-condition}.

Currently, three preconditions are supported:
\begin{itemize}
    \item \entityname{AbsoluteTime} is triggered when the simulator's clock reaches a given time value,
    \item \entityname{ActionRelativeTime} is triggered a given number of clock cycles after a specific action happens,
    \item \entityname{EventRelativeTime} is triggered a given number of clock cycles after a specific event happens.
\end{itemize}
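These three trigger conditions amount to simple predicates over the simulator clock. A hedged sketch, with timestamps as plain integers and a negative value meaning "not seen yet" (both assumptions of this sketch):

```java
class Preconditions {
    // AbsoluteTime: fires once the simulator's clock reaches a given value.
    static boolean absoluteTime(int clock, int at) {
        return clock >= at;
    }

    // ActionRelativeTime: fires a given number of clock cycles after a
    // specific action happened (actionTime < 0 means "not seen yet").
    static boolean actionRelativeTime(int clock, int actionTime, int delay) {
        return actionTime >= 0 && clock == actionTime + delay;
    }

    // EventRelativeTime: fires a given number of clock cycles after a
    // specific other event happened.
    static boolean eventRelativeTime(int clock, int eventTime, int delay) {
        return eventTime >= 0 && clock == eventTime + delay;
    }
}
```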

At runtime, when an event reaction is executing, besides the parameters that can be valued before the simulation, it may also need other contextual information from the moment the event happens, e.g., the goals and actors affected by the interrupted action. The reaction obtains this information thanks to the \mono{Parameters} section of the event definition and the parameter-binding mechanism of the simulator, which is detailed in section \ref{sec:EventBasedSimulator}.

\begin{table}
  \centering
  \begin{tabular}{|p{12.5cm}|}
    \hline
    % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
   \mono{ModifyObjectReaction} \\ \hline
         \textbf{Description} modifies dynamic property of an object.  \\
         \textbf{Parameters} \\
         \begin{tabularx}{6cm}{p{1.5cm} p{10.5cm}}
               Action &: \Set{Assign, Increase} \\
               NewValue &: map to a variable (*) holding the new value. \\
               Object &: map to a variable (*) points to the modified object. \\
               Property &: the \entityname{PropertyDescriptor} describing the modified property. \\
          \end{tabularx} \\
    \hline \hline
    \mono{ModifyPDDLReaction} \\ \hline
          \textbf{Description} modifies the generated PDDL script  \\
          \textbf{Parameters} \\
          \begin{tabularx}{6cm}{p{2cm} p{10cm}}
               Action &: \Set{Add/Remove\_Fact, Add/Remove\_Goal} \\
               Predicate &: the PDDL predicate being injected to (removed from) the PDDL script. \\
               PredicateArg &: arguments of the predicate. \\
               Negative &: indicate whether the predicate is negated. \\
          \end{tabularx} \\
    \hline \hline
    \textbf{\mono{ForAllReaction}} \\  \hline
          \textbf{Description} repeats a list of reactions where the parameters receive new values in each loop iteration according to a \emph{ForAll-domain}. The ForAll-domain provides a list of values based on the input parameters. The following ForAll-domains are currently supported in the framework:
          \begin{itemize}
               \item \mono{ProvidedGoalDomain}: goals provided by a given actor.
               \item \mono{RequestGoalDomain}: goals requested by a given actor.
               \item \mono{AssignedGoalDomain}: goals assigned to a given actor.
               \item \mono{SatisfyGoalDomain}: goals assigned to a given actor, but not yet satisfied.
               \item \mono{RequestorDomain}: actors who request a given goal.
               \item \mono{ProviderDomain}: actors who provide a given goal.
          \end{itemize} \\
    \hline \hline
    \textbf{\mono{ReplanReaction}} \\ \hline
          \textbf{Description} forces the event simulator to re-plan. \\
    \hline
  \end{tabular}
  \caption{Built-in event reactions}
  \label{tbl:reactions}
\end{table}

\begin{example}
Let us consider a solution in which an actor \emph{Alice} is assigned the goal \emph{Shutdown\_electricity}, but she fails after trying to accomplish the goal for 5 units of time, and no longer has the ability to fulfill it. Unfortunately, this goal still needs to be fulfilled, so we need to re-plan to find a new solution in a context where \emph{Alice} no longer provides this goal. The definition of the event describing this situation is illustrated in figure \ref{fig:sat_fail_event}. When the simulator encounters the action \mono{(SATISFIES Alice Shutdown\_electricity)}, this event is triggered and the parameters \emph{Actor} and \emph{Goal} are set to \mono{Alice} and \mono{Shutdown\_electricity}, respectively. Afterward, the post-condition is applied, which removes the fact \mono{(CAN\_PROVIDE Alice Shutdown\_electricity)} from the new PDDL script before re-planning.
\end{example}

\begin{figure}[h]
    \textbox{
        \begin{tabbing}
        AA  \=BB \=CCCCCCCC \=Z \kill
        Event: \mono{SatisfactionFailure} \\
        Precondition: \\
            \>\mono{ActionRelativeTime} \\
            \>\>Action \> = \mono{SATISFIES} \\
            \>\>Actor \> = ?\\
            \>\>Goal \> = \mono{Shutdown\_electricity}\\
            \>\>Time   \> = 5 \\
        Parameters: \\
            \>\>Actor \> = ? \\
            \>\>Goal  \> = ? \\
        Post-Condition: \\
            \>\mono{ModifyPDDLReaction}\\
            \>\>Action \> = \mono{REMOVE\_FACT} \\
            \>\>Predicate \> =  \mono{CAN\_PROVIDE}\\
            \>\>PredicateArg \> = \mono{\{Actor\} \{Goal\}}\\
            \>\mono{Replan}
        \end{tabbing}
    }
    \caption{Satisfaction Failure event}
    \label{fig:sat_fail_event}
\end{figure}

\subsection{Event-based simulator} \label{sec:EventBasedSimulator}
In this section, we discuss the \emph{event-based simulator} (hereafter referred to as the \emph{event simulator}), which is built on top of the plain simulator presented previously. The event simulator hooks into the plain simulator to check, in every clock cycle, whether events happen. The event simulation algorithm is listed in figure \ref{algo:event-simul}.

\begin{figure}
    \centering
    \fbox{
    \begin{algo}
        \State \textkeyword{function} event\_simulate(model: ODM, sol: Solution)
        \Var
            \State \var{node, root, child}: SimulationNode;
            \State \var{Q}: Queue of SimulationNode;
            \State \var{new\_solutions}: list of Solution;
        \EndVar
        \Begin
            \State \var{root} = \keyword{new} SimulationNode(\var{sol});
            \State \var{Q}.enqueue(\var{root}); \label{event-simul:enqueue_first_node}
            \While {Q \keyword{is not} empty} \label{event-simul:while}
                \State \var{node} = \var{Q}.dequeue();
                \State \var{node}.createPlainSimulator(); \Comment{create a plain simulator for the solution associated with the processing node. The created simulator is accessed through \emph{Simulator} property of \var{node}}
                \State Hook event processor into \var{simulator};
                \State \var{node}.Simulator.simulate(\var{node}.Solution);
                \If {Need replan}
                    \State Capture state of world into the \var{model}'s post-PDDL command list;
                    \State \var{new\_solutions} = generateSolutions(\var{model}); \label{event-simul:replan}
                    \If {\var{new\_solutions} \keyword{is not} empty}
                        \ForEach{\var{s}: Solution \textkeyword{in} \var{new\_solutions}}
                            \State \var{child} = \keyword{new} SimulationNode(\var{s});
                            \State \var{node}.addChild(\var{child});
                            \State \var{Q}.enqueue(\var{child}); \label{event-simul:enqueue_child_node}
                        \EndFor
                    \Else
                        \State \var{node}.isDanglingNode = \true; \Comment{This node needs to replan, but no solution found. Then it is considered as \emph{dangling node}.}
                    \EndIf
                \Else
                    \State \var{node}.isCompleteNode = \true;
                \EndIf
            \EndWhile
            \Return \var{root};
        \End
    \end{algo}}
    \caption{The event simulation algorithm.}
    \label{algo:event-simul}
\end{figure}

The algorithm starts by creating a structure called a \emph{simulation node} (figure \ref{fig:simul_node}), which has references to the corresponding solution, the PDDL script that generated this solution, the ODM that generated the PDDL script, and the plain simulator that simulates this solution. The created node is put into the queue \var{Q} (line \ref{event-simul:enqueue_first_node}).

In the While loop (line \ref{event-simul:while}), the event simulator picks up a \emph{simulation node} and invokes the plain simulator to simulate the associated solution. When an event precondition is satisfied, the event simulator binds each unassigned parameter in the \entityname{Parameters} section (c.f. section \ref{sec:ODM}) to an appropriate value extracted from the solution's action interrupted by the event. The binding is done by name matching. Then, all reactions in the event's post-condition are applied.

If one of the reactions causes the solution to fail to continue, the event simulator captures the current state of the world and passes it to the \emph{Solution Generator} for re-planning. Newly generated solutions can be filtered to improve performance as well as to prevent solution explosion. The filter criterion is based on a simple evaluator, by which only ``good enough'' solutions are returned. By default, the filter does nothing; users have to choose an appropriate simple evaluator and a threshold value. A solution is considered good enough if the value returned by the filter evaluator is greater than or equal to the threshold. Afterward, the event simulator creates new simulation nodes corresponding to the newly generated solutions (line \ref{event-simul:replan}). These nodes are enqueued (line \ref{event-simul:enqueue_child_node}) and later processed by the event simulator.
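The threshold filter described above is essentially the following. In this sketch the simple evaluator is passed as a plain function and the solution type is generic; the names are illustrative, not the framework's API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToDoubleFunction;

class SolutionFilter {
    // Keep only solutions whose value under the chosen simple evaluator
    // is greater than or equal to the threshold.
    static <S> List<S> filter(List<S> solutions,
                              ToDoubleFunction<S> evaluator,
                              double threshold) {
        List<S> kept = new ArrayList<>();
        for (S s : solutions) {
            if (evaluator.applyAsDouble(s) >= threshold) {
                kept.add(s);
            }
        }
        return kept;
    }
}
```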

\begin{figure}
\subfigure[Simulation Tree] {
        \centering
        \includegraphics[width=0.5\textwidth]{figures/simulated_tree}
	    \label{fig:simul_tree}
	}
	\subfigure[Simulation Node] {
        \centering
        \includegraphics[width=0.3\textwidth]{figures/simulated_node}
		\label{fig:simul_node}
	}
    \caption{The simulation tree structure returned by the event simulator \subref{fig:simul_tree} and the structure of a simulation node \subref{fig:simul_node}.}
    \label{fig:simulation_tree}
\end{figure}

Another point to consider is that the AI planner does not always find and return a solution. Because most planners choose a local search approach, there is no guarantee that the search will terminate with a solution; nor is that sufficient evidence to conclude that the problem is unsolvable. To deal with such situations, we set a time limit for the planner: we assume that if the AI planner does not find any solution within that period of time, then the problem is unsolvable.
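Such a time limit can be imposed by running the planner call under a timeout. A sketch using Java's executor framework, where the planner itself is a stand-in \mono{Callable} (an assumption of this sketch):

```java
import java.util.Optional;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class BoundedPlanner {
    // Run the planner with a time limit; if it does not return a plan
    // within the limit, treat the problem as unsolvable (empty result).
    static Optional<String> planWithTimeout(Callable<String> planner,
                                            long seconds) {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            return Optional.ofNullable(
                    ex.submit(planner).get(seconds, TimeUnit.SECONDS));
        } catch (TimeoutException | InterruptedException
                 | ExecutionException e) {
            return Optional.empty();
        } finally {
            ex.shutdownNow();  // stop the planner thread if still running
        }
    }
}
```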

If a \emph{simulation node} needs to re-plan but no solution is found, we call this node a dead-end or \emph{dangling node}. A \emph{dangling node} means that if we follow this solution and the given set of events happens, we will never achieve all the top goals. On the other hand, we call a simulation node a leaf node if its associated solution completes successfully.

At the end of the simulation process, the event simulator returns the root node of the simulation tree (as depicted in figure \ref{fig:simul_tree}) to the caller, which in turn passes it to the simulating-based evaluators and to the \emph{Visualizer} to display the final result on the screen.

\section{Simple Evaluators}
\subsection{Overall Benefit/Cost Quotient} \label{sec:BCQ}
The benefit/cost quotient (BCQ) is widely adopted in many fields, especially economics. The objective is to gain as much benefit per unit of cost as possible. In our organizational planning problem, the BCQ could be the satisfaction degree of all goals over the effort (or time) that needs to be paid to achieve those goals. In some situations, users may want to focus only on the cost (or the benefit) coming from goal satisfaction.

Normally, each goal has its own threshold level of satisfaction. A goal is considered satisfied if it is fulfilled with a level of satisfaction at least equal to the threshold value. In practice, many actors may be able to satisfy a given goal, but their abilities to satisfy it differ.

\begin{example}
Consider an example where two students, Alice and Bob, want to satisfy the goal \emph{``passing the Math examination''}. Alice accomplishes the exam in 1 hour with grade 8 (10 is the maximum) while Bob needs 2 hours for grade 9. Obviously, the latter student satisfies the goal at a higher level than the former, but both of them pass the exam since their grades are greater than 5. Alice, however, needs less effort to complete the exam than Bob. We can then say that the benefit and cost of actor (student) Alice for completing this goal are 8 and 1, respectively, and those of actor Bob are 9 and 2.

Therefore, the efficiency (BCQ) of Alice ($8/1 = 8$) is theoretically better than that of Bob ($9/2 = 4.5$). Nevertheless, comparing BCQs does not make sense in this case because the final grade is, of course, more important regardless of how much effort each student has to pay.
\end{example}

To evaluate the BCQ of a solution, the BCQ evaluator needs to know the benefit and cost of each action in the solution. In this evaluator, we consider only satisfaction actions. The benefit of the action $SATISFIES(\actor, \goal)$ determines how well the actor \actor\ fulfills the goal \goal. Similarly, the cost of this action is how much effort the actor \actor\ has to pay for fulfilling goal \goal. The domains of the effort and benefit could thus be \Set{low, medium, high, veryhigh} and \textit{\{average, good, verygood, excellent\}}, respectively. The registered entry of BCQ in the event model of the ODM is then the tuple \Seq{BCQ, \Set{\Seq{\Capabilities, BENEFIT}, \Seq{\Capabilities, EFFORT}}}.
We employ a \emph{normalize function}, \normfunc, which maps these enumerated values to numeric ones, as in formula \ref{eq:normfunction}.
\begin{equation}\label{eq:normfunction}
    \small\setlength{\parskip}{0cm}
    \begin{array}{ll}
        \normfunc(effort) = \left \{
            \begin{array}{ll}
                25, & low, \\
                50, & medium, \\
                75, & high, \\
                100, & veryhigh
            \end{array} \right.
        &, \normfunc(benefit) = \left \{
            \begin{array}{ll}
                25, & average, \\
                50, & good, \\
                75, & verygood, \\
                100, & excellent
            \end{array} \right.
    \end{array}
\end{equation}
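Rendered in Java, the normalize function is a simple lookup over the two enumerations of the formula above; effort and benefit values share the same four-step numeric scale:

```java
class Norm {
    // norm(): maps the enumerated effort/benefit values to numbers,
    // following the mapping of the normalize function above.
    static int norm(String v) {
        switch (v) {
            case "low":      case "average":   return 25;
            case "medium":   case "good":      return 50;
            case "high":     case "verygood":  return 75;
            case "veryhigh": case "excellent": return 100;
            default:
                throw new IllegalArgumentException("unknown value: " + v);
        }
    }
}
```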

The BCQ of a solution is computed as follows:
\begin{equation}\label{eq:bcq}
    BCQ(\Solution) =  \large\frac{\sum_{\action \in \Solution}\normfunc(Bef(\action))}{\sum_{\action \in \Solution}\normfunc(Cost(\action))}
\end{equation}
Where:
\begin{itemize}
    \item $ Bef(\action) = \left \{ \begin{array}{c p{6cm}}
                    $\PropValue(cap, \mono{BENEFIT}),$ & $ \action = SATISFIES(\actor_i, \goal_k) \wedge cap = \Seq{\actor_i, \goal_k} \in \Capabilities $ \\
                    0, & otherwise
                    \end{array} \right.  $
    \item $ Cost(\action) = \left \{ \begin{array}{c p{6cm}}
                    $\PropValue(cap, \mono{EFFORT}),$ & $ \action = SATISFIES(\actor_i, \goal_k) \wedge cap = \Seq{\actor_i, \goal_k} \in \Capabilities $ \\
                    0, & otherwise
                    \end{array} \right. $
\end{itemize}


Figure \ref{algo:BCQ} presents the algorithm of the BCQ evaluator.

\begin{figure}
    \fbox{
    \begin{algo}
        \State \textkeyword{function} BCQ(model: ODM, actions: list of PddlAction)
        \Var
            \State \var{benefitProp, costProp}: PropertyDescriptor;
            \State \var{act}: Actor;  \var{goal}: Goal; \var{cap}: Capability;
            \State \var{cost, benefit}: float;
        \EndVar
        \Begin
            \State \var{cost} = 0; \var{benefit} = 0;
            \State \var{benefitProp} = \var{model}.getDescriptor(Capability, BENEFIT); \label{bcq:1}
            \State \var{costProp} = \var{model}.getDescriptor(Capability, EFFORT); \label{bcq:2}
            \ForEach{\var{a}: PddlAction \textkeyword{in} actions} \label{bcq:3}
		        \If {\var{a}.Functor = SATISFIES}
			        \State \var{act} = \var{model}.getActorByName(\var{a}.getArgument(0));
        			\State \var{goal} = \var{model}.getGoalByName(\var{a}.getArgument(1));
			        \State \var{cap} = \var{act}.findCapability(\var{goal});
        			\If {\var{cap} \keyword{is not} null}
				        \State \var{cost} += norm(\var{costProp}.getValue(\var{cap}));
				        \State \var{benefit} += norm(\var{benefitProp}.getValue(\var{cap}));
		            %\Else
            			%\State \var{cost} += DEFAULT\_COST;
                    \EndIf
                \EndIf
            \EndFor
            \State \Return \var{benefit/cost};
        \End
    \end{algo}}
    \caption{The Benefit/Cost quotient algorithm.}
    \label{algo:BCQ}
\end{figure}

This algorithm is worth some comments. Lines \ref{bcq:1} and \ref{bcq:2} find the \entityname{PropertyDescriptor}s of \mono{BENEFIT} and \mono{EFFORT}, which are attached to actors' \entityname{Capability} as mentioned above. In the FOR loop (line \ref{bcq:3}), only \mono{SATISFIES} actions are considered. The algorithm extracts the actor and goal from the action and looks for the corresponding capability. Then it extracts the cost and benefit from this capability and accumulates them.

This benefit/cost model could be extended to accept many types of cost as well as many kinds of benefit. Cost and benefit are then no longer scalar values but vectors. In this case, each element of a vector has a different contribution factor, reflecting its level of importance. More importantly, the elements of the cost vector usually belong to different domains or measurement units. For instance, in a cost vector consisting of the two elements \Seq{effort, duration}, the \emph{effort} could be one of \Set{low, medium, high, veryhigh}, while the \emph{duration} could be the number of minutes needed to complete the task. Therefore, we need a \emph{normalize} function that maps these heterogeneous values to a standard domain. The following is an alternative to formula \ref{eq:bcq} that accepts cost/benefit vectors.

\begin{equation}\label{fm:bcq-ex-vector}
    \overrightarrow{BCQ}(\Solution) = \frac{\sum_{\action \in \Solution}\overrightarrow{Bef}(\action)}{\sum_{\action \in \Solution}\overrightarrow{Cost}(\action)}
\end{equation}

or in the scalar form

\begin{equation}\label{fm:bcq-ex-scalar}
    \overline{BCQ}(\Solution) = \frac
                {\sum_{\action \in \Solution}\sum_{i=1}^{n}\alpha_i \cdot \normfunc(Bef_i(\action))}
                {\sum_{\action \in \Solution}\sum_{i=1}^{m}\beta_i \cdot \normfunc(Cost_i(\action))}
\end{equation}

where:
\begin{itemize}
    \item $\alpha_i, \beta_i$: the contribution factor of the $i^{th}$ element of benefit and cost vectors, respectively.
    \item \emph{\normfunc}: the normalization function, needed since the cost and benefit values are heterogeneous.
    \item $Bef_i(\action), Cost_i(\action)$: the $i^{th}$ element of benefit and cost vectors, respectively.
\end{itemize}
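To make the normalization concrete, the following is a minimal Java sketch of the scalar form of formula \ref{fm:bcq-ex-scalar}. The mapping of the ordinal effort domain to $[0,1]$ and the 120-minute cap on durations are purely illustrative assumptions, as are all class and method names.

```java
import java.util.Map;

public class ScalarBCQ {
    // Hypothetical normalization: maps ordinal effort levels to [0, 1].
    static final Map<String, Double> EFFORT_NORM =
            Map.of("low", 0.25, "medium", 0.5, "high", 0.75, "veryhigh", 1.0);

    // Hypothetical normalization of a duration in minutes against an assumed
    // maximum of 120 minutes.
    static double normDuration(double minutes) {
        return Math.min(minutes / 120.0, 1.0);
    }

    // Scalar BCQ for one action: weighted sum of normalized benefit elements
    // over weighted sum of normalized cost elements, with contribution
    // factors alphas (benefit) and betas (cost).
    static double bcq(double[] normBenefits, double[] alphas,
                      double[] normCosts, double[] betas) {
        double benefit = 0, cost = 0;
        for (int i = 0; i < normBenefits.length; i++)
            benefit += alphas[i] * normBenefits[i];
        for (int i = 0; i < normCosts.length; i++)
            cost += betas[i] * normCosts[i];
        return benefit / cost;
    }

    public static void main(String[] args) {
        // One action: benefit vector (0.8), cost vector (effort=medium, duration=60min).
        double[] benefits = {0.8};
        double[] alphas = {1.0};
        double[] costs = {EFFORT_NORM.get("medium"), normDuration(60)};
        double[] betas = {0.5, 0.5};
        System.out.println(bcq(benefits, alphas, costs, betas)); // 0.8 / 0.5 = 1.6
    }
}
```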

\subsection{Actor criticality analysis}

The solution can be considered as a social network. In this network, an actor is a super node consisting of many goal nodes, and goal delegations among actors create the links of the network. The criticality of an actor measures how the social network is affected when that actor is removed or leaves the network. This notion is closely connected to that of the \emph{resilience} of a network \cite{BRYL-GIOR-MYLOP-09-REJ}. In practice, a social network might collapse if its highest-degree nodes are removed. Therefore, studying the criticality of actors in a solution is quite important for a self-reconfigurable STS, since such vulnerabilities can occur in real life.

In our framework, we adopt the idea of criticality analysis in \cite{BRYL-GIOR-MYLOP-09-REJ}, which is concisely summarized as follows.
\begin{description}
\item \emph{\textbf{Leaf goal satisfaction dimension}} All the leaf goals assigned to an actor become unsatisfied when that actor is removed. Let an integer \goalweight\ be the weight of goal \goal; intuitively, \goalweight\ measures the importance of \goal\ for the system, as defined by a human designer. The criticality of actor \actor\ in a solution/configuration \Solution, along the leaf goal satisfaction dimension, is defined as:
\begin{equation}
    cr_{\goal}(\actor, \Solution) = \frac{\sum_{SATISFIES(\actor, \goal) \in \Solution}\omega(\goal)}
                                        {\sum_{SATISFIES(x, \goal) \in \Solution}\omega(\goal)}
\end{equation}

where \emph{x} is an actor and \goal\ is a goal.

\item \emph{\textbf{Dependency dimension}} All the ingoing and outgoing dependencies for goals are removed together with the actor when it is removed. This means that a number of delegation chains become broken, and the goals delegated along these chains cannot reach the actor at which they would be satisfied. Hence, the fraction of ``lost'' dependencies (ingoing or outgoing) when actor \actor\ is removed from the social network constructed in accordance with solution \Solution\ is:
\[
    cr_{in}(\actor, \Solution) = \frac{\sum_{PASSES(\actor', \actor, \goal) \in \Solution}\omega(\goal)}
                                        {\sum_{PASSES(x, y, \goal) \in \Solution}\omega(\goal)},
\]
\[
    cr_{out}(\actor, \Solution) = \frac{\sum_{PASSES(\actor, \actor', \goal) \in \Solution}\omega(\goal)}
                                        {\sum_{PASSES(x, y, \goal) \in \Solution}\omega(\goal)},
\]
\begin{equation}
    cr_{dep}(\actor, \Solution) = cr_{in}(\actor, \Solution) + cr_{out}(\actor, \Solution)
\end{equation}

where \emph{\actor', x, y} are actors and \goal\ is a goal.

\item \emph{\textbf{Actor criticality with respect to a set of goals}} It is also important to quantify the impact of an actor's removal on the top-level goals of an STS or, in general, on any predefined set of non-leaf goals.
Let \goaldiraff\ be the set of goals directly affected by the removal of actor \actor\ from solution \Solution:
\begin{flalign*}
    G_{dir\_aff}(\actor, \Solution) = & \{ \goal : SATISFIES(\actor, \goal) \in \Solution \ \vee \\
    & \exists\actor' . \, PASSES(\actor', \actor, \goal) \in \Solution \ \vee \\
    & \exists\actor'' . \, PASSES(\actor, \actor'', \goal) \in \Solution \}
\end{flalign*}

Let \goalaff\ be the set of goals affected by the removal of actor \actor\ in solution \Solution, derived from the set \goaldiraff. \goalaff\ is constructed by goal reasoning, which infers the (un)satisfiability of top goals by propagating (un)satisfaction evidence through the goal graph \cite{BRYL-GIOR-MYLOP-09-REJ}.

Let \goalref\ be the set of ``reference'' goals, i.e., the top-level goals or any predefined subset of system goals with respect to which the criticality of an actor in a solution is evaluated \cite{BRYL-GIOR-MYLOP-09-REJ}. The criticality of \actor\ in \Solution\ with respect to \goalref\ is then defined as follows:
\begin{equation}
    cr(\actor, \Solution, \Goals_r) = \frac{\sum_{\goal \in \Goals_r \cap \Goals_{aff}(\actor, \Solution)}\omega(\goal)}{\sum_{\goal \in \Goals_r}\omega(\goal)}
\end{equation}
\end{description}

Based on the above three dimensions, we introduce the concept of \emph{overall actor criticality} for a specific actor \actor\ as:
\begin{equation}
    cr(\actor, \Solution) = \omega_1cr_{\goal}(\actor, \Solution) + \omega_2cr_{dep}(\actor, \Solution) + \omega_3cr(\actor, \Solution, \Goals_r)
\end{equation}
where $\omega_i, i=\overline{1,3}$ are the contribution factors of the three types of criticality measurement. Since the actors in one solution have different criticality values, we are also interested in the variance of actor criticality within a solution. The variance is computed as:
\begin{equation}
    \Delta cr(\Solution) = \frac{1}{N}\sum_{\actor \in \Solution}(cr(\actor, \Solution) - \overline{cr})^2
\end{equation}
where
\[
    \overline{cr} = \frac{1}{N}\sum_{\actor \in \Solution}cr(\actor, \Solution)
\]
and $N$ is the number of actors in \Solution.

We consider this variance (or the corresponding standard deviation) as another metric for evaluating solutions: the lower the variance of actor criticality in a solution, the more \emph{resilient} the solution is.
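As an illustration, the variance above can be computed by the following Java sketch; the criticality values and the class name are hypothetical.

```java
public class CriticalityVariance {
    // Population variance of the overall actor criticality values cr(a, S)
    // of the N actors in a solution.
    static double variance(double[] cr) {
        double mean = 0;
        for (double c : cr) mean += c;
        mean /= cr.length;
        double var = 0;
        for (double c : cr) var += (c - mean) * (c - mean);
        return var / cr.length;
    }

    public static void main(String[] args) {
        // Hypothetical overall criticality of three actors in one solution.
        double[] cr = {0.2, 0.5, 0.8};
        System.out.println(variance(cr)); // mean 0.5, variance approximately 0.06
    }
}
```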

\emph{\textbf{Another approach to criticality analysis}} Thanks to the event-based simulator, we can explore another approach to evaluating the criticality of an actor. We start the simulation on the given solution and then trigger an event that removes an actor, in order to see how the solution adapts. If the solution fails and cannot be replanned successfully, the actor is highly critical in the solution; otherwise, it is not. We can also remove some particular parameters of an actor (e.g., the capability of satisfying a certain goal) instead of removing the actor itself. The simulation can also be run on several solutions to see how resilient each of them is with respect to the same event.

In this fashion, we can identify the criticality of an actor within a specific period of time during the execution of the given solution.


%\subsection{Risk analysis}

\section{Simulation-based Evaluators}


\subsection{Solution execution time}
The solution execution time is an interesting criterion in situations where the time to achieve the top goal matters most, regardless of the cost and benefit obtained from the goal.

As aforementioned, actions in a solution can be performed in parallel. Hence, there is no correlation between solution length and solution execution time; an accurate approach is instead to simulate the given solution. Depending on which simulator is applied, there are two ways to calculate the solution execution time. If the solution simulator is used, the execution time is the value returned by the simulator. If the event simulator is employed, a simulation tree is returned. Consider the simulation tree as a directed weighted graph in which the weight of an edge is the simulation time of its target node; if the target is a dangling node, the weight is set to infinity. The execution time is then the simulation time at the root node plus the cost of the shortest path from the root to a leaf node.

Figure \ref{algo:exec-time} presents the algorithm that calculates the execution time of a solution based on the simulation tree returned by the event simulator. The input of the algorithm is a simulation node, and the idea is to use recursion. The result is first set to the simulation time of the input node. If the node has any children, the algorithm recursively calls itself on each child to find the smallest value; it then adds this value to the initial result and returns the sum to the caller.

\begin{figure}
    \centering
    \fbox{
    \begin{algo}
        \State \textkeyword{function} executionTime(node: SimulationNode)
        \Var
            \State \var{time, child\_time, temp}: integer;
        \EndVar
        \Begin
            \State \var{time} = \var{node}.Solution.getSimulationTime(); \Comment{get the simulation time of the associated solution.}
            \If {\var{node}.Children \keyword{is not} empty}
                \State \var{child\_time} = \mono{MAX\_INT};
                \ForEach{\var{child}: SimulationNode \keyword{in} \var{node}.Children}
                    \State \var{temp} = executionTime(\var{child});
                    \If {$temp < child\_time$}
                        \State \var{child\_time} = \var{temp};
                    \EndIf
                \EndFor
                \State \var{time} += \var{child\_time};
            \EndIf
            \State \Return \var{time};
        \End
    \end{algo}}
    \caption{The solution execution time algorithm.}
    \label{algo:exec-time}
\end{figure}
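For illustration, the pseudocode of figure \ref{algo:exec-time} can be rendered in Java roughly as follows. \mono{SimulationNode} is reduced here to a minimal stand-in for the simulation tree node, so all names are assumptions rather than the framework's actual API.

```java
import java.util.ArrayList;
import java.util.List;

public class ExecutionTime {
    // Minimal stand-in for a node of the event simulator's simulation tree.
    static class SimulationNode {
        int simulationTime;                           // time of the associated solution
        List<SimulationNode> children = new ArrayList<>();
        SimulationNode(int t) { simulationTime = t; }
    }

    // Execution time = the node's own simulation time plus the cheapest
    // continuation among its children (the shortest path to a leaf).
    static int executionTime(SimulationNode node) {
        int time = node.simulationTime;
        if (!node.children.isEmpty()) {
            int childTime = Integer.MAX_VALUE;
            for (SimulationNode child : node.children)
                childTime = Math.min(childTime, executionTime(child));
            time += childTime;
        }
        return time;
    }

    public static void main(String[] args) {
        SimulationNode root = new SimulationNode(10);
        root.children.add(new SimulationNode(5));   // cheaper branch
        root.children.add(new SimulationNode(8));
        System.out.println(executionTime(root));    // 10 + min(5, 8) = 15
    }
}
```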

\section{Early Evaluators}

\subsection{Local Gain of Benefit/Cost Quotient}\label{sec:LG-BCQ}
The \emph{Local Gain of Benefit/Cost Quotient} (LG-BCQ) evaluator computes the gain of each action generated during the planning process. Each time the AI planner tries a move (or action), it calculates the LG-BCQ value for this move as specified in the PDDL script. This value is then accumulated and used as an optimization metric for the planner. Therefore, the planner always tries to generate a solution with the maximal accumulated LG-BCQ.

Unlike the overall BCQ (section \ref{sec:BCQ}), which only measures the cost and benefit of goal satisfaction, LG-BCQ takes other actions into account as well. Overall, three different kinds of actions are considered:
\begin{itemize}
    \item \emph{Goal satisfaction}: \satcost{ik} and \satbenefit{ik} denote, respectively, the cost and benefit of satisfying goal $\goal_k$ by actor $\actor_i$. These values are the same as those used in the overall BCQ.
    \item \emph{Goal decomposition}: \decomposecost{ik} and \decomposebenefit{ik} denote the cost and benefit of decomposing goal $\goal_k$ by actor $\actor_i$.
    \item \emph{Goal delegation}: \delegatecost{ik} and \delegatebenefit{ik} denote the cost and benefit of delegating goal $\goal_k$ by actor $\actor_i$.
\end{itemize}

The LG-BCQ is then computed as in formula \ref{eq:lq-bcq}:
\begin{equation}\label{eq:lq-bcq}
    \begin{split}
    LG\text{-}BCQ(\Solution) = & \sum_{SATISFIES(\actor_i, \goal_k) \in \Solution}\frac{\satbenefit{ik}}{\satcost{ik}} + \\
                        & \sum_{DECOMPOSES(\actor_i, \goal_k, \goal_{k1},\ldots, \goal_{kn}) \in \Solution}\frac{\decomposebenefit{ik}}{\decomposecost{ik}} + \\
                        & \sum_{PASSES(\actor_i, \actor_j, \goal_k) \in \Solution}\frac{\delegatebenefit{ik}}{\delegatecost{ik}}
    \end{split}
\end{equation}
where
\begin{itemize}
    \item $\satbenefit{ik} = \frac{1}{n} \cdot \sum_{j=1}^{n}\normfunc(Bef_j(SATISFIES(\actor_i, \goal_k)))$
    \item $\satcost{ik} = \frac{1}{m} \cdot \sum_{j=1}^{m}\normfunc(Cost_j(SATISFIES(\actor_i, \goal_k)))$
    \item \decomposebenefit{ik}, \decomposecost{ik}, \delegatebenefit{ik}, and \delegatecost{ik} are assigned values by domain experts. These values should also be normalized like those of the SATISFIES action.
\end{itemize}

Since LG-BCQ is an early evaluator, it is hard-coded in the PDDL script, which has limited support for function invocation. Therefore, in our implementation, all cost values are normalized first, before they are scripted.
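A small hypothetical example of the accumulation in formula \ref{eq:lq-bcq}: the normalized benefit/cost pairs below are invented purely for illustration, as is the class name.

```java
public class LGBCQ {
    // One term of the LG-BCQ sum: normalized benefit over normalized cost
    // for a single action of the solution.
    static double quotient(double normBenefit, double normCost) {
        return normBenefit / normCost;
    }

    public static void main(String[] args) {
        // Hypothetical solution with one SATISFIES, one DECOMPOSES and one
        // PASSES action; the values are assumed to be normalized before
        // scripting, as the PDDL encoding requires.
        double lgbcq = quotient(0.9, 0.3)   // SATISFIES(a1, g1)
                     + quotient(0.4, 0.2)   // AND_DECOMPOSES(a1, g0, g1, g2)
                     + quotient(0.5, 0.5);  // PASSES(a1, a2, g2)
        System.out.println(lgbcq); // 3 + 2 + 1, approximately 6.0
    }
}
```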

\subsection{Actor Budget Constraint} \label{sec:ABC}
The \emph{Actor Budget Constraint} (ABC) evaluator prevents situations in which an actor is assigned so much work that its limitations are exceeded. Concretely, suppose that each actor has a budget for satisfying goals, and that whenever the actor fulfills a goal, the budget decreases by a certain amount. Once this budget is too low to fulfill any further goal, the actor is exhausted and should not be assigned goals any more. The ABC objective is to avoid such a situation.

The implementation of ABC is quite simple. At the beginning, each actor has its own maximum budget value. For each energy-consuming action (\mono{AND/OR\_DECOMPOSE, PASSES, SATISFIES}), we append a precondition stating that the actor's budget must be greater than the required cost of the action, and in the action's effect we decrease the actor's budget by the action's cost. The action's cost can be retrieved as described in section \ref{sec:LG-BCQ}.
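Schematically, for each energy-consuming action \action\ performed by actor \actor, the encoding adds a guard and an update of the following form (our notation, not the literal PDDL syntax):
\[
    \mathit{pre}: \ budget(\actor) \geq Cost(\action), \qquad
    \mathit{eff}: \ budget(\actor) \leftarrow budget(\actor) - Cost(\action)
\]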

To this extent, this early evaluator can be seen as an implementation of the \emph{local optimization} of the approach of Bryl et al. \cite{BRYL-GIOR-MYLOP-09-REJ}, but applied during the planning process.

