This section presents the results of the performance evaluation of our approach. First, we measured the performance of the exhaustive search, both as a point of comparison and to find the best solution (i.e., the best architectural instance for the current situation and the user's quality requirements) against which the solutions produced by our approach are compared. Then, we measured the performance of our approach relative to the best solution yielded by the exhaustive search.

We conducted three experiments to assess how accurate and fast our approach is compared to other search techniques (speed and accuracy are essential properties for evaluating dynamic architectural selection approaches). The first experiment measured the speed of our approach in finding (near) optimal architectural instances and compared the result with that of the random search (because the genetic algorithm is fundamentally a randomized algorithm, it must be compared against the random search~\cite{DBLP:journals/infsof/HarmanJ01}). The second experiment measured the accuracy of our approach for a small number of generations and compared the result with that of the hill climbing search. The third experiment compared the performance of our approach, the random search, and the hill climbing search for a large number of architectural decision variables.

The experimental methodology is as follows. We selected 11 projects, as shown in Table~\ref{tab:subjects}, and analyzed their requirements and architectures to extract quality variables, architectural decision variables, situation variables, and situation evaluation functions. In addition, we designed an SIG for each project. We implemented our approach in Java and performed all tests on a desktop with 2 GB of RAM and a 2 GHz Intel Pentium 4 processor.
We believe that this experimental setting can verify the effectiveness and the scalability of our approach.

\begin{table}
 \centering
 \caption{Data set used in our experiments. ``ADVs'' stands for architectural decision variables and $\delta$ represents the average number of architectural decision values in an architectural decision variable. UCT, EMS, KMA, UWF, and GLI are software systems funded by the Korean government (UCT = u-City system, EMS = emergency management system, KMA = weather forecast service system, UWF = unified welfare system, and GLI = human resource management system).}
 \begin{tabular}{|c|c|c|}
 \hline
 {\bf Subject} & {\bf \# of ADVs} & {\bf $\delta$}  \\ \hline \hline
 DPA~\cite{dpa} & 7 & 3.7 \\ \hline
 MEDBI~\cite{medbi} & 5 & 4.5 \\ \hline
 IFLA~\cite{ifla} & 9 & 5.5 \\ \hline
 EMUL~\cite{emul} & 6 & 5.5 \\ \hline
 OASIS~\cite{oasis} & 12 & 5.3 \\ \hline
 DELO~\cite{delo} & 14 & 4.8 \\ \hline
 UCT & 19 & 4.5 \\ \hline
 EMS & 27 & 5 \\ \hline
 KMA & 32 & 5.5 \\ \hline
 UWF & 36 & 3.9 \\ \hline
 GLI & 41 & 4.6 \\ \hline
 \end{tabular}
 \label{tab:subjects}
\end{table}



\subsection{Baseline}
\label{sec:baseline}

As a baseline, we conducted an experiment that measures the performance of the exhaustive search. In this experiment, we measured not only the time elapsed to exhaustively search every combination of architectural decision variables (using depth-first search~\cite{depthfirst}) but also the best chromosome, which is used later for evaluating the performance of our approach. As stated in Section \ref{sec:selproblem}, the dynamic architectural selection problem is a combinatorial optimization problem with complexity $O(\delta^n)$; the size of the search space therefore increases exponentially with the number of architectural decision variables.
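For illustration, this baseline amounts to enumerating all value combinations and keeping the one with the highest utility. The sketch below is our own rendering of that idea; the names `decision_values` and `fitness` are illustrative, not identifiers from the actual implementation (which was written in Java):

```python
from itertools import product

def exhaustive_search(decision_values, fitness):
    """Enumerate every combination of architectural decision values.

    decision_values: list of lists; decision_values[i] holds the
    alternatives for the i-th architectural decision variable.
    fitness: maps a combination (tuple) to its utility value.
    Complexity is O(delta^n): delta alternatives per variable, n variables.
    """
    best, best_fit = None, float("-inf")
    for combination in product(*decision_values):  # delta^n combinations
        f = fitness(combination)
        if f > best_fit:
            best, best_fit = combination, f
    return best, best_fit
```

With $\delta \approx 5$ alternatives per variable, doubling $n$ squares the number of combinations, which is why the elapsed time grows as reported below.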

As shown in Figure \ref{fig:exp1}, the exhaustive search covers the problem space within one second when $ADV < 10$ (MEDBI, EMUL, DPA, and IFLA). However, the elapsed time increases exponentially and eventually exceeds one hour (DELO). Moreover, for UCT, EMS, KMA, UWF, and GLI, the elapsed time exceeds 24 hours.

The exhaustive search technique is therefore not appropriate for the dynamic architectural selection problem with a large number of architectural decision variables, because users cannot be expected to wait for the search to finish every time the current situation or the set of quality requirements changes. The remainder of this section presents the performance evaluation of our approach against this baseline.


\begin{figure}[hbtp]
\centering \includegraphics[width=0.45\textwidth]{fig/exp1.eps} \caption{Performance of the exhaustive search. Note that the Y-axis is logarithmic. The search times of UCT, EMS, KMA, UWF, and GLI are not shown because they exceed 24 hours.} \label{fig:exp1}
\end{figure}


\subsection{Performance of Our Approach}
\label{sec:performance}

On the basis of the result of the previous experiment described in Section~\ref{sec:baseline} (i.e., the best chromosome found by the exhaustive search), we conducted three performance experiments: (1) speed comparison with the random search (RS), (2) accuracy comparison with the hill climbing search (HC), and (3) accuracy and speed comparison with RS and HC for a large number of architectural decision variables.

The first experiment measures the elapsed time and the number of generations our approach requires to find a (near) optimal solution in the search space, and compares the result with that of RS, which explores the search space randomly. In our approach, the initial population size, crossover probability, and mutation probability are $n \times \delta$, 0.5, and $\frac{1}{n}$, respectively, where $n$ is the number of architectural decision variables and $\delta$ is the average number of alternatives per architectural decision variable. These values are the same in the other two experiments. In this experiment, RS randomly produces $n \times \delta$ solutions (the same as the population size in our approach) in each iteration.
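As a sketch of this setup (the function names and the encoding of a solution as a tuple of chosen alternatives are our illustrative assumptions, not the paper's actual implementation):

```python
import random

def ga_parameters(n, delta):
    """Parameters used in the experiments: population size n*delta,
    crossover probability 0.5, and mutation probability 1/n."""
    return {
        "population_size": round(n * delta),
        "crossover_prob": 0.5,
        "mutation_prob": 1.0 / n,
    }

def random_search_iteration(decision_values, fitness, pop_size):
    """One RS iteration: sample pop_size random combinations
    (the same number as the GA population) and keep the best."""
    best, best_fit = None, float("-inf")
    for _ in range(pop_size):
        candidate = tuple(random.choice(vals) for vals in decision_values)
        f = fitness(candidate)
        if f > best_fit:
            best, best_fit = candidate, f
    return best, best_fit
```

For DPA ($n = 7$, $\delta = 3.7$), for example, both the GA population and each RS iteration would contain about 26 candidate solutions.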

The stopping criterion for both our approach and RS is defined by how close the elitist solution is to the optimal solution found in the previous experiment (the exhaustive search in Section \ref{sec:baseline}). In general, it is difficult to anticipate the time required to find an optimal solution because both are randomized algorithms. However, to compare our approach with RS, we can consider a near-optimal solution that is close to the best solution (as in a Las Vegas algorithm \cite{alghandbook1998}). In this experiment, we assume that the algorithms terminate when the difference between the elitist chromosome (hereinafter, ``the elitist'') in the population and the best combination found by the exhaustive search in Section \ref{sec:baseline} is smaller than 5\% of the best combination's fitness (i.e., \texttt{if} $Fit(best) - Fit(elitist) < 0.05 \cdot Fit(best)$, \texttt{then} terminate, where $Fit(a)$ evaluates chromosome $a$ with the value (utility) function described in Section \ref{sec:quvari}).
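The termination test itself is a one-line check; the sketch below assumes positive fitness values, as a utility function yields:

```python
def should_terminate(fit_best, fit_elitist, tolerance=0.05):
    """Stop when the elitist is within 5% of the known best:
    Fit(best) - Fit(elitist) < tolerance * Fit(best)."""
    return fit_best - fit_elitist < tolerance * fit_best
```

For instance, with $Fit(best) = 100$, an elitist of fitness 96 terminates the run, while one of fitness 94 does not.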

As shown in Figure \ref{fig:lvtime}, when $n > 10$ the time our approach requires to obtain a (near) optimal solution is very short compared to the time taken by the exhaustive search and RS (for each number of architectural decision variables, we ran 100 tests). In addition, the elapsed time of RS increases rapidly (note that the Y-axis is logarithmic) as the number of architectural decision variables increases. Similar results are obtained when counting the number of generations (or iterations for RS) required to obtain a (near) optimal solution, as shown in Figure \ref{fig:lvgen}: the number of generations of our approach does not increase proportionally, while the number of iterations of RS increases rapidly (note that the Y-axis is logarithmic).

%\begin{landscape}
\begin{figure}[hbtp]
\centering \includegraphics[width=0.49\textwidth]{fig/lvtime.eps} \caption{Elapsed time to obtain a (near) optimal solution for each number of architectural decision variables. GA and RS represent our approach and the random search, respectively. Note that the Y-axis is logarithmic.} \label{fig:lvtime}
\end{figure}
%\end{landscape}

%\begin{landscape}
\begin{figure}[hbtp]
\centering \includegraphics[width=0.49\textwidth]{fig/lvgen.eps} \caption{Number of generations required to obtain a (near) optimal solution. GA and RS represent our approach and the random search, respectively. Note that the Y-axis is logarithmic.} \label{fig:lvgen}
\end{figure}
%\end{landscape}




The second experiment compares the accuracy of our approach to that of HC. We conducted two sub-experiments: (1) a comparison between our approach and HC in terms of accuracy and (2) a measurement of the convergence speed of our approach. For the first sub-experiment, we defined the stopping criterion of our approach as a fixed number of generations (as in a Monte Carlo simulation \cite{1949}); that is, the approach stops after a fixed number of generations, set to $n \times \delta \times 10$, where $n$ is the number of architectural decision variables and $\delta$ is the average number of alternatives per architectural decision variable. HC stops the search when a set of neighboring chromosomes (of size $2 \times n$ in this experiment) of the best-so-far chromosome contains no chromosome better than the best-so-far chromosome. For both our approach and HC, we ran 100 tests for each number of architectural decision variables.
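The HC variant used here can be sketched as follows. Sampling single-variable changes to form the $2 \times n$ neighbourhood is our reading of the setup, and all identifiers are illustrative:

```python
import random

def hill_climb(decision_values, fitness):
    """Hill climbing as configured in the experiment: sample a
    neighbourhood of 2*n chromosomes, each differing from the
    best-so-far in one decision variable, and stop when none is better."""
    n = len(decision_values)
    current = tuple(random.choice(vals) for vals in decision_values)
    while True:
        neighbours = []
        for _ in range(2 * n):
            i = random.randrange(n)
            alt = random.choice(decision_values[i])
            neighbours.append(current[:i] + (alt,) + current[i + 1:])
        best_neighbour = max(neighbours, key=fitness)
        if fitness(best_neighbour) <= fitness(current):
            return current  # local optimum: no sampled neighbour improves
        current = best_neighbour
```

Because each iteration either strictly improves the best-so-far or terminates, the loop always halts, but it can stop at a local optimum, which is the weakness the accuracy comparison exposes.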

The result of the first sub-experiment is shown in Figure \ref{fig:gahc}: our approach outperforms HC for every number of architectural decision variables. The Y-axis represents the accuracy of solutions relative to the best one found by the exhaustive search (the experiment in Section \ref{sec:baseline}); this accuracy is calculated as $100 - [\frac{Fit(best) - Fit(elitist)}{ Fit(best)} \times 100]$ (where $Fit(elitist)$ is replaced by $Fit(\mbox{best-so-far})$ for HC). Every run took less than 100$ms$ for both our approach and HC.
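The accuracy measure translates directly into code; for example, an elitist reaching 95\% of the best fitness scores an accuracy of 95:

```python
def accuracy(fit_best, fit_elitist):
    """Accuracy relative to the exhaustive-search best:
    100 - [(Fit(best) - Fit(elitist)) / Fit(best) * 100]."""
    return 100 - (fit_best - fit_elitist) / fit_best * 100
```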

%\begin{landscape}
\begin{figure}[hbtp]
\centering \includegraphics[width=0.49\textwidth]{fig/gahc.eps} \caption{Accuracy of our approach and that of the hill climbing search. GA and HC represent our approach and the hill climbing search, respectively.} \label{fig:gahc}
\end{figure}
%\end{landscape}

The second sub-experiment verifies how quickly the solutions produced by our approach approach the best solution. We fixed the stopping criterion at 100 generations and recorded the value of the elitist of each generation up to the $100^{th}$ generation. The result is shown in Figure \ref{fig:mctest}; the Y-axis represents the ratio of proximity to the best solution found by the exhaustive search. For every number of architectural decision variables, the elitist quickly approaches the best solution; in most cases, it converges to a near-optimal solution within 40 generations. The convergence speed may vary between runs; however, this does not affect the finding that the chromosomes produced by our approach ultimately approach near-optimal solutions. Further, the elapsed time for 100 generations is less than 10$ms$ for every number of architectural decision variables.

%\begin{landscape}
\begin{figure}[hbtp]
\centering \mbox{\includegraphics[width=0.49\textwidth]{fig/mctest.eps}} \caption{Ratio of proximity to near-optimal chromosomes (ADV = the number of architectural decision variables).} \label{fig:mctest}
\end{figure}
%\end{landscape}

The third experiment evaluates the performance of our approach, RS, and HC for a large number of architectural decision variables ($n \geq 19$). In this case, the exhaustive search takes far too long to find the best solution, so it is difficult to apply (1) the stopping criterion used for our approach and RS in the first experiment and (2) the accuracy evaluation method used for our approach and HC in the second experiment. Instead, we can regard the elitist as the best, or close to the best, by comparing it with the elitists of previous generations: if the elitist has not changed for a very long time, we can assume that it is the best, or close to the best, or that the search method will not find a better one in further generations (so additional generations may be unnecessary). In this experiment, the required number of generations during which the elitist must remain unchanged is set to $10 \times n \times \delta$ for our approach and RS; the stopping criterion for HC is the same as in the second experiment. For each number of architectural decision variables, we ran 100 tests.
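The stagnation-based criterion can be sketched as a small bookkeeping object (the class name and interface are ours, introduced only for illustration):

```python
class StagnationStop:
    """Stop when the elitist fitness has not improved for
    10 * n * delta consecutive generations."""

    def __init__(self, n, delta):
        self.limit = round(10 * n * delta)
        self.best_fit = float("-inf")
        self.stagnant = 0

    def update(self, elitist_fit):
        """Record one generation; return True once the search should stop."""
        if elitist_fit > self.best_fit:
            self.best_fit = elitist_fit
            self.stagnant = 0
        else:
            self.stagnant += 1
        return self.stagnant >= self.limit
```

Unlike the 5\% criterion of the first experiment, this check needs no knowledge of the true best solution, which is what makes it usable when exhaustive search is infeasible.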

For large numbers of architectural decision variables, our approach outperformed RS and HC. The result of the experiment is shown in Figure \ref{fig:largefitness}: our approach produced better solutions than HC and RS for every subject. In addition, our approach outperformed RS in time, as shown in Figure \ref{fig:largetime} (HC produced solutions in less than 100$ms$ for every number of architectural decision variables).

%\begin{landscape}
\begin{figure}[hbtp]
\centering \includegraphics[width=0.49\textwidth]{fig/largefitness.eps} \caption{Fitness values of the solutions produced by our approach (GA), HC, and RS. HC and RS represent the hill climbing and the random search, respectively.} \label{fig:largefitness}
\end{figure}
%\end{landscape}

%\begin{landscape}
\begin{figure}[hbtp]
\centering \includegraphics[width=0.49\textwidth]{fig/largetime.eps} \caption{Required time to perform the tests shown in Figure \ref{fig:largefitness}. GA and RS represent our approach and the random search, respectively.} \label{fig:largetime}
\end{figure}
%\end{landscape}


\subsection{Analysis of Performance Evaluation}

We have conducted a baseline experiment (the exhaustive search described in Section \ref{sec:baseline}) and three further experiments (described in Section \ref{sec:performance}). The result of the first experiment, which measured the time and the number of generations required to reach a near-optimal solution (i.e., speed), shows that our approach outperforms RS.
This implies that the application can rapidly converge to a near-optimal combination of architectural decision variables and can provide a new, near-optimal architectural instance to the user in response to changes in situations and requirements.

%This implies that our approach is more faster for the dynamic architectural selection problem of mobile application than the random search even though both our approach and RS are randomized algorithms.

The result of the second experiment, which measured the accuracy of our approach with a small number of generations (i.e., the stopping criterion was a fixed number of generations), shows that our approach is more accurate than HC (the first sub-experiment) and rapidly converges to a near-optimal solution (the second sub-experiment) even within a small number of generations. This implies that an application that applies our approach can find a near-optimal, relatively accurate architectural instance in a short time.

The result of the third experiment, which compared the performance of our approach, HC, and RS for a large number of architectural decision variables, shows that our approach outperformed both RS and HC. In particular, the variance of the solutions produced by our approach was smaller than the variances of those produced by RS and HC. This implies that our approach produces relatively more consistent solutions than RS and HC, which was also observed in the other two experiments.

% Therefore, we can fix the number of generations for each change.

Note that we used a different stopping criterion in each experiment. In the first experiment, our approach and RS stopped when the fitness difference between the elitist solution and the best solution was lower than 5\%. In practice, however, this criterion is not applicable to real applications because the application cannot know the best solution for every change in the situation and requirements. In the second experiment, we fixed the number of generations as the stopping criterion. A fixed number of generations can be effective for a relatively small number of architectural decision variables, but may not be for larger numbers. Therefore, in the third experiment, our approach stopped when the elitist had not changed for a specified number of generations ($10 \times n \times \delta$).

This is one of the most widely used stopping criteria in genetic algorithm applications. It is applicable to practical systems because the number of generations required to terminate grows proportionally (not exponentially) as the number of architectural decision variables increases. Therefore, we recommend this criterion for practical applications.
