%----------------------------------------------------------------------------------------
%	PROBLEM 1
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem
\newpage
\begin{homeworkProblem}
\section{Ant Colony Optimization} \label{aco}
\subsection{Problem statement}
Implement two stochastic local search (SLS) algorithms for the traveling salesman problem with time windows (TSPTW), building on the perturbative local search methods from the first implementation exercise.
\begin{enumerate}
  \item Run each algorithm 25 times with a different random seed on each instance. Instances will be available from http://iridia.ulb.ac.be/\textasciitilde{}stuetzle/Teaching/HO/. As termination criterion, for each instance, use the maximum computation time it takes to run a full VND (implemented in the previous exercise) on the same instance, multiplied by 1000 (to allow for long enough runs of the SLS algorithms).
 \item Compute the following statistics for each of the two SLS algorithms and each instance:
 \begin{itemize}
   \item Percentage of runs with constraint violations
   \item Mean penalized relative percentage deviation
 \end{itemize}

\item Produce box-plots of penalized relative percentage deviation.
\item Determine, using statistical tests (in this case, the Wilcoxon test), whether there is a statistically significant difference between the quality of the solutions generated by the two algorithms.
\item Measure, for each of the implemented algorithms on 5 instances, the run-time distributions to reach sufficiently high quality solutions (e.g. best-known solutions available at http://iridia.ulb.ac.be/\textasciitilde{}manuel/tsptw-instances\#instances).
Measure the run-time distributions across 25 repetitions using a cut-off time of 10 times the termination criterion above.
\item Produce a written report on the implementation exercise:
\begin{itemize}
  \item Please make sure that each implemented SLS algorithm is appropriately described and that the computational results are carefully interpreted. Also justify the choice of the parameter settings and the choice of the iterative improvement algorithm for the hybrid SLS algorithm.
  \item Present the results as in the previous implementation exercise (tables, box-plots, statistical tests).
  \item Present graphically the results of the analysis of the run-time distributions.
  \item Interpret the results appropriately and draw conclusions on the relative performance of the algorithms across all the benchmark instances studied.
\end{itemize}
\end{enumerate}

\subsection{Introduction} \label{sec:introACO}
Ant Colony Optimization (ACO) is an example of a population-based meta-heuristic (i.e., a set of algorithmic concepts that can be used to define heuristic methods), inspired by the behavior of the ant species \emph{Iridomyrmex humilis}.

To be more precise, these insects are able, by means of stigmergic communication, to choose the shortest path between their nest and a food source when given the choice (cf. \cite{deneubourg1990self}).

The communication process occurs by depositing a certain quantity of pheromone in the environment, which can be sensed by the other ants and used by them as a heuristic (i.e., information guiding their choice) for selecting the shortest path.

Furthermore, the pheromone quantity on a certain location decreases over time because of evaporation, thus requiring a continuous deposit process to be effective.

Convergence to one of the paths occurs as a consequence of the self-reinforcing pheromone deposit mechanism: the more pheromone is deposited on a path, the more ants will follow the pheromone trail on that path, depositing even more pheromone.

The first application of the Ant Colony Optimization method, the Ant System, was made on the optimization version of the Traveling Salesman Problem (TSP) (cf. \cite{dorigo1996ant}).

In this implementation, a population of virtual agents (an ant colony) is used to explore the search space (the virtual environment).

In the same fashion as the real insects, the simulated ants are able to deposit virtual pheromone in the environment, to signal to the other ants the presence of promising solutions.

The general outline of the ACO meta-heuristic is the following: 

\begin{algorithm}[!h]
  \caption{Ant Colony Optimization - Outline}\label{aco:outline}
  \begin{algorithmic}[1]
    \State \emph{InitializePheromoneTrail}
    \While{!(TerminationCondition)}
        \State \emph{ConstructAntsSolutions}
        \State \emph{LocalSearch} (Optional)
        \State \emph{UpdatePheromoneTrails}
    \EndWhile
\end{algorithmic}
\end{algorithm}

The design of the solution construction and pheromone update mechanism has a great impact on the performance of the algorithm.
Implementation details of the basic ACO algorithm, the Ant System, can be found in \cite{dorigo2006artificial}.

\subsection{Algorithm structure} \label{sec:algstrucACO}
The proposed algorithm draws inspiration from one of the extensions to the Ant Colony Optimization meta-heuristic framework, the \maxmin Ant System (cf. \cite{stutzle2000max}).

The main differences of the \maxmin Ant System with respect to the basic ACO approach are the following:
\begin{itemize}
  \item Only iteration best or best-so-far ants update pheromone.
  \item A local search after the solution generation is used to further improve the solutions found by the ants at each iteration.
  \item Pheromone trails have explicit lower and upper limits: $\tau_{\min} \leq \tau_{i,j}(t) \leq \tau_{\max}$ for all $t$.
  \item Pheromone trails are re-initialized when stagnated.
\end{itemize}

The implemented algorithm differs from the \maxmin Ant System in the implementation of the pheromone update.
Here, the pheromone deposit is made by all the ants, each of them deciding stochastically whether to deposit pheromone on the best-so-far solution or on the iteration-best one.

\begin{itemize}
    \item The initialization of the pheromone trails to their upper bound favors diversification at the beginning of each trial.
    \item The pheromone update rule favors exploitation of (intensification on) the best solutions at each iteration of the algorithm.
    \item By bounding the intensity of the pheromone trails, the probability of stagnation (i.e. all the ants converging and exploiting a single sub-optimal tour) is reduced.
    % \item If the pheromone trails values for the solution components of a certain tour $s$ are equal to $\tau_{\max}$, the algorithm is said to be converged.
  \end{itemize}
  

\begin{algorithm}[!h]
  \caption{\maxmin Ant System for TSPTW - Outline}\label{maxmintsptw}
  \begin{algorithmic}[1]
    \Procedure{ACO}{$\alpha,\beta,\rho,\varepsilon,t_{\max},f_{best}$}
    \Require $N$ - Node set
    \Require $E$ - Edge set 
    \Require $c$ - Edge cost function
    \Require $t$ - Time window function
    \State {\emph{InitializePheromoneTrail}($f_{best},n_{cities}$)}
    \While{\emph{!TerminationCondition}($s,t_{\max},f_{best}$)}
      \ForAll {Ant $k$}
        \State $s' \gets$ \emph{ConstructSolution}($\alpha,\beta$)
        \If {\emph{IsImproving}($s,s'$)}
          \State $s \gets s' $
        \EndIf 
      \EndFor
      \State $s \gets$ \emph{IterativeImprovementIBI}($s$)
      \State \emph{UpdatePheromoneTrails}($\rho,\varepsilon$)
    \EndWhile
    \State \textbf{return} $s$
  \EndProcedure
\end{algorithmic}
\end{algorithm}

\begin{center}
  
\begin{minipage}{.45\textwidth}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm,
                    semithick]
  \tikzstyle{every state}=[fill=blue!20,draw=none,thick]

  \node[initial,state]    (Init)                   {};
  \node[state]            (CI) [right of=Init]     {\emph{CI}};
  \node[state]            (CS) [above right of=CI] {\emph{CS}};
  \node[state]            (LS) [below right of=CI] {\emph{LS}};
  \node[state,accepting]  (End)[below left of=CI]  {$\Omega$};

  \path (Init) edge              node {DET:\emph{IPT}()} (CI)
        (CI) edge              node {CDET(not \emph{TC})} (CS)
             edge [left]       node {CDET(\emph{TC})} (End)
        (CS) edge [loop above] node {CDET(not \emph{CC})} (CS)
            edge              node {CDET(\emph{CC})} (LS)
        (LS) edge [loop below] node {CDET(not \emph{LO})} (LS)
            edge               node {CDET(\emph{LO})} (CI);
\end{tikzpicture}
\end{minipage}%
\hspace{1.5cm}
\begin{minipage}{.45\textwidth}
\centering
\paragraph{Nodes}
\begin{itemize}
  \item \emph{CI} $\equiv$ Dummy node
  \item \emph{CS} $\equiv$ \emph{ConstructSolution}($\alpha,\beta$)
  \item \emph{LS} $\equiv$ \emph{IterativeImprovementIBI}()
\end{itemize}
\paragraph{Conditions}
\begin{itemize}
  \item \emph{IPT} $\equiv$ \emph{InitializePheromoneTrail}()
  \item \emph{CC} $\equiv$ \emph{ConstructionComplete}()
  \item \emph{LO} $\equiv$ \emph{LocalOptimum}()
  \item \emph{TC} $\equiv$ \emph{TerminationCondition}($s,t_{\max},f_{best}$)
\end{itemize}
\end{minipage}
\captionof{figure}{Implemented Ant System GLSM}
\end{center}


\subsubsection{Pheromone Initialization}
\begin{algorithm}[!h]
  \caption{Pheromone Initialization}\label{init}
  \begin{algorithmic}[1]
    \Procedure{InitializePheromoneTrail}{$f_{best},n_{cities}$}
      \State $a \gets 10$
      \State $\tau_{\max} \gets \frac{1}{\rho \cdot f_{best}}$
      \State $\tau_{\min} \gets \frac{\tau_{\max}}{a}$
      \For{$i \gets 0,\ldots,n_{cities}-1$}
        \For{$j \gets 0,\ldots,i-1$}
          \State $\tau_{ij} \gets \tau_0$ \Comment{$\tau_0 = \tau_{\max}$}
          \State $\tau_{ji} \gets \tau_{ij}$
        \EndFor
      \EndFor
    \EndProcedure
\end{algorithmic}
\end{algorithm}

As discussed in \nameref{sec:introACO}, the ACO methods are based on stigmergic communication among the agents by means of virtual pheromone.

While the real ants can deposit pheromone anywhere in the environment, the virtual ants may only exchange information concerning solution components.

For this reason, every admissible edge $e_{i,j}$ of $E$ has an associated pheromone value $\tau_{i,j}$, which has to be initialized at the beginning of the execution of the algorithm.

The initialization value $\tau_0$ is a parameter of the algorithm; for the \maxmin Ant System, $\tau_0 = \tau_{\max}$.

Provided that the optimal solution value is known, the bounds on the pheromone values can be determined as:

\begin{equation}
\begin{array}{lccr}
  \tau_{\max} = \frac{1}{\rho \cdot f_{best}} & & & \tau_{\min} = \frac{\tau_{\max}}{a}
\end{array}
\end{equation}

Whenever the global optimum value is unknown, the bounds can be estimated from the best solution found so far:

\begin{equation}
\begin{array}{lccr}
  \hat{\tau}_{\max} = \frac{1}{\rho \cdot T_{d}^{\text{Bo}}} & & & \hat{\tau}_{\min} = \frac{\hat{\tau}_{\max}}{a}
\end{array}
\end{equation}
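For illustration, the bound estimation and trail initialization can be sketched in Python as follows (function and variable names are illustrative choices, not those of the actual implementation):

```python
def pheromone_bounds(t_best, rho, a=10.0):
    """Estimate the MMAS trail limits from the best-so-far tour duration.

    t_best: duration of the best tour found so far (stands in for the
    unknown optimum); rho: evaporation rate; a: ratio tau_max / tau_min.
    """
    tau_max = 1.0 / (rho * t_best)  # hat(tau_max) = 1 / (rho * T_d^Bo)
    tau_min = tau_max / a           # hat(tau_min) = hat(tau_max) / a
    return tau_max, tau_min

def init_trails(n_cities, tau0):
    """Symmetric trail matrix: every admissible edge starts at tau0 = tau_max."""
    return [[tau0] * n_cities for _ in range(n_cities)]
```

Initializing all trails to the upper bound, as noted above, favors diversification at the start of each trial.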


\subsubsection{Solution construction}
\begin{algorithm}[!h]
  \caption{Solution Construction}\label{solConstr}
  \begin{algorithmic}[1]
    \Procedure{ConstructSolution}{$\alpha,\beta$}
      \State \Comment{$s_i$ represents the $i^{th}$ component of the solution}
      \State $s_0 \gets 0$ \Comment{Every solution starts at the depot}
      \State $s_1 \gets$ \emph{RandomCitySelection}() \Comment{Random choice of the starting city}
      \State $i \gets 1$
      \While{!\emph{SolutionComplete}()}
        \State $s_i \gets $ \emph{RouletteWheelSelection}() \Comment{$s_i$ stochastically chosen according to the probability distribution defined by Eq.~\ref{eq:tranprob}}
        \State $i \gets i+1$
      \EndWhile
    \EndProcedure
\end{algorithmic}
\end{algorithm}

The solution construction process, used by every ant $k$ in the system, consists of a probabilistic selection of solution components.
Every edge $e_{i,j}$ has a selection probability $p_{i,j}^k(t)$ (also called transition probability) defined as follows:

\begin{equation} \label{eq:tranprob}
p_{i,j}^k(t) = \begin{cases}
  \frac{[\tau_{i,j}(t)]^\alpha \cdot [\eta_{i,j}]^\beta}{\sum_{l \in A(s_{i})} [\tau_{i,l}(t)]^\alpha \cdot [\eta_{i,l}]^\beta} & j \in A(s_{i}) \\
 0 & \text{otherwise} \\
\end{cases}
\end{equation}

As one can see in Eq.~\ref{eq:tranprob}, the transition probability is determined by the constant, locally available heuristic information $\eta_{i,j}$ and by the time-varying pheromone trail $\tau_{i,j}(t)$.
This probability is defined on the set $A(s_i)$ of available (i.e., not yet visited) cities while visiting solution component $s_i$.
The values of the parameters $\alpha$ and $\beta$ determine the relative importance of the pheromone trail and the heuristic information, respectively.
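The probabilistic selection step can be sketched in Python as follows (the matrix layout and the function names are assumptions made for illustration):

```python
import random

def transition_probs(current, available, tau, eta, alpha, beta):
    """Selection probabilities p_{i,j} over the not-yet-visited cities.

    tau, eta: matrices of pheromone and heuristic values;
    available: iterable of candidate city indices, i.e. A(s_i)."""
    weights = {j: (tau[current][j] ** alpha) * (eta[current][j] ** beta)
               for j in available}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def roulette_wheel(probs, rng=random):
    """Stochastically pick a city according to its transition probability."""
    r = rng.random()
    cum = 0.0
    for city, p in probs.items():
        cum += p
        if r <= cum:
            return city
    return city  # guard against floating-point round-off
```

Cities outside $A(s_i)$ simply never enter the weight dictionary, which corresponds to the zero branch of Eq.~\ref{eq:tranprob}.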

\subsubsection{Admissible heuristics}
The heuristic component $\eta_{i,j}$ is used to bias the selection of solution components towards those that are likely to be included in an optimal solution.

\paragraph{Dorigo et al., 1996}
The heuristic originally proposed by Dorigo et al. in \cite{dorigo1996ant} for the TSP is:
\begin{equation}
  \eta_{i,j} = \frac{1}{c(e_{i,j})}
\end{equation}

The main idea behind this heuristic is that if, at each step, the selection process tends to pick the shortest connection between the current node and the next, the resulting tour should be short.

This heuristic is presented for explanatory purposes only; it cannot be used for the TSPTW, since it would guide the exploration exclusively towards shorter solutions, without taking the time windows into account.

\paragraph{Cheng and Mao, 2007}
The local heuristics used in \cite{cheng2007modified} are similar to that proposed by Gambardella et al. \cite{gambardella1999macs} in their multiple ant colony system (MACS) designed to solve the vehicle routing problem with time windows (VRPTW).

\begin{equation}
[\eta_{i,j}]^\beta = [g_{i,j}]^\beta \cdot [h_{i,j}]^\gamma
\end{equation}  

The two components $g_{i,j}$ and $h_{i,j}$ are designed, respectively, to avoid lateness (arriving at a node after its time window has already closed) and waiting times (arriving at a node before its time window opens).

\subparagraph{Lateness avoidance}
\begin{equation}
g_{i,j} = \begin{cases}
 \frac{1}{1+e^{\delta \cdot (G_{i,j} - \mu)}}  &  G_{i,j} = b_j - t_j \geq 0 \\
0 & \text{otherwise} \\
\end{cases}
\end{equation}
where
\begin{itemize}
  \item $G_{i,j} = b_j - t_j$ - Slack corresponding to the time window $j$ while being in node $i$
  \item $t_i$ - Arrival time at node $i$
  \item $b_i$ - Closing time of time window $i$
  \item $G(i) = \{k\text{ } | \text{ }G_{i,k} \geq 0\}$ - Set of feasible neighbors of node $i$ (i.e. such that node $k$ is reached earlier than its closing time)
  \item $\mu = \frac{1}{|G(i)|}\sum_{j \in G(i)}  G_{i,j}$ - Average slack 
  \item $\delta$ - Parameter to control the slope of the sigmoidal function
\end{itemize}

\subparagraph{Waiting time avoidance}
\begin{equation}
h_{i,j} = \begin{cases}
 \frac{1}{1+e^{\lambda \cdot (H_{i,j} - \upsilon)}}  &  H_{i,j} = t_j - a_j \geq 0 \\
0 & \text{otherwise} \\
\end{cases}
\end{equation}
where
\begin{itemize}
  \item $H_{i,j} = t_j - a_j$ - Waiting time corresponding to the time window $j$ while being in node $i$
  \item $t_i$ - Arrival time at node $i$
  \item $a_i$ - Opening time of time window $i$
  \item $H(i) = \{k\text{ } | \text{ }H_{i,k} \geq 0\}$ - Set of non-waiting neighbors of node $i$ (i.e., such that node $k$ is reached within the time window)
  \item $\upsilon = \frac{1}{|H(i)|}\sum_{j \in H(i)}  H_{i,j}$ - Average waiting time
  \item $\lambda$ - Parameter to control the slope of the sigmoidal function
\end{itemize}
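The lateness-avoidance component can be sketched as follows; the waiting-time component $h_{i,j}$ is obtained analogously by replacing slacks with waiting times and $\delta$ with $\lambda$ (all names here are illustrative):

```python
import math

def sigmoid_heuristic(x, mean, slope):
    """1 / (1 + e^{slope * (x - mean)}): high when x lies below the mean."""
    return 1.0 / (1.0 + math.exp(slope * (x - mean)))

def lateness_heuristic(arrivals, closings, delta):
    """g_{i,j} for every candidate j reachable from the current node i.

    arrivals[j]: arrival time t_j at node j; closings[j]: closing
    time b_j. Nodes with negative slack (already late) get g = 0."""
    slacks = {j: closings[j] - arrivals[j] for j in arrivals}
    feasible = {j: s for j, s in slacks.items() if s >= 0}
    if not feasible:
        return {j: 0.0 for j in arrivals}
    mu = sum(feasible.values()) / len(feasible)  # average slack
    return {j: sigmoid_heuristic(s, mu, delta) if s >= 0 else 0.0
            for j, s in slacks.items()}
```

Note how the sigmoid assigns larger values to feasible nodes with below-average slack, i.e. the heuristic prioritizes nodes whose windows close soonest.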


\paragraph{Lopez-Ibanez and Blum, 2010}
The approach used in \cite{lopez2010beam} is a linear combination, with coefficients $\lambda_{a},\lambda_{b},\lambda_{c}$, of the normalized values of the opening and closing times of the time windows and of the traveling cost from one city to another.

\begin{equation} \label{eq:heuristic}
\eta_{i-1,k} = \lambda_{a} \cdot \frac{a_{\max}-a_{k}}{a_{\max}-a_{\min}} + \lambda_{b} \cdot \frac{b_{\max}-b_{k}}{b_{\max}-b_{\min}} + \lambda_{c} \cdot \frac{c_{\max}-c_{i-1,k}}{c_{\max}-c_{\min}}
\end{equation}

where
\begin{itemize}
  \item $a_i$ - Opening time of time window $i$
  \item $a_{\max} = \max_{j \in N} a_{j}$ - Maximum time window opening time in the neighborhood of node $i$
  \item $a_{\min} = \min_{j \in N} a_{j}$ - Minimum time window opening time in the neighborhood of node $i$
  \item $b_i$ - Closing time of time window $i$
  \item $b_{\max} = \max_{j \in N} b_{j}$ - Maximum time window closing time in the neighborhood of node $i$
  \item $b_{\min} = \min_{j \in N} b_{j}$ - Minimum time window closing time in the neighborhood of node $i$
  \item $c_{i,j}$ - Traveling cost from node $i$ to node $j$
  \item $c_{\max} = \max_j c_{i,j}$ - Maximum traveling cost from node $i$ 
  \item $c_{\min} = \min_j c_{i,j}$ - Minimum traveling cost from node $i$
  \item $\lambda_{a},\lambda_{b},\lambda_{c}$ s.t. $\lambda_{a}+\lambda_{b}+\lambda_{c}=1$ - Randomly selected weights
\end{itemize}

In this implementation, the heuristic information is computed according to \cite{lopez2010beam}.
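A possible Python rendering of Eq.~\ref{eq:heuristic} (the dictionary-based interface and the handling of the degenerate all-equal case are assumptions for illustration):

```python
def heuristic_info(candidates, a, b, cost, lam):
    """eta_{i-1,k}: linear combination of normalized window and cost terms.

    candidates: unvisited city indices; a[k], b[k]: window opening and
    closing times; cost[k]: travel cost from the current city to k;
    lam: (lambda_a, lambda_b, lambda_c) weights summing to 1."""
    la, lb, lc = lam

    def norm(vals):
        lo, hi = min(vals.values()), max(vals.values())
        span = hi - lo
        # All-equal case: every candidate equally attractive on this term
        return {k: (hi - v) / span if span > 0 else 1.0
                for k, v in vals.items()}

    na = norm({k: a[k] for k in candidates})
    nb = norm({k: b[k] for k in candidates})
    nc = norm({k: cost[k] for k in candidates})
    return {k: la * na[k] + lb * nb[k] + lc * nc[k] for k in candidates}
```

Each term equals 1 for the candidate with the smallest opening time, closing time, or travel cost, and 0 for the largest, matching the $(x_{\max}-x_k)/(x_{\max}-x_{\min})$ normalization.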

\subsubsection{Solution improvement}

\begin{algorithm}[H]
  \caption{Solution improvement}\label{solImprov}
  \begin{algorithmic}[1]
    \Procedure{IsImproving}{$s,s'$}
      \If {$\Omega(s') < \Omega(s) \vee \left(\Omega(s') = \Omega(s) \wedge f(s') < f(s)\right)$}
          \State \textbf{return true}
      \EndIf
      \State \textbf{return false}
      \EndProcedure
\end{algorithmic}
\end{algorithm}

A solution $s'$ is considered an improvement over the current best solution $s$ if and only if:
\begin{itemize}
  \item either it has a smaller number of constraint violations (i.e. $\Omega(s') < \Omega(s)$),
  \item or it has the same number of constraint violations ($\Omega(s') = \Omega(s)$) but a smaller total tour duration ($f(s') < f(s)$).
\end{itemize}


\subsubsection{Pheromone trails update}
\begin{algorithm}[H]
  \caption{Pheromone Trails Update}\label{update}
  \begin{algorithmic}[1]
    \Procedure{UpdatePheromoneTrails}{$\rho,\varepsilon$}
      \For{$i \gets 0,\ldots,n_{cities}-1$}
        \For{$j \gets 0,\ldots,i-1$}
          \State $\tau_{ij} \gets (1-\rho)\cdot\tau_{ij}$ \Comment{Evaporation, applied once per edge}
          \ForAll{Ant $k$}
            \If{\emph{Random}() $< \varepsilon$}
              \State $\tau_{ij} \gets \tau_{ij}+\Delta\tau_{i,j}^{Bo}$
            \Else
              \State $\tau_{ij} \gets \tau_{ij}+\Delta\tau_{i,j}^{Bi}$
            \EndIf
          \EndFor
          \If{$\tau_{ij} < \tau_{\min}$}
            \State $\tau_{ij} \gets \tau_{\min}$
          \EndIf
          \If{$\tau_{ij} > \tau_{\max}$}
            \State $\tau_{ij} \gets \tau_{\max}$
          \EndIf
          \State $\tau_{ji} \gets \tau_{ij}$
        \EndFor
      \EndFor
    \EndProcedure
\end{algorithmic}
\end{algorithm}

As discussed in \nameref{sec:algstrucACO}, the pheromone update is made by all the ants, each updating the pheromone values either on the trail corresponding to the best solution of the current iteration:

\begin{equation}
  \Delta\tau_{i,j}^{Bi} = \begin{cases}
    \frac{1}{T_{d}^{\text{Bi}}} & e_{i,j} \in T^{\text{Bi}}  \\
    0 & \text{otherwise} 
      \end{cases}
\end{equation}

Or the global best (best-so-far) solution:

\begin{equation}
  \Delta\tau_{i,j}^{Bo} = \begin{cases}
    \frac{1}{T_{d}^{\text{Bo}}} & e_{i,j} \in T^{\text{Bo}}  \\
    0 & \text{otherwise} 
  \end{cases}
\end{equation}

where
\begin{itemize}
\item $e_{i,j}$ - Edge connecting node $i$ and $j$
\item $T_{d}^{i}$ - Total duration of tour $T^{i}$
\item $T^{\text{Bi}}$ - Best tour of the current iteration
\item $T^{\text{Bo}}$ - Best tour overall 
\end{itemize}

The choice between the two aforementioned solutions is made stochastically, with probability $\varepsilon$ (a parameter of the algorithm) of choosing the best-so-far solution.
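Putting the deposit rules, the stochastic choice, and the trail limits together, one update step can be sketched in Python (the data layout and names are assumptions for illustration):

```python
import random

def update_pheromones(tau, rho, eps, best_iter, best_overall,
                      d_iter, d_overall, tau_min, tau_max, n_ants,
                      rng=random):
    """One update step of the implemented scheme: evaporate every edge
    once, let each ant deposit on the best-so-far tour (with probability
    eps) or on the iteration-best tour, then clamp into [tau_min, tau_max].

    best_iter / best_overall: tours as city sequences; d_iter /
    d_overall: their total durations (T_d^Bi, T_d^Bo)."""
    edges_bi = set(zip(best_iter, best_iter[1:]))
    edges_bo = set(zip(best_overall, best_overall[1:]))
    n = len(tau)
    for i in range(n):
        for j in range(i):
            tau[i][j] *= (1.0 - rho)  # evaporation, once per edge
            for _ in range(n_ants):
                if rng.random() < eps:
                    if (i, j) in edges_bo or (j, i) in edges_bo:
                        tau[i][j] += 1.0 / d_overall  # Delta tau^Bo
                elif (i, j) in edges_bi or (j, i) in edges_bi:
                    tau[i][j] += 1.0 / d_iter         # Delta tau^Bi
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))
            tau[j][i] = tau[i][j]  # keep the trail matrix symmetric
    return tau
```

Evaporation is applied once per edge, outside the per-ant loop, so that only the deposit (not the decay) scales with the number of ants.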

\subsubsection{Local search}
\begin{algorithm}[H]
  \caption{Iterative Improvement - (Insert neighborhood with best-improvement pivoting rule)}\label{locsearch}
  \begin{algorithmic}[1]
    \Procedure{IterativeImprovementIBI}{$s$}
      \State $s^{*} \gets s$ 
      \For {$i \in \{1,\cdots,|N|\}$}
        \For {$j \in \{1,\cdots,|N|\}$}
          \If{ $i = j \vee  j = i-1 $}
				    \State \textbf{continue}
			    \EndIf
			    \State $s' \gets$ \emph{InsertTourComponent}($s,i,j$)
			    \If {\emph{IsImproving}($s^{*},s'$)}
          \State $s^{*} \gets s'$
        \EndIf
        \EndFor
      \EndFor
      \State \textbf{return} $s^{*}$
    \EndProcedure
\end{algorithmic}
\end{algorithm}

\begin{algorithm}[H]
  \caption{Insert neighbor solution computation}\label{insertNeighbor}
  \begin{algorithmic}[1]
    \Procedure{InsertTourComponent}{$s,i,j$}
      \State \Comment{$s_i$ represents the $i^{th}$ component of the solution}
      \State $e \gets s_i$
      \If{$i < j$}
        \State $k \gets i$
        \While{$k < j$}
          \State $s_k \gets s_{k+1}$
          \State $k \gets k+1$
        \EndWhile
      \Else
        \State $k \gets i$
        \While{$k > j$}
          \State $s_k \gets s_{k-1}$
          \State $k \gets k-1$
        \EndWhile
      \EndIf
      \State $s_j \gets e$
      \State \textbf{return} $s$
    \EndProcedure
\end{algorithmic}
\end{algorithm}

The local search procedure, a 1-shift best-improvement search in the neighborhood of the current best solution $s$, is performed after the tour construction and evaluation phase.
In this implementation, the local search is run only once per iteration, on the current best solution, instead of being launched on the solution constructed by each ant.
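The 1-shift move and a best-improvement pass over its neighborhood can be sketched in Python, using list operations in place of the explicit element shifting of the pseudocode (the `better` predicate stands for the \emph{IsImproving} criterion; names are illustrative):

```python
def insert_tour_component(tour, i, j):
    """1-shift move: remove the city at position i and re-insert it at j."""
    s = list(tour)
    city = s.pop(i)
    s.insert(j, city)
    return s

def iterative_improvement(tour, better):
    """Single best-improvement pass over the insert (1-shift) neighborhood.

    better(a, b): True iff b improves on a."""
    best = list(tour)
    n = len(tour)
    for i in range(1, n):          # position 0 is the depot, kept fixed
        for j in range(1, n):
            if j in (i, i - 1):    # moves that leave the tour unchanged
                continue
            cand = insert_tour_component(tour, i, j)
            if better(best, cand):
                best = cand
    return best
```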

\subsubsection{Termination condition}
\begin{algorithm}[H]
  \caption{Termination Condition}\label{aco:termcond}
  \begin{algorithmic}[1]
    \Procedure{TerminationCondition}{$s,t_{\max},f_{best}$}
          \If{ $(f(s) = f_{best} \wedge \Omega(s) = 0) \vee  t > t_{\max} $}
				    \State \textbf{return true}
			    \EndIf
      \State \textbf{return false}
    \EndProcedure
\end{algorithmic}
\end{algorithm}

The algorithm stops if the maximum time limit is exceeded or if the best-known solution is constructed by one of the ants during an iteration.
In the actual implementation, in order to estimate the run-time distribution over a sufficiently long run, only the time constraint is considered, while the global-optimum tour structure is stored in memory along with the time at which it was found.
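A small sketch of the termination predicate (the closure-based interface and the injectable clock are illustrative choices):

```python
import time

def make_termination_check(t_max, f_best, clock=time.monotonic):
    """Build a predicate closing over the start time: stop as soon as
    the best-known value is matched without violations, or when the
    time budget t_max is exhausted."""
    start = clock()
    def done(f_s, omega_s):
        return (f_s == f_best and omega_s == 0) or clock() - start > t_max
    return done
```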

\subsubsection{Implementation parameters}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Parameter} & \textbf{Default} & \textbf{Selected} \\ \hline 
$\alpha$ & 1.0 & 2.0 \\\hline
$\beta$ & 2.0 & 1.0 \\\hline 
$\rho$ & 0.8 & 0.8 \\\hline 
$\varepsilon$ & 0.5 & 0.5 \\ \hline 
$Ants$ & 25 & 25 \\ \hline  
$t_{\max}$ & 10[s] & 100[s] \\ \hline
\end{tabular}
\captionof{table}{ACO - Algorithm parameters overview}
\label{saParameters}
\end{center}

The value of $\alpha = 2 \cdot \beta$ was chosen in order to differentiate the contributions of the pheromone and heuristic information to the selection probability of a solution component.

With this parameter choice, the information exchanged by the ants through virtual stigmergy has a higher weight in the solution construction process than the information locally available to each ant (i.e. the heuristic).

The parameter $\varepsilon$ influences the trail update process by determining the probability with which the best-so-far solution, instead of the iteration-best one, is used for the pheromone update.

A value of $0.5$ provides an unbiased selection between the two.

The number of ants and the value of $\rho$ were chosen according to \cite{stutzle2000max}.

The maximum run-time $t_{\max}$ has been chosen equal to 100 seconds in order to analyze the run-time distribution of the algorithm over a sufficiently long run.

 
\end{homeworkProblem}
