\documentclass[a4paper,10pt,notitlepage]{article}
\usepackage{graphicx}
\usepackage[margin=0.8in]{geometry}
\usepackage{subfig}
\usepackage{appendix}
\usepackage{multirow}

\DeclareMathSizes{10.0}{11}{7}{10}


%opening
\title{Assignment 2: Particle Swarm Optimization}
\author{Carlos Garc\'{\i}a Cordero \\ \\ School of Informatics \\ University of Edinburgh}

\renewcommand{\abstractname}{Introduction}

\begin{document}

\maketitle

\section{Introduction}
\label{sec:intro}
Particle Swarm Optimization (PSO) is a popular heuristic optimization technique that simulates how certain groups (swarms) of individuals (particles) interact to achieve a common goal. Particles, such as birds in a flock or fish in a shoal, share a simple and limited set of rules. This collection of rules allows the swarm to behave as a single entity, extending the capabilities of the swarm beyond those of the individual particles.

The standard PSO algorithm, for one dimension, can be described as:
\begin{center}
  $v_{t+1} = \omega v_{t} + \alpha_1 r_1 [\hat{x} - x_t] + \alpha_2 r_2 [\hat{g} - x_t],$
  
  $x_{t+1} = x_t + v_{t+1}.$
\end{center}
The position and velocity of a particle at time $t$ are denoted by $x_t$ and $v_t$ respectively. The velocity consists of three components summed together: the \textit{inertia component} ($\omega v_{t}$), the \textit{cognitive component} ($\alpha_1 r_1 [\hat{x} - x_t]$) and the \textit{social component} ($\alpha_2 r_2 [\hat{g} - x_t]$). Each component is weighted by a factor ($\omega, \alpha_1, \alpha_2$) and, to add a stochastic property to the algorithm, $\alpha_1$ and $\alpha_2$ are multiplied by a random number, $r_i$, between 0 and 1. Although this definition applies to only one dimension, it loses no generality: in higher dimensions the same update is applied to each component independently.
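As a concrete illustration, the update rule above can be sketched in Python for a single one-dimensional particle. The function name and default weights are assumptions for illustration only; the weight values are the ones used later in this report.

```python
import random

def pso_step(x, v, x_best, g_best, omega=0.729, a1=1.494, a2=1.494):
    """One PSO update for a single one-dimensional particle.

    x, v    -- current position and velocity
    x_best  -- best position found so far by this particle
    g_best  -- best position found so far by the whole swarm
    """
    r1, r2 = random.random(), random.random()  # r_i uniform in [0, 1]
    v_new = (omega * v                    # inertia component
             + a1 * r1 * (x_best - x)     # cognitive component
             + a2 * r2 * (g_best - x))    # social component
    return x + v_new, v_new
```

In higher dimensions the same update is simply applied to every coordinate of the position and velocity vectors.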

This heuristic algorithm, like any other, possesses advantages and disadvantages that must be taken into account whenever it is used. PSO is generally fast and efficient when its parameters match the optimization problem at hand; establishing which parameters suit the problem, however, may be difficult. Compared to other algorithms, PSO has relatively few parameters that must be adjusted, and a set of good parameters can usually be generalized to many problems (\cite{El-Gallad2002}).

Standard particle swarms, however, have an inherent problem with the way they search solution spaces. The particle's velocity determines the space to examine for new candidate solutions. These spaces can be seen as the volume of an orthotope\footnotemark, where each edge has a length that corresponds to a component of the velocity. Therefore, if any of the velocity's components is zero, the solution space is dramatically reduced in size. This is especially evident when a component of the velocity is zero along an edge and the particle lies on that same edge: the particle will get stuck in a rather small solution space. It is also important to note that particle swarms have a bias in their bearing towards the axes. This by no means indicates that particle swarms are defective; it is simply a property that must be taken into account when using this heuristic algorithm (\cite{Spears2010}). It is plausible to exploit this property to find solutions faster if there is some domain knowledge that can be applied.

\footnotetext{Orthotopes are also known as hyperrectangles or boxes extended to $n$ dimensions.}


\section{Impact of $\alpha_1$ and $\alpha_2$ over the Algorithm's Performance}
\label{sec:impac_of_alphas}
Parameter selection in any heuristic algorithm is considered by many a black art. Most of the time, empirical observations determine suitable values for the parameters; there is no real analytical approach to selecting them. There have been multiple attempts, with partial success, at creating meta-heuristic algorithms that find good values for these parameters (\cite{Pedersen2010}). In general, however, these meta-heuristic algorithms find parameters that only work well for a particular set of problems.

\subsection{Behaviour of $\alpha_1$ and $\alpha_2$}
\label{sec:alphas_behaviour}
The behaviour of the PSO algorithm is now explored for different values of $\alpha_1$ and $\alpha_2$ over the interval $[-2,2]$, taking samples every $0.1$ steps. This run of the algorithm used the fixed parameters $\omega = 0.729$, swarm size $S = 50$ and number of iterations $I = 50$. These values were not selected arbitrarily: multiple studies agree that they usually generalize well to different problems (\cite{Clerc2005}, \cite{Pedersen2010}, \cite{Trelea2003}).

The results obtained from evaluating the simple sphere function $f(x)=\sum_i x_i^2$ in six dimensions are shown in Figure (\ref{fig:1_a}). The $X$ and $Y$ axes represent the parameter values used for $\alpha_1$ and $\alpha_2$ respectively. The $Z$ axis represents the ``meta-fitness'' of the heuristic algorithm. The meta-fitness, for a single test function, is defined as the sum of the optimization results from individual runs of the heuristic algorithm (\cite{Pedersen2010}). Because the sphere function has a single global minimum at zero, values close to zero represent better performance of the heuristic algorithm.

\begin{figure} [ht]
 \centering
  \fbox{
    \subfloat[]{\label{fig:1_a_3d}\includegraphics[scale=0.70]{figures/1_a.png}}
    \hspace{8mm}
    \subfloat[]{\label{fig:1_a_map}\includegraphics[scale=0.70]{figures/1_a_map.png}}
  }
 \caption{Meta-fitness performance of the PSO given $\omega = 0.729$.}
 \label{fig:1_a}
\end{figure}

From the graphs, it is clear that the algorithm performed best with $\alpha_1$ in the range $[0.1,2]$ and $\alpha_2$ in the range $[0.1, 1.5]$. Note that 6 dimensions were used, instead of the recommended 2, because the regions of best performance were not statistically clear with only 2 dimensions.

\subsection{Behaviour of $\omega$}
\label{sec:omega_bahaviour}
To further explore the effect that $\omega$ has on the performance of the algorithm, different values of $\omega$ were briefly examined. When $\omega$ was less than $0.729$, the behaviour of the meta-fitness did not change much. On the other hand, as $\omega$ increased beyond $0.729$, the range of suitable values for $\alpha_1$ and $\alpha_2$ shrank. If $\omega$ is greater than $1$, just as Clerc observed (\cite{Clerc2005}), no suitable values for $\alpha_1$ and $\alpha_2$ are found. These results are illustrated in Figure (\ref{fig:1_omegas}).

\begin{figure} [ht]
 \centering
  \fbox{
    \subfloat[$\omega = 0.8$]{\label{fig:1_e}\includegraphics[scale=0.54]{figures/1_e.png}}
    \subfloat[$\omega = 0.9$]{\label{fig:1_f}\includegraphics[scale=0.54]{figures/1_f.png}}
    \subfloat[$\omega = 1.1$]{\label{fig:1_g}\includegraphics[scale=0.54]{figures/1_g.png}}
  }
 \caption{Meta-fitness performance of the PSO given variable values of $\omega$.}
 \label{fig:1_omegas}
\end{figure}

\subsection{Behaviour of the Particles}
The performance of the heuristic algorithm is directly related to how the particles behave, and the particles assume different behaviours for different values of $\alpha_1$ and $\alpha_2$. For the particular problem of minimizing the sphere function, particles tend to converge fast when $\alpha_1 = \alpha_2 = 0.1$, to oscillate when $\alpha_1 = \alpha_2 = 0.5$ and to diverge when $\alpha_1 = \alpha_2 = 2$. These behaviours are shown in Figure (\ref{fig:particles_behaviour}). Each time the PSO algorithm was run with different values of $\alpha_1$ and $\alpha_2$, two random particles were followed from start to finish, recording their positions in the first and second dimensions.

\begin{figure} [ht]
 \centering
  \fbox{
    \subfloat[Particles Converging]{\label{fig:particles_converging}\includegraphics[scale=0.58]{figures/particles_converging.png}}
    \subfloat[Particles Oscillating]{\label{fig:particles_oscillating}\includegraphics[scale=0.58]{figures/particles_oscillating.png}}
    \subfloat[Particles Diverging]{\label{fig:particles_diverging}\includegraphics[scale=0.58]{figures/particles_diverging.png}}
  }
 \caption{Behaviour of the particles in the swarm.}
 \label{fig:particles_behaviour}
\end{figure}

In general, referring back to Figure (\ref{fig:1_a}): particles with $\alpha_1$ and $\alpha_2$ close to zero, in the black area, converge fast; as these parameters grow and approach the purple edges, particles oscillate; and in the purple or orange areas, particles diverge.


\section{Shortcomings and Improvements to Particle Swarms}
\label{sec:pso_shortcomings}
One of the main shortcomings of the standard PSO algorithm is its bias towards the axes. Particles end up exploring closer to the axes even if they are initially uniformly distributed in the $n$-dimensional space. Optimizations may be helped or hindered by this behaviour; therefore, we should refer to it as a property\footnotemark. Instead of trying to remove this property, a simple improvement to the standard PSO algorithm is explored in this section.

\footnotetext{In software development terms we would refer to this shortcoming as a feature; not a bug.}

\subsection{Slow Convergence Behaviour}
Let us now inspect the behaviour of the heuristic algorithm, paying special attention to the problem of slow convergence. When the particles oscillate heavily, slow convergence is to be expected. Section (\ref{sec:alphas_behaviour}) showed that some configurations of $\alpha_1$ and $\alpha_2$ displayed this exact pattern. To further explore this behaviour, the algorithm is forced to stop prematurely by setting the maximum number of iterations to $I = 10$. Figure (\ref{fig:2_a_maps}) shows two graphs comparing the two values of $I$. The black region (where the algorithm performs best) is drastically reduced when $I=10$, just as expected.

\begin{figure} [ht]
 \centering
  \fbox{
    \subfloat[$I=50$]{\label{fig:2_a_map1}\includegraphics[scale=0.70]{figures/1_a_map.png}}
    \hspace{8mm}
    \subfloat[$I=10$]{\label{fig:2_a_map2}\includegraphics[scale=0.70]{figures/2_a_map.png}}
  }
 \caption{Meta-fitness given different values for the number of iterations $I$.}
 \label{fig:2_a_maps}
\end{figure}

\subsection{$\omega$ is the Key}
The behaviour of $\omega$ was briefly explored in Section (\ref{sec:omega_bahaviour}). This is the key to solving the slow convergence problem: as $\omega$ becomes smaller, the interval of $\alpha_1$ and $\alpha_2$ where the algorithm performs well increases. This is because high $\omega$ values favour exploration, while small values favour exploitation (\cite{Trelea2003}).

\subsection{Proposed Variant to the Standard PSO}
\label{sec:pso_variant}
The proposed variation to the algorithm uses a variable $\omega$ instead of a constant one. In each new iteration, $\omega$ is calculated as follows:
\[
  \omega_{t+1} = \Omega_{top} - \left\lbrace  \left(\frac{I_t}{I}\right) \times \left(\Omega_{top} - \Omega_{bottom}\right) \right\rbrace,
\]
where $I_t$ is the current iteration number at time $t$; $\Omega_{top}$ and $\Omega_{bottom}$ are constants defined as the value of $\omega$ when $I_t = 0$ and $I_t = I$ respectively. Figure (\ref{fig:2_a_map_improved}) shows the results obtained when this change is applied to the algorithm. The parameter values were the same as in the previous section (excluding the introduction of $\Omega_{top}$ and $\Omega_{bottom}$). The area where the algorithm performs well has increased in size.
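A minimal sketch of this schedule in Python, assuming the iteration counter and constants are plain numbers (the function and argument names are illustrative):

```python
def omega_at(i_t, i_max, omega_top=0.729, omega_bottom=0.1):
    """Linearly interpolate omega from omega_top (at i_t = 0)
    down to omega_bottom (at i_t = i_max)."""
    return omega_top - (i_t / i_max) * (omega_top - omega_bottom)
```

Early iterations therefore use a large $\omega$ (exploration) while late iterations use a small one (exploitation).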

\begin{figure} [ht]
 \centering
  \fbox{
    \includegraphics[scale=0.50]{figures/2_a_map(2).png}
  }
 \caption{Meta-fitness with $I=10$, $\Omega_{top}=0.729$ and $\Omega_{bottom}=0.1$.}
 \label{fig:2_a_map_improved}
\end{figure}


\section{Modified PSO vs Standard PSO}
The standard PSO is now compared with the modified version from Section (\ref{sec:pso_variant}) to determine whether there are any significant differences between the two. Note that the standard PSO is easily recovered from the proposed modification: if $\Omega = \Omega_{top} = \Omega_{bottom}$, then
\[
 \omega_{t+1} = \Omega - \left\lbrace  \left(\frac{I_t}{I}\right) \times \left(\Omega- \Omega\right) \right\rbrace =
		\Omega - \left\lbrace  \left(\frac{I_t}{I}\right) \times \left(0\right) \right\rbrace =
		\Omega.
\]
The standard PSO can therefore be seen as a special case of the modified version: when $\Omega_{top} = \Omega_{bottom}$, $\omega_{t+1} = \omega_{t}$ and $\omega$ remains constant.

To put more pressure on the algorithms, the difficulty of the optimization is increased considerably (\cite[p. 52]{Clerc2005}). Instead of the sphere function, we now examine a more complex function, the Griewank function, defined as
\[
 f(x) = \frac{1}{4000} \sum_i x_i^2 - \prod_i \cos \left( \frac{x_i}{\sqrt{i}}\right) + 1.
\]
The parameters for the following runs of the algorithm were swarm size $S=50$, dimensions $d=30$, iteration number $I = 50$, $\alpha_1 = \alpha_2 = 1.494$ and $\Omega_{top}=0.729$ in the search space $[-600,600]^{30}$. Figure (\ref{fig:omegas}) shows the meta-fitness of the PSO given variable values for $\Omega_{bottom}$.
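For reference, the Griewank function transcribes directly into Python (the function name is an assumption; `math.prod` requires Python 3.8 or later):

```python
import math

def griewank(x):
    """Griewank function; its global minimum is f(0) = 0."""
    quadratic = sum(xi * xi for xi in x) / 4000.0
    oscillation = math.prod(math.cos(xi / math.sqrt(i))
                            for i, xi in enumerate(x, start=1))
    return quadratic - oscillation + 1.0
```

The cosine product introduces many local minima, which is what makes this function harder to optimize than the sphere.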

\begin{figure} [ht]
 \centering
  \fbox{
    \subfloat[$y$ axis from 0 to 22.]{\label{fig:omegas_1}\includegraphics[scale=0.45]{figures/omegas_1.png}}
    \hspace{1mm}
    \subfloat[$y$ axis from 0 to 0.2.]{\label{fig:omegas_2}\includegraphics[scale=0.45]{figures/omegas_2.png}}
  }
 \caption{Meta-fitness given variable values of $\Omega_{bottom}$ ($x$ axis).}
 \label{fig:omegas}
\end{figure}

The meta-fitness of the standard version is $14.49$, while the meta-fitness of the modified algorithm is close to $0$ when $\Omega_{bottom}$ is in the range $[0.1, 0.529]$. The modified version has proven to be better than the standard PSO for some values of $\Omega_{bottom}$.


\section{PSO Parameter Optimization}
The standard PSO is now extended and a total of five parameters are chosen to be optimized. The extended PSO is described as:
\begin{center}
  $\vec{v_{t+1}} = \omega_{t+1} \vec{v_{t}} + \alpha_1 \vec{r_1} [\vec{\hat{x}} - \vec{x_t}] + \alpha_2 \vec{r_2} [\vec{\hat{N}} - \vec{x_t}] +
		   \alpha_3 \vec{r_3} [\vec{\hat{G}} - \vec{x_t}],$

  $\omega_{t+1} = \Omega_{top} - \left\lbrace  \left(\frac{I_t}{I}\right) \times \left(\Omega_{top} - \Omega_{bottom}\right) \right\rbrace,$

  $x_{t+1} = x_t + v_{t+1}.$
\end{center}
The calculation of the position of the particles has not been modified. The new parameters in the velocity calculation are the best position in the neighbourhood, $\vec{\hat{N}}$, and the all-time population best, $\vec{\hat{G}}$. Section (\ref{sec:pso_variant}) describes the parameters in the calculation of $\omega_{t+1}$, and Section (\ref{sec:intro}) the remaining parameters. The five parameters selected for optimization are $\alpha_1$, $\alpha_2$, $\alpha_3$, $\Omega_{bottom}$ and the number of neighbours.
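The extended velocity update can be sketched for one particle in Python, treating positions and velocities as plain lists (the function and parameter names are assumptions):

```python
import random

def extended_velocity(x, v, x_best, n_best, g_best, omega, a1, a2, a3):
    """Extended velocity update with personal best (x_best),
    neighbourhood best (n_best) and all-time population best
    (g_best) attractors."""
    dims = len(x)
    # One random vector per attractor, each component uniform in [0, 1].
    r1 = [random.random() for _ in range(dims)]
    r2 = [random.random() for _ in range(dims)]
    r3 = [random.random() for _ in range(dims)]
    return [omega * v[d]
            + a1 * r1[d] * (x_best[d] - x[d])
            + a2 * r2[d] * (n_best[d] - x[d])
            + a3 * r3[d] * (g_best[d] - x[d])
            for d in range(dims)]
```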

The task at hand is to find the best possible values these parameters can take to minimize the Rastrigin function:
\[
 f(\vec{x}) = \sum_{i=1}^{n}\left(x_i^2 - 10\cos(2 \pi x_i) + 10 \right).
\]
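The Rastrigin function also transcribes directly into Python (the function name is an assumption):

```python
import math

def rastrigin(x):
    """Rastrigin function; global minimum f(0) = 0, with a regular
    grid of local minima that traps greedy searches."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)
```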
A meta-heuristic algorithm has been used to find the optimum parameter values. This algorithm is another PSO which uses multiple particle swarms as objective functions. In the meta PSO, each dimension corresponds to different parameters of the extended PSO. The particles in the meta PSO, therefore, represent different configurations the extended PSO can use.

\subsection{Meta PSO Parameters}
The PSO variant described in Section (\ref{sec:pso_variant}) is used for the meta PSO as it proved to be better than the standard version. The selected parameters were swarm size $S=30$, iterations $I=100$, dimensions $d=5$, $\alpha_1 = \alpha_2 = 1.494$, $\Omega_{top} = 0.729$ and $\Omega_{bottom} = 0.1$ with initial random positions evenly distributed inside a hypersphere with origin at $0.0$.

\subsubsection{Fitness Function}
The fitness of the extended PSO, given certain parameters, was originally defined as the average solution of 10 runs of the algorithm after 500 iterations, with no other restrictions. This led, however, to problems caused by the relationship between $\alpha_2$ and the number of neighbours. If the number of neighbours is $0$, $\alpha_2$ loses weight in the calculation of the velocity. The meta PSO often found solutions where $\alpha_2$ was extremely big (e.g., $628$) and the number of neighbours was $0$.

To alleviate this problem, some constraints had to be added to the fitness calculation. We generally want $\alpha_1$, $\alpha_2$ and $\alpha_3$ to be small, so big values of $\alpha$ are penalized. The neighbourhood number was defined as a fraction of the swarm size $S$, and values above $1$ are penalized. The fitness function was defined as:
\[
  \mathit{fitness} = \mathit{best\ solution} + \sum_{i=1}^{3} \left(\frac{10\alpha_i}{35}\right)^2 + \left( \lfloor \mathit{neighbours} \rfloor \right)^2.
\]
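Assuming the best solution, the three $\alpha$ values and the neighbourhood fraction are available as plain numbers, this penalized fitness can be sketched as (names are illustrative):

```python
import math

def penalized_fitness(best_solution, alphas, neighbours):
    """Penalized meta-fitness for one extended-PSO configuration.

    alphas     -- (alpha_1, alpha_2, alpha_3)
    neighbours -- neighbourhood size as a fraction of the swarm size
    """
    alpha_penalty = sum((10.0 * a / 35.0) ** 2 for a in alphas)
    neighbour_penalty = float(math.floor(neighbours)) ** 2
    return best_solution + alpha_penalty + neighbour_penalty
```

The floor keeps any neighbourhood fraction strictly below $1$ penalty-free, while larger values are penalized quadratically.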

\subsection{Best Solutions Found}
Table (\ref{table:extended_pso_fitness}) shows five different results obtained by the meta PSO algorithm. Some parameters show interesting patterns in these results. $\alpha_1$ is generally stable around $[1.1,1.8]$ and $\alpha_3$ stabilizes at $[0.3,0.7]$, but $\alpha_2$ was found in one configuration ($0.0003$) where the algorithm performed well without it. $\Omega_{bottom}$ behaves in an interesting way too. For these runs $\Omega_{top} = 0.729$; consequently, in four of the five solutions $\Omega_{bottom} > \Omega_{top}$, which causes $\omega_{t+1}$ to increase with each iteration. This behaviour suits the Rastrigin function: particles that increase their velocity escape local minima more easily.

\begin{table}[ht]
  \centering	
  \begin{tabular}{ c c c c c | c}
    $\alpha_1$ & $\alpha_2$ & $\alpha_3$ & $\Omega_{bottom}$ & Neighbours & Fitness \\ \hline
    1.4228412 & 0.19780363 & 0.41253717 &  0.73950255 & -0.31680573 & 0.030473632 \\
    1.1075769 & 0.21059033 & 0.36808549 & 0.89247923 & -0.65874238 & 0.019137416 \\
    1.8945855 & 0.00031228023 &  0.68357857 & 0.75236532 & -0.47276266 &  0.033213941 \\
    1.6961646 & 0.070782426 & 0.41389774 &  0.8089091 & -0.71536328 & 0.024933253 \\
    1.7849831 & -0.13053572 & 0.69755388  & 0.40518448 & -0.2777866 &  0.0301222 \\
  \end{tabular}
  \caption{Fitness of the extended PSO given different parameters.}
  \label{table:extended_pso_fitness}
\end{table}

\subsection{Extended PSO vs. Standard PSO}
The performance of the extended PSO using the parameter values found by the meta PSO is vastly superior to the standard PSO using recommended parameters from the literature (\cite{Clerc2005}, \cite{Pedersen2010} and \cite{Trelea2003}). Table (\ref{table:extended_vs_standard}) shows five different minimums found by each algorithm.
\begin{table}[ht]
  \centering	
  \begin{tabular}{| c c | c c |} \hline
    \multicolumn{2}{|c|}{Standard PSO} & \multicolumn{2}{|c|}{Extended PSO} \\ \hline
    Example & Minimum  & Example & Minimum \\ \hline
    1 & 9.2765752 & 1 & 0.99495906 \\
    2 & 11.124753 & 2 &  0.016391875 \\
    3 & 13.331674 &  3 &  2.1316282e-14 \\
    4 & 7.7483331 & 4 & 0.99495906 \\
    5 & 14.936362 & 5  & 1.9899181 \\ \hline
  \end{tabular}
  \caption{Example minimums found by the standard PSO and the extended PSO.}
  \label{table:extended_vs_standard}
\end{table}
This shows that the meta PSO was successful in finding parameter values that work well for the Rastrigin function.


\section{Discussion}
Particle swarms have the characteristic of working with few parameters. In comparison with other heuristic algorithms this is an advantage, though the parameters are more sensitive to small variations. Section (\ref{sec:impac_of_alphas}) explored the behaviour of the algorithm on the sphere function. The algorithm performed as expected, and the parameters found included the values recommended in the literature. The recommended parameters, in this case, did not converge fast enough.

The convergence problem was identified in Section (\ref{sec:pso_shortcomings}) as a bias the standard PSO has to explore the axes more often than normal. Spears et al. (\cite{Spears2010}) identified that the problem lies in the way random vectors add stochasticity to particle swarms. The obvious solution would be to alter how these random vectors modify the velocity. Instead, a different approach was used, based on observations from Figure (\ref{fig:1_omegas}): the behaviour of $\omega$ allowed the algorithm to converge as the number of iterations increased.

The proposed modification had a profound impact on the sphere and Griewank functions, but it behaved differently than expected in the meta PSO. The best solutions found by the meta PSO often involved having $\Omega_{bottom} > \Omega_{top}$, which is counter-intuitive to what the algorithm is supposed to do. Regardless, the meta PSO was able to find parameter values that worked well to minimize the Rastrigin function.

With more time available, more parameters could have been optimized. I am particularly interested in the relationship between $\Omega_{top}$ and $\Omega_{bottom}$, the number of iterations required to converge, and the effect of the limits imposed on the velocity and position of the particles. Due to computational requirements and time constraints, some results might not generalize well to larger scales; confirming this would require a highly parallel implementation of the PSO algorithm and verification of the correctness of the source code.


\section{Conclusion}
The PSO algorithm performs well in simple environments such as the sphere function. In complex environments, tuning the parameters is a necessity. The parameters recommended in the literature scale well to different problems, but they do not guarantee success. Meta-heuristic algorithms are a good and easy way to find good parameters for specific problems. Nonetheless, these algorithms require significant processing time and memory to yield appropriate results.

Particle swarms are simple by nature and easily modified to fit different problems. These modifications can be applied to the calculation of the velocity or position of the particles, to the fitness function, or to the relationships between particles. These characteristics make particle swarms viable heuristic algorithms that can be implemented in just a few hours.


%Bibliography
\pagebreak
\bibliographystyle{plain}
\bibliography{library}

\end{document}
