\chapter{Conclusion}
\epigraph{%
Success builds character,
failure reveals it.
}{}%

This thesis presented a Potential Fields based Micromanagement AI for the RTS game StarCraft, optimized using NSGA-II. The goal when the authors started working on this thesis was to improve on the existing results of previous RTS Micromanagement bots using Potential Fields. No previous solution utilized Multi-Objective Optimization to develop PFs, so developing one would be a step forward in the domain.

In the end the experiments yielded disappointing results in terms of practical performance, and the AI did not perform better than previous bots. However, as discussed, the AI learns rapidly up to a certain point, after which its performance stagnates. This raised the question of whether the problem lies in the design or in the implementation of the Potential Fields, either of which would explain the lacking performance of the AI regardless of its weight values.

After evaluating the results of the conducted experiments, the authors revisited the possible causes of error suggested in Subsection~\ref{sub:exp2} and created two new revisions of the AI combining increased weight thresholds, removal of the Center of Group field, and an added repulsive collision avoidance field around each friendly unit. The new revisions were given a short ten generations to evolve and were then tested for improvements. Unfortunately, neither revision showed improved performance, but the outcome did strengthen the suspicion that the fault lies in the fundamental implementation of the Potential Fields, and not within the design itself.
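A repulsive collision avoidance field of the kind added in these revisions can be sketched as follows. This is a minimal illustration, not the thesis implementation: the names \texttt{weight} and \texttt{radius} are assumed parameters standing in for an evolved weight and an influence radius in map pixels.

```python
import math

def repulsive_collision_field(unit_pos, ally_positions, weight=50.0, radius=32.0):
    """Sum of repulsive potentials exerted on a unit by nearby allies.

    Illustrative sketch: 'weight' scales the repulsion and 'radius' is
    the assumed distance beyond which allies exert no force.
    """
    potential = 0.0
    for ax, ay in ally_positions:
        d = math.hypot(unit_pos[0] - ax, unit_pos[1] - ay)
        if 0.0 < d < radius:
            # Repulsion grows linearly as an ally gets closer,
            # and fades to zero at the influence radius.
            potential -= weight * (radius - d) / radius
    return potential
```

In a weighted-sum PF controller, this term would simply be added to the other field values at each candidate position, with its weight subject to the same evolutionary tuning as the rest.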


\section{Further work}
For further work it would be interesting to see a different design approach to the Potential Fields controlling the Micromanagement: fields that extend behaviour such as collision avoidance, and that take more detailed aspects of the game into the calculations, such as terrain, obstacles, and enemy unit cooldown in the case of StarCraft.
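As one hypothetical example of such a cooldown-aware field, a candidate position could score differently depending on whether the unit's weapon is ready. Everything here is an assumption for illustration; the weight names and the simple two-term structure are not taken from any existing bot.

```python
import math

def score_position(pos, enemy_pos, max_range, cooldown_frames, weights):
    """Score one candidate move position with a cooldown-aware field.

    Illustrative sketch: 'weights' is an assumed dict of evolved weights,
    'max_range' the unit's weapon range, 'cooldown_frames' its remaining
    weapon cooldown.
    """
    d = math.hypot(pos[0] - enemy_pos[0], pos[1] - enemy_pos[1])
    if cooldown_frames > 0:
        # Weapon on cooldown: reward distance from the enemy (retreat).
        return weights["retreat"] * d
    # Weapon ready: attraction peaks at maximum weapon range (kiting).
    return -weights["attack"] * abs(d - max_range)
```

A controller of this kind evaluates every reachable position each frame and moves the unit toward the highest-scoring one, so the cooldown state directly shapes hit-and-run behaviour.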

It would also be interesting to see a more thorough training procedure. Given more time, each individual could be evaluated more than twice for higher accuracy. Co-evolution is also a possibility and would probably have a great impact on the behaviour of an optimal solution. Another approach could be training the AI against an experienced human player, since the built-in StarCraft AI performs far from optimally and employs few different tactics in Micromanagement.
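Averaging over repeated evaluations is straightforward to express. A minimal sketch, assuming a hypothetical \texttt{evaluate} callback that stands in for running one StarCraft skirmish with an individual's PF weights:

```python
import random
import statistics

def mean_fitness(evaluate, individual, trials=5, seed=0):
    """Average a noisy fitness function over several game trials.

    Illustrative sketch: 'evaluate(individual, rng)' stands in for one
    simulated skirmish; repeating it reduces evaluation noise at the
    cost of 'trials' times the simulation time per individual.
    """
    rng = random.Random(seed)  # fixed seed for reproducible trial setups
    return statistics.mean(evaluate(individual, rng) for _ in range(trials))
```

The trade-off is direct: with two evaluations per individual, as in the experiments, a single lucky or unlucky skirmish can dominate the fitness estimate, whereas five or more trials smooth this out at a proportional cost in wall-clock time.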

The source code for the EMAPF bot created by \citet{sandberg2011evolutionary} is available on-line; extending it to use NSGA-II with the objectives presented in this report would provide more direct and conclusive evidence of how MOO affects performance.

The results in this report indicate that Multi-Objective Optimization is indeed a valid approach worth exploring for tuning Potential Fields in RTS games. StarCraft has proven an excellent test platform, but the methods should be applicable to almost any RTS game because of the generic nature of the genre's rules and environment. It would be interesting to see this approach attempted in a different RTS game.