\chapter{Initial Approach}
\label{chap:Initial Approach}

\section{Environment}
\label{Environment}

 The environment is composed of variable and fixed components. The fixed components are the surface, the sky and the nest. The variable components are the ants, the food items and the obstacles. The simulation starts with one nest that can be destroyed and repaired; however, in this simulation the colony can neither build other nests nor enlarge its nest. The obstacles are regarded as building materials that can be used to repair the nest. Before the simulation starts, the user is asked to enter the number of ants, obstacles and food sources, and to select a mode (see below). The world is then generated randomly based on this input. The numbers of ants, obstacles and food sources are limited to certain ranges to avoid performance issues in the engine, so that the simulation runs smoothly.
\\

The user can choose between two modes. The first mode assumes that the ants have no information about the surrounding environment and need to explore it and create a map. The second mode skips this step and assumes that the ants have already discovered the map and are ready to do other jobs. Although the environment is created in advance, the user can still manipulate it: while the simulation is running, the user can insert additional food sources, obstacles, ants, a user-controlled ant and enemy ants at randomly generated coordinates. The environment is 3D; however, the ants' motion is restricted to 2D coordinates.

\section{Ant Agent}
\label{sec:Ant Agent}
In this thesis, I chose not to use an existing framework. Instead, I created a class called the ant behavioral component, based on three AI components. First, I implemented the basic functions, explained later, such as avoiding an obstacle. Second, I created a finite state machine that switches from one state to another based on both environmental effects and objective results. Finally, I created a behavioral priority list of actions that may cause the ant to either change its objective or switch between states. Using a behavior-based approach gives me a methodology for building agents systematically while maintaining overall efficiency, resolving one intelligent behavior at a time \cite{one}. This helps in building an agent that appears very sophisticated, yet still uses a finite state machine as its control system \cite{one}.

\subsection{Basic Functions}
\label{Basic Functions}

\subsubsection{Locomotion}
\label{Locomotion}
This is the low-level translation function responsible for moving the agent one unit. It takes three parameters: the object's angle of rotation, the object's bounding box, and one unit of transfer. If the angle of rotation is $b$ and one unit of transfer is $r$, then each point of the object is translated by $r\sin{b}$ in the x direction and $r\cos{b}$ in the z direction.
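The translation above can be sketched in a few lines of Python. This is a minimal illustration, not the simulation's actual code; the function name and tuple representation are my own.

```python
import math

def locomote(position, angle_deg, unit=1.0):
    """Move a point one unit of transfer along the heading angle_deg.

    Illustrative sketch of the locomotion function: with rotation
    angle b and unit of transfer r, the point moves r*sin(b) along x
    and r*cos(b) along z; the y coordinate is unchanged.
    """
    x, y, z = position
    b = math.radians(angle_deg)
    return (x + unit * math.sin(b), y, z + unit * math.cos(b))

# Facing 0 degrees moves straight along +z; 90 degrees moves along +x.
print(locomote((0.0, 0.0, 0.0), 0.0))  # (0.0, 0.0, 1.0)
print(locomote((0.0, 0.0, 0.0), 90.0))
```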
\subsubsection{Steering}
\label{Steering}
This function is responsible for changing the agent's heading angle. Steering can be calculated in two ways: with a trajectory system or with a binary system. In the trajectory system, the agent changes the angle per unit of transfer; in other words, it changes the angle while moving. The binary system, on the other hand, never changes the angle while moving: the agent either rotates or moves, alternating between the two. The following figure shows the difference between the two systems.

\begin{figure}[htp]
\begin{center}

\includegraphics[width=0.7\textwidth]{trajectory_binary.png} 
\caption{Steering systems}
\label{image_2}
\end{center}
\end{figure}

 In this thesis, I used the binary system, since it requires fewer calculations and, in complex situations, produces the same curved look as the trajectory system. Combining the steering system and the locomotion system yields a motion system that works only in the positive-x, positive-z quadrant. Therefore, we use the ASTC rule (All, Sin, Tan, Cos) to calculate the required angles for vectors in all quadrants, in order to obtain a full motion system. The ASTC rule is shown in the following figure.

\begin{figure}[htp]
\begin{center}

\includegraphics[width=0.5\textwidth]{ASTC.png} 
\caption{ASTC rule}
\label{ASTC rule}
\end{center}
\end{figure}
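As a rough illustration of extending a first-quadrant angle to all four quadrants, consider the following sketch. The function name and the axis convention (heading measured clockwise from the positive z axis, to match the locomotion function) are my assumptions.

```python
import math

def heading_angle(dx, dz):
    """Heading in degrees (0..360) for a direction vector (dx, dz).

    Sketch of a quadrant-by-quadrant angle computation in the spirit
    of the ASTC rule: compute the first-quadrant reference angle from
    |dx| and |dz|, then adjust it for the actual quadrant.
    """
    if dx == 0 and dz == 0:
        return 0.0
    base = math.degrees(math.atan2(abs(dx), abs(dz)))  # angle from +z axis
    if dx >= 0 and dz >= 0:
        return base                # first quadrant: 0..90 degrees
    if dx >= 0 and dz < 0:
        return 180.0 - base        # second quadrant: 90..180 degrees
    if dx < 0 and dz < 0:
        return 180.0 + base        # third quadrant: 180..270 degrees
    return 360.0 - base            # fourth quadrant: 270..360 degrees
```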
 
 
\subsubsection{Target Allocation}
\label{Target Allocation} This function is responsible for establishing the agent's target. It calculates the distance between the agent and the target and operates in the following way. First, it takes the target coordinates as input, then calculates the required vector using the ASTC rule and sends it to the motion function. The motion function moves one unit along the given vector, and the target allocation function is then called again. It recalculates the new vector, and so on, until the target is reached. Reaching the target is validated using a flag: on each cycle of the target allocation function, the flag is raised if the distance between the target and the agent falls within a certain range.
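The recompute-move-recheck cycle can be sketched as follows. The names, the 2D simplification and the specific ranges are mine, not the simulation's.

```python
import math

def move_to_target(position, target, unit=1.0, reach_range=0.5,
                   max_steps=10000):
    """Sketch of the target allocation cycle: recompute the heading
    vector each step, move one unit along it, and raise a flag once
    the agent is within reach_range of the target."""
    x, z = position
    tx, tz = target
    reached = False
    for _ in range(max_steps):
        dx, dz = tx - x, tz - z
        if math.hypot(dx, dz) <= reach_range:
            reached = True        # flag raised: target considered reached
            break
        b = math.atan2(dx, dz)    # heading is recomputed on every cycle
        x += unit * math.sin(b)
        z += unit * math.cos(b)
    return (x, z), reached
```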

\subsubsection{Obstacle Avoidance}
\label{Obstacle Avoidance}
Obstacle avoidance is a behavior that steers a vehicle to avoid obstacles lying in its path \cite{eight}. It works as follows. The agent is engulfed by a circle one translation unit larger than its bounding box. After each one-unit translation towards the target, this circle is checked for intersection with an obstacle. If an intersection occurs, the magnitude of intersection between the circle and the obstacle is measured; call it M. Translation then stops and a one-unit rotation is performed. After rotating, the agent checks whether translating at the new angle would increase or decrease M. If M would increase or stay the same, the agent rotates by one unit again and rechecks. If M would decrease, the agent starts translating, increments a counter and rechecks. If M then increases, the counter is reset to zero and the agent rotates by one unit and rechecks with the new angle. Otherwise, once the counter reaches a certain value, the agent calls the target allocation function and completes its motion. See the following algorithm.

\begin{algorithm}{}
\caption{Obstacle Avoidance Function}
\label{alg:Obstacle Avoidance Function}
\begin{algorithmic}[1]
\State $M$ is the distance between the bounding circle and the nearest obstacle
\State $T$ is a translation of 1 unit
\State boolean fixer and flag
\State int counter and value
\State \textbf{Obstacle Avoidance Function}
\State fixer = true, flag = true
    \While{fixer}
    \If{$M \leq T$} 
    \State rotate by 1 unit, flag=false, counter=0 \EndIf
	\If{$M > T$}
		\If{flag}
	\State	fixer=false 
		 \Else 
		\State counter=counter+1
		 \EndIf	
	 \EndIf	
	\If{$counter \geq value$}
	\State flag = true, fixer=false, counter=0, Target Allocation( )
	 \EndIf    
    \EndWhile
\end{algorithmic}
\end{algorithm}
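Algorithm \ref{alg:Obstacle Avoidance Function} can be rendered as the following Python sketch. The callables \texttt{clearance}, \texttt{rotate} and \texttt{translate} are hypothetical stand-ins for the engine's calls, with \texttt{clearance()} returning $M$.

```python
def avoid_obstacle(clearance, rotate, translate, unit=1.0, value=5):
    """Sketch of the obstacle avoidance loop. clearance() returns M,
    the distance between the agent's bounding circle and the nearest
    obstacle; rotate() turns one unit; translate() moves one unit.
    All three are assumed stand-ins for the simulation engine."""
    fixer, flag, counter = True, True, 0
    while fixer:
        if clearance() <= unit:     # obstacle within one step: turn away
            rotate()
            flag, counter = False, 0
        elif flag:
            fixer = False           # path was clear from the very start
        else:
            translate()             # clearing the obstacle: keep moving
            counter += 1
        if counter >= value:        # clear for `value` steps in a row:
            flag, fixer, counter = True, False, 0  # back to target allocation
```

For example, with two blocked readings followed by clear ones, the agent rotates twice and then translates until the counter fills.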

The following figure shows an example of an agent avoiding an obstacle using this algorithm.
\clearpage

\begin{figure}[htp]
\begin{center}

\includegraphics[width=0.9\textwidth]{obstacle.png} 
\caption{Smart agent avoids an obstacle}
\label{Smart agent avoids an obstacle}
\end{center}
\end{figure}


\subsubsection{Deadlock Avoidance}
\label{Deadlock Avoidance}
In some cases, a given target might be impossible to reach, for example a point to investigate that lies inside an obstacle. The agent would loop forever between avoiding that obstacle and targeting the point inside it. To resolve this, my initial attempt at deadlock avoidance worked as follows. Define a time interval, for example five seconds, and a distance interval, for example two meters. Every five seconds, I check the distance between the ant's position at the start of the interval and its position at the end. If this distance is less than two meters, a deadlock is declared; otherwise, it is not. This attempt had three different problems.
\\

\textbf{Problem 1: }The agent might successfully reach a target and return to its initial point within the deadlock time interval. The distance between the initial and final points is then zero, so a deadlock is reported although there is none.
\\

\textbf{Problem 2: }The agent might follow a curved path around obstacles to reach its target within the deadlock time interval. The distance between the initial and final points is then less than the distance interval, so a deadlock is reported although there is none.
\\

\textbf{Problem 3: }The agent might have a target inside a long obstacle. The agent keeps moving around the obstacle, and the distance between each initial and final position in the time interval is always greater than the distance interval. This leads to an infinite loop. See the following figure.

\clearpage
\begin{figure}[htp]
\begin{center}

\includegraphics[width=0.8\textwidth]{deadlock.png} 
\caption{Cases of deadlock function failure}
\label{Cases of deadlock function failure}
\end{center}
\end{figure}

\textbf{Problem 1 solution: }The timer needs to be extended and the initial point needs to be updated. The solution is to reset the timer and move the initial point to the reached target whenever the target changes.
\\

\textbf{Problem 2 solution: }Here the target has not yet been reached, so the timer and initial point cannot be reset. The solution is to extend the timer by up to four times and to reset it upon each angle change caused by hitting an obstacle. Increasing the timer decreases the probability of such events occurring; if one does occur, resetting the timer upon the change of angle resolves it.
\\

\textbf{Problem 3 solution: }The problem is looping around an obstacle with an unreachable target. The solution is to save the last set of points passed and compare them with the current location: if the same point is passed more than four times, a deadlock is declared. Once a deadlock is verified, the agent records this point as unreachable. The number four was chosen because a point can legitimately be passed at most three times: while going, while coming back, and while redirecting.
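The Problem 3 solution can be sketched as a small visit counter. The grid snapping via \texttt{cell} is my own assumption, added so that nearby positions compare equal.

```python
from collections import Counter

def make_deadlock_detector(threshold=4, cell=1.0):
    """Return a check(x, z) function implementing the Problem 3 idea:
    if the same (grid-snapped) point is passed more than `threshold`
    times, report a deadlock."""
    visits = Counter()
    def check(x, z):
        key = (round(x / cell), round(z / cell))
        visits[key] += 1
        return visits[key] > threshold
    return check

check = make_deadlock_detector()
results = [check(10.0, 3.0) for _ in range(5)]
# The first four passes are tolerated; the fifth signals a deadlock.
print(results)  # [False, False, False, False, True]
```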

\subsubsection{Maze Solver}
\label{Maze Solver}Sometimes the generated world contains simple mazes, and the agent must be smart enough to solve them. Regardless of the algorithm used, there are three possible scenarios: the whole maze is known before entering it and only computation is needed; the agent has eyes that can explore the surroundings while moving inside the maze; or neither holds, and the agent learns while walking \cite{twelve}.
\\

Our case is the third type: the agent discovers the maze while walking. One of the best-known algorithms, which solves complex mazes with connected walls and simple mazes with unconnected walls, is the wall-follower algorithm, also known as the left-hand rule or the right-hand rule. The agent follows either the left or the right wall as a guide through the maze. When the agent reaches an opening in the wall, it stops, turns towards the wall, and then moves forward until it reaches a wall again. This guarantees either finding an exit to the maze or, in the worst case, leaving through the entrance if there is no exit. The figure below illustrates the worst case. It is almost impossible for the engine to create a complex maze, so a simple maze-solving algorithm was considered sufficient.
\begin{figure}[htp]
\begin{center}

\includegraphics[width=1.0\textwidth]{maze.png} 
\caption{Maze Solver Example}
\label{Maze Solver Example}
\end{center}
\end{figure}
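A minimal grid-based sketch of the left-hand wall follower is shown below. The grid representation and cell coordinates are illustrative only; the simulation itself works in continuous space.

```python
# 1 = wall, 0 = open. Headings: 0 = N, 1 = E, 2 = S, 3 = W.
MOVES = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def wall_follow(maze, start, heading, exit_cell, max_steps=200):
    """Left-hand rule: at each cell, prefer turning left, then going
    straight, then right, then back. Returns the path taken, or None
    if the exit was not found within max_steps."""
    r, c = start
    path = [start]
    for _ in range(max_steps):
        if (r, c) == exit_cell:
            return path
        for turn in (-1, 0, 1, 2):   # left, straight, right, back
            h = (heading + turn) % 4
            dr, dc = MOVES[h]
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] == 0):
                r, c, heading = nr, nc, h
                path.append((r, c))
                break
    return None

maze = [
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 1, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]
path = wall_follow(maze, start=(1, 0), heading=1, exit_cell=(3, 1))
```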


\subsection{State Diagram}
\label{State Diagram}
The following figure represents the state diagram of an agent.
\begin{figure}[htp]
\begin{center}

\includegraphics[width=0.76\textwidth]{state_diagram_00.png} 
\caption{State Machine of an Agent}
\label{State Machine of an Agent}
\end{center}
\end{figure}

The agent starts in the nest in state 0. It then transitions to state 1, where a job is allocated. If any condition blocks the mission, e.g. the nest's door is blocked, the map is not revealed or a job requirement is not available, the agent returns to state 0. If the job can be done, the agent leaves the nest, locates the target and transitions to state 2. State 2 is responsible for movement inside the environment: it moves agents to their targets. Whether the mission fails or succeeds, once the agent reaches a result it transitions back to state 0, which means returning to the nest. In case of an enemy invasion of the nest, the agent, whether in state 0, 1 or 2, transitions directly to state 3, the defending state. In this state, the agent takes on its defending mission and then moves to its targets again using state 2.
\\

To summarize the state machine: when there is an attack, the agent is transferred to state 3 regardless of its current state. When the agent wants to go to the nest or stay there, for whatever reason, it is transferred to state 0. All motion is done in state 2. State 1 is used for planning normal jobs, while state 3 is used for planning defending jobs.
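The four states and the transitions summarized above can be sketched as a small transition function. The flag names \texttt{attack}, \texttt{job\_blocked} and \texttt{result\_reached} are my stand-ins for the simulation's actual conditions.

```python
from enum import Enum

class State(Enum):
    IN_NEST = 0    # state 0: in the nest
    PLAN_JOB = 1   # state 1: allocate a normal job
    MOVE = 2       # state 2: move through the environment
    DEFEND = 3     # state 3: plan a defending job

def next_state(state, *, attack=False, job_blocked=False,
               result_reached=False):
    """Sketch of the transitions described in the text."""
    if attack:
        return State.DEFEND      # any state -> 3 on an enemy invasion
    if state is State.IN_NEST:
        return State.PLAN_JOB
    if state is State.PLAN_JOB:
        return State.IN_NEST if job_blocked else State.MOVE
    if state is State.MOVE:
        return State.IN_NEST if result_reached else State.MOVE
    if state is State.DEFEND:
        return State.MOVE        # take the defending mission, then move
    return state
```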

\subsection{Behavioral Priority List}
\label{Behavioral Priority List}
 The state machine used by the agent is supported by a behavioral priority list. This extends the complexity of the simple state machine by adding more reasons to switch between states. Our priority list follows the same logic as real-world ants. The following figure shows an example of the priority list system.

\begin{figure}[htp]
\begin{center}

\includegraphics[width=0.7\textwidth]{priorities.png} 
\caption{Priorities System}
\label{Priorities System}
\end{center}
\end{figure}
The items covered in purple are the priority list items: protection, safety and resources. Defense against enemy assaults has priority number one (protection), keeping a fortified nest has priority number two (safety), and collecting resources has priority number three (resources). The pink boxes are mandatory requirements that must be met before an element of the priority list can be performed. For example, resources has both map updates and strategies as mandatory components: agents cannot gather resources without information about the environment (map update) and without knowing which paths to take and how many agents to send (strategies). Protection, on the other hand, has only strategies as a mandatory component: agents cannot defend the nest without knowing which strategies to apply, but they do not need information about the map. The lists attached with red arrows to elements of the priority list are parameters affecting each item. For protection, agents need to check the house alarm, other agents' messages and information about the enemy in order to defend the nest. For safety, agents need to check the damage caused to the nest, the locations of repair materials and the available workers in order to fortify it. For resources, agents need to determine the type of resource, its position and which available workers can go there.
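The priority list, its mandatory requirements and a simple job picker could be sketched as follows. The requirement sets are read from the figure as far as possible (safety's mandatory components are an assumption), and all names are illustrative.

```python
# Sketch of the behavioral priority list: each item carries mandatory
# requirements (all must hold before the job can start) and parameters
# the agents consult while performing it. Safety's "requires" set is an
# assumption; the rest follows the examples in the text.
PRIORITIES = [
    {"name": "protection", "requires": {"strategies"},
     "checks": ["house alarm", "agent messages", "enemy information"]},
    {"name": "safety", "requires": {"map update", "strategies"},
     "checks": ["nest damage", "repair material locations",
                "available workers"]},
    {"name": "resources", "requires": {"map update", "strategies"},
     "checks": ["resource type", "position", "available workers"]},
]

def pick_job(available):
    """Return the highest-priority item whose mandatory requirements
    are all satisfied by `available`, or None if nothing can start."""
    for item in PRIORITIES:
        if item["requires"] <= available:
            return item["name"]
    return None
```

Note that protection wins whenever its requirements are met, matching the priority ordering in the figure: lower-priority jobs only run when higher ones cannot.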