\section{Results}
\label{results}

We carried out the experiments described in section \ref{implementation}; however, we report only on the most informative plots for the different graphs. Our first interest in the data was in studying how the interaction between police and drivers evolved. To illustrate this, we used the number of drivers who received tickets in an iteration as a metric, and analyzed it in our simplest graph: the parallelogram graph with one driver and one police agent. The results are presented in figure \ref{fig:pargraph}. The experiment involved $1000$ runs, but we report only on the first $300$ iterations in order to highlight the ticket trends. For the Q-learning driver, the total number of tickets received increases monotonically as the number of iterations increases. The EWA-learning driver exhibits a different behavior: it receives tickets until it decides not to speed any longer.

\begin{figure}[h!]
  \centering
      \includegraphics[width=0.525\textwidth]{figures/DIA1D2C300_NEWT1DCumul.jpg}
  \caption{Tickets drivers received per iteration when we use the parallelogram graph. }
  \label{fig:pargraph}
\end{figure}

We designed the Q-learning driver to learn about the presence of police after a number of iterations. Due to this characteristic, a number of drivers inevitably get ticketed before the knowledge of adversary presence converges. After every driver knows of police presence on a road, we designed the driver agents such that the knowledge of police at a node decays with the number of iterations. This allows Q-learning drivers to engage in riskier behavior, and the trend is apparent in figure \ref{fig:pargraph}: after short transient periods, drivers try to go through the road again, and if police are there, they get ticketed again.
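The ticket-then-decay cycle described above can be sketched as follows. This is an illustrative reconstruction, not our actual implementation: the class name \texttt{QDriver}, the belief dictionary, and the constants \texttt{alpha} and \texttt{decay} are all assumed for the example.

```python
class QDriver:
    """Hypothetical sketch of a driver whose knowledge of police
    presence grows when it is ticketed and decays between iterations."""

    def __init__(self, alpha=0.1, decay=0.99):
        self.alpha = alpha        # update rate toward observed police presence
        self.decay = decay        # per-iteration decay of police knowledge
        self.police_belief = {}   # edge -> believed probability of police

    def observe_ticket(self, edge):
        # Getting ticketed on an edge pushes the belief for that edge toward 1.
        b = self.police_belief.get(edge, 0.0)
        self.police_belief[edge] = b + self.alpha * (1.0 - b)

    def end_iteration(self):
        # Knowledge decays every iteration, so drivers eventually
        # risk speeding again on previously policed edges.
        for edge in self.police_belief:
            self.police_belief[edge] *= self.decay
```

With a small \texttt{decay}, the belief fades quickly and the driver re-enters the road sooner, which reproduces the periodic ticketing visible in the Q-learning curve.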

On the other hand, the EWA learner explores every profitable path that BFS returns. During exploration, police agents take advantage of the knowledge they learn within an iteration to ticket drivers; this shows up directly as the sharp increase in the EWA curve. In our implementation of EWA there is no delay in the convergence of driver knowledge, as the police-presence belief distribution is updated at every time step; this explains the quick convergence to a steady state. However, we observed that after a small number of iterations (around $50$ in figure \ref{fig:pargraph}), EWA-learning drivers choose not to speed on a road even after police leave it. Our intuition is that this conservative strategy is due to the way we modelled the driver experience. Future work on the modelling of driver experience should mitigate this shortcoming.
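For readers unfamiliar with experience-weighted attraction, a minimal single-step update is sketched below. The parameter names (\texttt{phi} for attraction decay, \texttt{rho} for experience decay, \texttt{delta} for the weight on forgone payoffs) follow the standard EWA formulation; the specific values and the strategy labels are assumptions for illustration, not the parameters used in our experiments.

```python
def ewa_update(attractions, experience, chosen, payoffs,
               phi=0.9, rho=0.9, delta=0.5):
    """One experience-weighted attraction step.

    attractions: dict strategy -> current attraction
    experience:  scalar experience weight N(t-1)
    chosen:      the strategy actually played this step
    payoffs:     dict strategy -> payoff it earned (or would have earned)
    """
    new_experience = rho * experience + 1.0
    updated = {}
    for strategy, attraction in attractions.items():
        # The chosen strategy gets full payoff weight; forgone
        # strategies are discounted by delta.
        weight = 1.0 if strategy == chosen else delta
        updated[strategy] = (phi * experience * attraction
                             + weight * payoffs[strategy]) / new_experience
    return updated, new_experience
```

Because forgone payoffs enter the update at every step, a single costly ticket depresses the attraction of speeding immediately, which is consistent with the quick steady state and the persistent reluctance to speed noted above.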

In addition to investigating the number of drivers caught, we also report on the cost incurred by drivers when police are present on their routes towards the destination node. To test this metric, we chose the diamond-like structure with three driver agents and two police agents, for two main reasons. First, the non-speeding edges are very heavy. Second, the alternatives to the optimal path are very heavy, even when the driver decides to speed on them. This rigidity lets us see whether a driver prefers to speed on a more expensive road or simply slow down on the optimal path.
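The trade-off a driver faces in this graph can be made concrete with a small cost comparison. The function below is purely illustrative: the weights, the ticket fine, and the assumption that expected cost is travel weight plus the belief-weighted fine are ours for the example, not quantities taken from the experiments.

```python
def expected_cost(speed_weight, slow_weight, p_police, fine):
    """Compare speeding (fast but risks a fine with probability
    p_police) against slowing down (safe but heavy edge weight).
    Returns the cheaper expected cost and the chosen action."""
    speed_cost = speed_weight + p_police * fine
    if speed_cost < slow_weight:
        return speed_cost, "speed"
    return slow_weight, "slow"
```

When the believed probability of police is high, the safe heavy edge wins even though its nominal weight is larger; when the belief decays, speeding becomes attractive again.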

In figure \ref{fig:diamgraph}, the baseline curve is centered around $3.5$. This baseline is the average cost along the optimal path, computed as $\frac{\sum \text{path weights}}{\text{number of driver agents}}$, and is measured when there are no police agents in the city; we observe that in this case every driver decides to speed. The EWA-learning driver incurs higher costs at the beginning than the Q-learning driver (see figure \ref{fig:diamgraph}), mainly because of the path exploration that occurs during the earlier iterations of EWA. However, as the number of iterations increases and the agents become more conservative, they reduce their costs, even at the expense of a more attractive route that would require them to speed. The Q-learning agents, on the other hand, explore paths more moderately: although the instability in their costs decreases with time, they continue to explore paths to see whether more beneficial ones are available.

\begin{figure}[h!]
  \centering
      \includegraphics[width=0.525\textwidth]{figures/ERN3D2C1000_NEWAverageCost3D.jpg}
  \caption{Aggregate path cost per iteration using the diamond-like graph. }
  \label{fig:diamgraph}
\end{figure}

For the sake of completeness, we mention other interesting results of experiments performed on the Q-learning algorithm; due to time constraints, we could not extend the same experiments to the EWA algorithm. In figure \ref{fig:fridgraph}, we observe how the average cost for the agents oscillates as they get ticketed, slow down, and attempt to speed again; this behavior is consistent with our previous observations. Note that at the beginning the average cost ranges from $15$ to $80$ time units, but as the simulation progresses this margin narrows until it centers around $20$ units. This indicates that, after a high number of iterations (around $500$), the drivers have learned how the police agents act, have adapted to their behavior, and have converged around a low cost value. This behavior is bad for the police agents, since their rate of issuing tickets decreases at the same rate as the drivers' costs; this decrease can be seen in figure \ref{fig:gridgraph}. It indicates that when agents have enough options, as measured by the number of routes to their destinations, they eventually learn to take a route that, although nominally more costly than the optimal route, is cheaper in expectation given that they would otherwise still have to deal with police agents.

\begin{figure}[h!]
  \centering
      \includegraphics[width=0.525\textwidth]{figures/6D12C1000_CummulativeCost6D.jpg}
  \caption{Average agent path costs per iteration using the grid graph. }
  \label{fig:fridgraph}
\end{figure}

\begin{figure}[h!]
  \centering
      \includegraphics[width=0.525\textwidth]{figures/gridgraphtotaltickets.jpg}
  \caption{Sum of tickets given to drivers per iteration using the grid graph. }
  \label{fig:gridgraph}
\end{figure}