\section{Randomized Reflex Agent}
In the randomized reflex agent, several different actions may be taken, sampled from a probability distribution over actions; we use a multinomial distribution for each situation. To set the parameters of these multinomial distributions, we use heuristics derived from an optimal path, shown as the dashed line in figure \ref{fig:celltypes}. For an $n \times m$ map, the optimal path contains $(n-1) \times m$ forward actions and $\lfloor (m-2)/2 \rfloor$ left and right turns when neither a wall nor dirt has been detected. When facing a wall, the ratio of turning right to turning left is $2:1$. Based on these observations, we design the if-then rules and multinomial distributions shown in table \ref{tab:random}. The agent chooses an action according to a random draw from the multinomial distribution of the matching situation.
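As a concrete instance of these counts (the map size here is purely illustrative): on a $6 \times 4$ map, the heuristic yields $(6-1) \times 4 = 20$ forward actions and $\lfloor (4-2)/2 \rfloor = 1$ left and right turns along the optimal path.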

\begin{table}[h]
    \centering
    \begin{tabular}{|l|l|l|c|c|c|c|c|}
        \hline
        WALL & DIRT & HOME & FORWARD & RIGHT & LEFT & SUCK & OFF  \\ \hline
        1    & 0    & 1    & 0.0     & 0.65  & 0.33 & 0.0  & 0.02 \\ 
        1    & 1    & 0    & 0.0     & 0.0   & 0.0  & 1.0  & 0.0  \\ 
        1    & 1    & 1    & 0.0     & 0.0   & 0.0  & 1.0  & 0.0  \\ 
        1    & 0    & 0    & 0.0     & 0.67  & 0.33 & 0.0  & 0.0  \\ 
        0    & 0    & 1    & 0.9     & 0.05  & 0.05 & 0.0  & 0.0  \\ 
        0    & 1    & 0    & 0.0     & 0.0   & 0.0  & 1.0  & 0.0  \\ 
        0    & 1    & 1    & 0.0     & 0.0   & 0.0  & 1.0  & 0.0  \\ 
        0    & 0    & 0    & 0.8     & 0.1   & 0.1  & 0.0  & 0.0  \\
        \hline
    \end{tabular}
    \caption{Parameters of multinomial distributions for each situation.}\label{tab:random}
\end{table}
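A minimal sketch of this action-selection rule, assuming a simple percept triple (WALL, DIRT, HOME) and the probabilities of table \ref{tab:random}; the function and table names are ours, not part of the original implementation:

```python
import random

# Each (WALL, DIRT, HOME) percept triple maps to a probability vector over
# the actions (FORWARD, RIGHT, LEFT, SUCK, OFF), mirroring the table above.
ACTIONS = ["FORWARD", "RIGHT", "LEFT", "SUCK", "OFF"]

POLICY = {
    (1, 0, 1): [0.0, 0.65, 0.33, 0.0, 0.02],
    (1, 1, 0): [0.0, 0.0, 0.0, 1.0, 0.0],
    (1, 1, 1): [0.0, 0.0, 0.0, 1.0, 0.0],
    (1, 0, 0): [0.0, 0.67, 0.33, 0.0, 0.0],
    (0, 0, 1): [0.9, 0.05, 0.05, 0.0, 0.0],
    (0, 1, 0): [0.0, 0.0, 0.0, 1.0, 0.0],
    (0, 1, 1): [0.0, 0.0, 0.0, 1.0, 0.0],
    (0, 0, 0): [0.8, 0.1, 0.1, 0.0, 0.0],
}

def choose_action(wall, dirt, home, rng=random):
    """Draw one action from the multinomial distribution for this percept."""
    weights = POLICY[(wall, dirt, home)]
    return rng.choices(ACTIONS, weights=weights, k=1)[0]
```

Note that any row containing dirt is deterministic (SUCK with probability $1$), so randomness only affects navigation.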

We prefer the agent to stop at home once it has cleaned as many cells as possible. Note that in practice, we did not let the agent turn off before reaching the maximum number of permitted actions\footnote{We cap the number of actions an agent can take so that the randomized agent cannot run forever.}. Our design favors cleaning as many dirty cells as possible over returning home, since the actions are randomized and an unlucky agent may never reach home. Thus, for a dirty cell, our greedy strategy has the agent clean that cell first. When neither a wall nor dirt is detected, the agent is inclined to keep moving forward, but with a probability lower than the optimal path would suggest; we found that slightly decreasing the probability of going forward achieves better performance.
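The action cap and the no-early-OFF rule can be sketched as the following episode loop. The environment interface (\texttt{percept()}, \texttt{apply()}) and the \texttt{agent} callable are hypothetical stand-ins, not the report's actual code:

```python
def run_episode(env, agent, max_actions=1000):
    """Run one capped episode; OFF draws before the cap are ignored.

    `agent(wall, dirt, home)` returns an action name; `env` exposes
    percept() -> (wall, dirt, home) and apply(action). Only the cap logic
    comes from the text; the interface is an assumption.
    """
    for _ in range(max_actions):
        wall, dirt, home = env.percept()
        action = agent(wall, dirt, home)
        if action == "OFF":
            continue  # never power off before exhausting the action budget
        env.apply(action)

class CountingEnv:
    """Minimal stand-in environment that just counts applied actions."""
    def __init__(self):
        self.applied = 0
    def percept(self):
        return (0, 0, 0)  # no wall, no dirt, not at home
    def apply(self, action):
        self.applied += 1
```

With this loop, a run always consumes the full budget of percept-action cycles, and OFF only takes effect once the cap is reached.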
