\documentclass[titlepage,12pt]{article}
\usepackage{enumitem}
\setlist[itemize, 1]{leftmargin=*, labelindent=12.5mm, labelsep=6.3mm, itemsep=0\baselineskip}
\usepackage{amsmath, amsfonts, amsthm, amssymb, algorithm, graphicx}
\usepackage[noend]{algpseudocode}
\usepackage{fancyhdr, lastpage}
\usepackage{geometry}
\usepackage{setspace}
\geometry{margin=1in}
\usepackage{hyperref}
\hypersetup{linktocpage}
% Macros
\newcommand{\algorithmicbreak}{\textbf{break}}
\newcommand{\BREAK}{\State \algorithmicbreak}

\DeclareMathOperator{\BigOm}{\mathcal{O}}
\newcommand{\BigOh}[1]{\BigOm\left({#1}\right)}
\DeclareMathOperator{\BigTm}{\Theta}
\newcommand{\BigTheta}[1]{\BigTm\left({#1}\right)}
\DeclareMathOperator{\BigWm}{\Omega}
\newcommand{\BigOmega}[1]{\BigWm\left({#1}\right)}
\DeclareMathOperator{\LittleOm}{\mathrm{o}}
\newcommand{\LittleOh}[1]{\LittleOm\left({#1}\right)}
\DeclareMathOperator{\LittleWm}{\omega}
\newcommand{\LittleOmega}[1]{\LittleWm\left({#1}\right)}

\newcommand{\calP}{\mathcal{P}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\Exp}{\mathbb{E}}
\newcommand{\Q}{\mathbb{Q}}
\DeclareMathOperator{\sign}{sign}
\DeclareMathOperator{\abs}{abs}
\newcommand{\eps}{\varepsilon}
\newcommand{\zo}{\{0, 1\}}
\newcommand{\SAT}{\mathit{SAT}}
\renewcommand{\P}{\mathbf{P}}
\newcommand{\NP}{\mathbf{NP}}
\newcommand{\coNP}{\co{NP}}
\newcommand{\co}[1]{\mathbf{co#1}}
\renewcommand{\Pr}{\mathop{\mathrm{Pr}}}

\newtheorem{problem}{Problem}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{invariant}[theorem]{Invariant}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{property}[theorem]{Property}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{observation}[theorem]{Observation}

\algnewcommand\algorithmicinput{\textbf{INPUT:}}
\algnewcommand\INPUT{\item[\algorithmicinput]}
\algnewcommand\algorithmicoutput{\textbf{OUTPUT:}}
\algnewcommand\OUTPUT{\item[\algorithmicoutput]}

%replace XXX with the submission number you are given from the ISCA submission site.
\newcommand{\IWreport}{2012}

\usepackage[normalem]{ulem}

\begin{document}
\title{A \LaTeX{} template for Independent Work Reports}

\date{May 6, 2013}
\maketitle

\thispagestyle{empty}
\newpage
Honor Pledge
\newpage

\doublespacing
\begin{abstract}
abstract
\end{abstract}
\newpage
Acknowledgements
\newpage
\tableofcontents
\newpage
\section{Introduction}
\label{sec:intro}
Cluedo\textregistered, or Clue\textregistered{} in North America, is a popular murder-mystery board game originally published by Waddingtons in Leeds, England, in 1949. The premise of the game is that Mr. Boddy has been murdered in a mansion, and the players act as detectives, strategically moving around the game board, which comprises rooms and the hallways between them, in order to gather clues as to which room, suspect, and weapon were involved in the murder. From an information-theoretic standpoint, the goal of the game is to reveal as much information as possible about the hidden variables: the room, suspect, and weapon of the murder. While seemingly simple, formulating an optimal strategy for this game is nontrivial due to its inherently probabilistic and partial-knowledge properties.

Clue\textregistered{} belongs to a class of problems known as ``treasure hunt'' problems. A treasure hunt is a game in which players must traverse an environment and locate one or more {\bf treasures}. The players have {\bf sensors} that they use to obtain measurements of information, which depend on the players' locations. The goal is to obtain as many treasures as possible given constraints such as time, adverse environmental conditions, etc. It turns out that games such as Clue\textregistered{} are ideal benchmarks for testing computational intelligence theories and algorithms because they provide challenging dynamic environments with rules and objectives that are easily understood. Real-world examples of treasure hunts include landmine detection using robotic sensors, cleaning, monitoring of urban environments and manufacturing plants, and tracking endangered species.

This paper introduces a Markov-decision-process-based Clue\textregistered{} AI that models the state of the hidden variables using information entropy. The general strategy is to maximize the expected information gain, or reduction in information entropy, over a finite number of discrete time steps by choosing a sequence of rooms to visit and measurements to make in those rooms. As will be described in detail, this requires the AI to have several components: a feasible way to calculate the probability distribution of the hidden variables, a representation of the board as a connectivity graph, and a way to calculate the optimal policy for any given game state by modeling the problem as a Markov decision process.

The paper is organized as follows. Section~\ref{sec:prev} gives an overview of the previous work done on this problem. Section~\ref{sec:rules} describes the rules of Clue\textregistered{}. Section~\ref{sec:math} formulates the Clue\textregistered{} problem mathematically as a treasure hunt problem within reasonable approximations. Section~\ref{sec:prob} shows the feasibility of computing the posterior probabilities of the hidden variables exactly. Section VI describes how to represent the state of the system as a set of {\bf shuffles}, how to update this representation from observations, and how to compute the exact posterior probabilities using it. Section VII considers a ``pure'' form of the problem without distance cost and develops the optimal strategy for this modified problem using information entropy. Section VIII returns to the original problem and develops a good approximation of the optimal strategy using a Markov decision process. Section IX evaluates and analyzes the implementation of the AI against other opponents. Section X concludes our findings and describes the potential for future work.

\section{Previous Work}
\label{sec:prev}
Silvia Ferrari and Chenghui Cai have co-authored a series of papers on the problem of creating an optimal strategy for Clue\textregistered{}. In one paper, they used Bayesian network inference to maintain a probability distribution over the hidden variables. Each unknown card, either in an opponent's hand or in the secret case file, is modeled as a random variable. Each time the AI gains information about one of the hidden variables, it updates the conditional probability tables of the remaining variables. Unfortunately, performing exact inference is intractable, so a series of simplifying assumptions had to be made. In addition, the algorithm's decision-making model does not take into account the benefit-cost ratio of obtaining information, and instead takes the most immediate reward by heading toward the nearest room. Thus, the algorithm does not maximize utility over time.

In another paper, Cai and Ferrari build upon their Bayesian network approach by incorporating Q-learning, training a neural network to approximate the Q function. The improved Q function helps the Bayes net incorporate new evidence into its belief state more accurately. Their Clue\textregistered{} AI is based on strong theory, but the algorithm performs relatively poorly, winning only half the time against a 12-year-old and a 14-year-old.

In addition, Riley Siebel '11 worked on creating a Clue\textregistered{} AI for his senior thesis. In his algorithm, he attempted to combine a propositional logic agent with a partially observable Markov decision process agent. The logical agent represents the belief state of the system using constraint satisfaction, which can then be used to perform inference on the hidden variables. In order to approximate the posterior probability distribution feasibly, he incorporated particle filtering, which randomly samples the space of all possible deals of cards. Finally, he combines this with a POMDP in order to choose the action that maximizes the reward, which he defines as the reduction in information entropy of the hidden variables. Unfortunately, due to the many overly expensive operations, his algorithm is too slow to finish a single game.

Our efforts mainly build upon Siebel's ideas by modeling the problem as a Markov decision process that seeks to maximize the reduction in information entropy of the hidden variables (since entropy is ``uncertainty,'' reducing it to zero means the hidden variables are completely determined). However, we make a crucial insight that significantly reduces the running time. Note that the focus of all the previous efforts is on approximating the posterior probability distribution of the hidden variables, either through Bayesian inference, Q-learning, or constraint satisfaction. The key insight we make is that the posterior, through a clever representation, can be exactly computed in an efficient manner, without need for approximation. Using this, we are able to determine the exact probability mass distribution of all possible states and thus directly model the problem as an MDP. However, as we shall discuss at length, our algorithm is still only an approximation of the optimal strategy, and whether an optimal strategy exists remains a question to be answered.

\section{Rules of the Game}
\label{sec:rules}
The game board of Clue\textregistered{} represents Mr. Boddy's mansion, containing nine rooms where the murder could have happened: ballroom, billiard room, conservatory, dining room, hall, kitchen, library, lounge, and study. In between are the hallways that connect the rooms. The game board is shown in Figure~1. There are six suspects: Colonel Mustard, Miss Scarlet, Professor Plum, Mr. Green, Mrs. White, and Mrs. Peacock. There are six possible murder weapons: candlestick, knife, lead pipe, revolver, rope, and wrench. The game also includes a deck of twenty-one cards, each representing a room, suspect, or weapon. At the beginning of the game, one card of each type is randomly selected and secretly placed in the case file. Discovering the identity of these cards represents solving the murder, and thus is the objective of the game. The remaining cards are randomly shuffled and dealt, face down, to the players. The game typically has three to six players.

During game play, the players move tokens around the board by rolling a die. The number rolled specifies the maximum number of steps the player may take. The start position of each player depends on the specific token chosen. There are two ways for a player to enter one of the rooms: through a door connecting the room to the hallways, or through a secret passage from another room. Whenever a player enters a room, he or she has the option of making a suggestion, which consists of a triple (room, suspect, weapon) representing the player's guess of the contents of the case file. The room specified in the suggestion must match the room the suggesting player is currently in. Players may not enter the same room twice in a row.

Suggestions are the means through which a player gains information. When a suggestion is made, the other players take turns trying to refute the suggestion, called a refutation. If a refuting player has any of the three cards being suggested, the player must show one such card to the suggesting player, without revealing it to the other players. Besides the same-room requirement, there are no other restrictions on a suggestion; in particular, a player may suggest cards in his or her own hand, even though it is impossible for others to refute those cards. Refutation ends either when a player refutes by showing a card, or when none of the remaining players are able to refute. Afterwards, play proceeds to the next player. Clearly, strategically chosen suggestions are the key to victory.

When a player feels confident enough, on any turn he or she may choose to make an accusation. Similar to a suggestion, an accusation is a triple (room, suspect, weapon), but it can be made from any location on the board. After an accusation is made, the accuser checks the contents of the case file (without revealing them to the other players). If the accusation is correct, the game ends and the accusing player wins the game. Otherwise, the accusing player is removed from the game: he or she can no longer participate except for refuting the suggestions made by the remaining players.

(figure 1)

\section{Mathematical Formulation}
\label{sec:math}
\subsection{Clue\textregistered{}}
\label{sec:cluemath}
Let the set of all room cards be \[ R = \{r_i\} \] where $r_i$ is the $i$th room card, $|R| = 9$. Let the set of all suspect cards be \[ S = \{s_i\} \] where $s_i$ is the $i$th suspect card, $|S| = 6$. Let the set of all weapon cards be \[ W = \{w_i\} \] where $w_i$ is the $i$th weapon card, $|W| = 6$. The set of all cards is \[ C = R \cup S \cup W, \quad |C| = 21 \]
Let the set of all players be \[ P = \{p_i\} \] where $p_i$ is the $i$th player. Let the indicator random variable 
\[
	p_{i,c} = \begin{cases}
		1 & \quad \text{card } c \text{ is in player } i\text{'s hand}, c \in C \\
		0 & \quad \text{otherwise}
	\end{cases}
\]
Define the set of cards in player $i$'s hand to be \[ \operatorname{cards}(p_i) = \{c: c \in C, p_{i,c} = 1\} \] Let the cards in the secret case file be $c_m = \{r_m, s_m, w_m\}$ where $r_m \in R, s_m \in S, w_m \in W$. Naturally, we are interested in the posterior probability distribution of the secret case file, \[ {\bf P}(c_m | {\bf e}) = {\bf P}(c_m | e_1, e_2, ..., e_t) \] where $e_1, e_2, ... e_t$ are the observations we have made throughout the game up to time $t$. 

Let the set of locations on the board be denoted as \[ L = \{l_i\} \] Each location has a set of neighbors it can reach, which we shall denote by $N(l_i)$. This naturally lends to the model of the board as a connectivity graph, which we shall denote by
\begin{gather} 
G_L = (L, E) \\
E = \{(l_i, l_j) : l_i, l_j \in L, l_j \in N(l_i)\}
\end{gather}
Let $d_{l_i, l_j}$ be the shortest distance between locations $l_i$ and $l_j$. We denote the location of player $i$ at time $t$ as $L(p_i, t) \in L$, which we shorthand as $L_{p_i,t}$. Let the die roll at time $t$ be $d_t$. The set of all reachable locations for player $i$ at time $t$ is denoted by \[ NL(p_i, t) = \{ l : l \in L, d_{L_{p_i,t}, l} \le d_t\} \] Since there is the constraint that players may not visit the same room twice in a row, we denote the last room player $i$ visited as $r_{p_i, t}$. Thus, the location of player $i$ at time $t + 1$ must satisfy $L_{p_i, t+1} \in NL(p_i, t) - \{ r_{p_i, t} \}$.
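Computing $NL(p_i, t)$ amounts to a depth-bounded breadth-first search on the connectivity graph. The following is a minimal sketch in Python; it assumes the board is given as an adjacency dictionary, and the names \texttt{reachable}, \texttt{neighbors}, and \texttt{last\_room} are illustrative rather than taken from any actual implementation:

```python
from collections import deque

def reachable(neighbors, start, roll, last_room=None):
    """Locations reachable from `start` in at most `roll` steps on the
    connectivity graph, minus the last room visited (the rule that a
    player may not enter the same room twice in a row)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        loc = queue.popleft()
        if dist[loc] == roll:
            continue  # die roll exhausted along this path
        for nxt in neighbors.get(loc, ()):
            if nxt not in dist:
                dist[nxt] = dist[loc] + 1
                queue.append(nxt)
    return {loc for loc in dist if loc != last_room}
```

The breadth-first search visits each location at its shortest distance from \texttt{start}, so cutting the expansion at depth \texttt{roll} yields exactly the set of locations within $d_t$ steps.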

Finally, we need a way to formalize the observations $e_1, e_2, ..., e_t$. Ignoring all the external factors such as player facial expressions, emotions, conversation, etc. (arguably important for games such as Poker), the only observations a player can make are the pawn movements of the other players and the suggestions and refutations made throughout the game. We denote the set of all possible observable events as $V$. In fact, we can enumerate all these possible events:
\begin{itemize}
\item Player $i$ moved from location $x$ to location $y$. We denote this by {\tt Move}$_{p_i, x, y}$.
\item Player $i$ made suggestion with triple $(r, s, w)$. We denote this by {\tt Suggest}$_{p_i, r, s, w}$.
\item Player $j$ refutes player $i$'s suggestion. Depending which player {\it we} are, there are different scenarios:
	\begin{itemize}
	\item We are either player $i$ or player $j$. In this case, we either show or are shown the refuting card $c$, and we denote this by {\tt Refute}$_{p_i, p_j, c}$.
	\item We are neither player $i$ nor player $j$. In this case, we know a refutation was made, but we do not know the refuting card, and we denote this by {\tt Refute}$_{p_i, p_j}$.
	\end{itemize}
\item No player was able to refute the suggestion. We denote this by {\tt NoRefute}.
\item Player $i$ makes an accusation with triple $(r, s, w)$. We denote this by {\tt Accuse}$_{p_i, r, s, w}$.
\item The accusation failed. We denote this by {\tt FailedAccuse}.
\end{itemize}
With these, we can then define the observations at time $t$ to be \[ e_t = \bigcup v_i \quad \text{ where } \quad v_i \in V \] As we shall see, a crucial step is determining ${\bf P}(c_m | {\bf  e})$, and the calculation differs depending on the specific model chosen to represent the problem. 
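For concreteness, the event vocabulary $V$ can be represented as a small tagged union of record types, one per event family above. A sketch in Python follows; the type and field names are our own, chosen to mirror the notation above (accusation events are analogous and omitted for brevity):

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass(frozen=True)
class Move:          # Move_{p_i, x, y}
    player: int
    src: str
    dst: str

@dataclass(frozen=True)
class Suggest:       # Suggest_{p_i, r, s, w}
    player: int
    room: str
    suspect: str
    weapon: str

@dataclass(frozen=True)
class Refute:        # Refute_{p_i, p_j, c} when we saw the card, else Refute_{p_i, p_j}
    suggester: int
    refuter: int
    card: Optional[str] = None  # None when we are neither player i nor player j

@dataclass(frozen=True)
class NoRefute:      # no player was able to refute the suggestion
    suggester: int

Event = Union[Move, Suggest, Refute, NoRefute]
```

Making \texttt{card} optional captures in one type the two refutation cases distinguished above: the observer either does or does not learn the refuting card.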

\subsection{Treasure Hunt Problem}
\label{sec:treasuremath}
In order to precisely formulate the problem of optimally playing Clue\textregistered{}, we first need an overview of the treasure hunt problem. The objective of the treasure hunt problem is to infer a hidden variable or treasure $y$, called the {\it hypothesis}, from a series of measurements. In the discrete version of the problem, $y$ takes on a finite range $Y = \{y^1, ..., y^p\}$, where $y^i$ is the $i$th possible value of $y$. A measurement $m_i$ is a discrete random variable that represents an observation made by the sensor. $m_i$ also has a finite range $M_i = \{m_i^1, ..., m_i^N\}$, where $m_i^l$ denotes the $l$th possible value of $m_i$. While $y$ is hidden, it can be inferred from an available set of measurements $M = \{m_1, ..., m_r\}$ through the joint probability mass function (PMF) $ {\bf P}(y, M) = {\bf P}(y, m_1, ..., m_r)$.

Each measurement in the set $M$ can only be made at certain locations in the sensor workspace. We denote the workspace as a set of cells $K = \{k_1, ..., k_q\}$, in which a subset $\bar K \subseteq K$ are the {\it observation cells} where measurements are made, and the rest are {\it void cells}. Time is discretized into steps $t_1, ..., t_f$. At each time $t_k$, the sensor occupies exactly one cell and obtains at most one measurement $z(t_k)$. The adjacency relationship between cells in $K$ can be represented as a connectivity graph:

\begin{definition}
A connectivity graph with observations, $G$, is an undirected graph where each node represents either an observation cell or a void cell, and two nodes, $k_i$ and $k_j$, are connected by edge $(k_i, k_j)$ with a distance of $d_{ij}$, if and only if the corresponding cells are adjacent.
\end{definition}

At any time, the sensor, if it is currently in an observation cell, can choose to make a measurement, which we call the {\it test decision} $u(t_k)$, and then move to an adjacent cell, which we call the {\it action decision} $a(t_k)$. In order to make the best possible decision, we need an objective function to maximize. We characterize this using the reward function:

\begin{definition}
The reward at $t_k$ is the measurement benefit, $B$, minus the cost of measurement, $J$, and of the sensor movement or distance, $D$,  as in \[ R(t_k) = w_B B(t_k) - w_J J(t_k) - w_D D(t_k) \] where $w_B, w_J, w_D$ are weights appropriate for the problem.
\end{definition}

Since the objective is to infer the hypothesis $y$, the measurement benefit $B$ must be a function of the PMF ${\bf P}(y, M)$ where $M$ are the results of the measurements made. We shall define the function in Section VII, where we show the relationship using information entropy.

We now have enough to formulate the treasure hunt problem, for which the solution to the problem is the optimal sensor decision strategy:

\begin{problem}
Given a connectivity graph with observations, $G$, and a joint PMF, ${\bf P}(y, m_1, ..., m_r)$, of the hypothesis variable $y$ and $r$ measurements, find the strategy $\sigma^\star = \{u(t_k), a(t_k)|k=0, ..., f\}$ that maximizes the total measurement profit \[ V = \displaystyle\sum_{k=0}^f R(t_k) \]
\end{problem}

Note that $B$ must be an additive function for the definition of $V$ to hold. There are several interesting reductions we can make. First, note that in the special case that every measurement must be made, the problem reduces to a travelling salesman problem.

\begin{proposition}
If all measurements $\{m_i\}$ must be made, then the treasure hunt problem reduces to finding a minimum cost tour of all observation cells $\bar K$.
\end{proposition}

\begin{proof}
\begin{align*}
 V &= \displaystyle\sum_{k=0}^f R(t_k)\\
     &= \displaystyle\sum_{k=0}^f w_B B(t_k) -  \displaystyle\sum_{k=0}^f w_J J(t_k) - \displaystyle\sum_{k=0}^f w_D D(t_k)
\end{align*}
Since all measurements must be made, the measurement benefit and measurement cost of each $m_i$ must be considered, which means the first two sums are constant:
\[
\displaystyle\sum_{k=0}^f w_B B(t_k) -  \displaystyle\sum_{k=0}^f w_J J(t_k) = \displaystyle\sum_{m_i} [w_B B(m_i) - w_J J(m_i)] = \text{constant}
\]
Thus, maximizing $V$ is equivalent to minimizing $\quad\displaystyle\sum_{k=0}^f w_D D(t_k)\quad$ over tours visiting every observation cell in $\bar K$, which is the definition of the travelling salesman problem.
\end{proof}

Another interesting reduction is that if the distance costs are all zero, then the optimal strategy is to make measurements in descending order of measurement profit.

\begin{proposition}
If distance costs $D = 0$, then the optimal strategy is to make measurements in descending order of measurement profit.
\end{proposition}

\begin{proof}
\begin{align*}
 V &= \displaystyle\sum_{k=0}^f R(t_k)\\
     &= \displaystyle\sum_{k=0}^f [w_B B(t_k) - w_J J(t_k)]
\end{align*}

Since distance costs are $0$, the sensor can move from any cell to any other cell in one turn (the constraint that the sensor can only be in one cell at a time still applies). Therefore, at each time $t_k$, we can make a measurement $m_k$. Thus, \[ V = \displaystyle\sum_{k=1}^f [w_B B(m_k) - w_J J(m_k)] \] where $m_k \in M$. Maximizing $V$ amounts to choosing the $f$ measurements with the highest measurement profits, which can be enumerated in decreasing order.
\end{proof}

We shall make use of this reduction in Section VII, where we consider a simplified form of the Clue\textregistered{} problem with infinite dice rolls. 
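The zero-distance strategy of Proposition 4 reduces to a single sort. A sketch, assuming measurements arrive as (name, benefit, cost) triples with fixed weights; this interface is our own, for illustration only:

```python
def best_measurements(measurements, f, w_B=1.0, w_J=1.0):
    """When distance costs are zero, the optimal strategy picks the f
    measurements with the highest profit w_B*B - w_J*J, in decreasing
    order of profit (Proposition 4)."""
    def profit(m):
        _, benefit, cost = m
        return w_B * benefit - w_J * cost
    ranked = sorted(measurements, key=profit, reverse=True)
    return [name for name, _, _ in ranked[:f]]
```

With unit weights, a measurement list \texttt{[('a', 1, 0), ('b', 5, 1), ('c', 3, 0)]} and $f = 2$ yields \texttt{['b', 'c']}, the two highest-profit measurements.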

\subsection{Clue\textregistered{} as a Treasure Hunt Problem}
\label{sec:cluetreasuremath}
We formulate the problem of finding the optimal strategy for playing Clue\textregistered{} as a treasure hunt problem, as defined in the previous section, with some approximations.

Naturally, the hypothesis $y$ is simply the contents of the case file, $\{r_m, s_m, w_m\}$. Another possibility is to also include the hidden variables of the opponent hands. The latter is useful when we calculate our probability mass function.

Next, notice that the board representation as a connectivity graph $G_L$ is equivalent to the sensor graph $G$ of the treasure hunt problem, with the only difference being the constraint that players may not visit the same room twice in a row. We can account for this by constructing the directed graph
\begin{gather} 
G_L' = (L_{t_0} \cup ... \cup L_{t_f}, E') \\
E' = \{(l_i, l_j) : l_i \in L_{t}, l_j \in L_{t+1}, l_j \in N(l_i), l_j \not=r_{p,t}\}
\end{gather}
where $p$ is the player controlled by the AI and $r_{p,t}$ is the last room visited in the directed path from the start $L_{p, t_0}$ to $l_i$. This ensures that any path in $G_L'$ does not have the player visit the same room twice in a row. We also need a notion of distance cost $d_{ij}$ between two locations $l_i$ and $l_j$. It is tempting to use the shortest path length $d_{l_i, l_j}$, but since a player may move more than one cell away during each turn depending on the dice roll, the actual distance cost between $l_i$ and $l_j$ should be defined as the expected number of turns needed.

\begin{problem}
Given that we are at a current location $l_i$, what is the expected number of turns it takes to reach location $l_j$? Let $C_{ij}$ be the shortest-path distance between $l_i$ and $l_j$ on the board. Assume we roll a fair six-sided die each turn and move at most the number of steps rolled.

Formally, we want $E[T]$ where $T$ is a random variable representing the number of turns it takes to move from $l_i$ to $l_j$.
\end{problem}

\begin{observation}
\begin{align*}
E[T] &= \displaystyle\sum\limits_{t=0}^\infty Pr[T > t]\\
&= \displaystyle\sum\limits_{t=0}^\infty Pr[\displaystyle\sum\limits_{i=1}^t r_i < C_{ij}]\\
&= \displaystyle\sum\limits_{t=0}^{C_{ij} - 1} Pr[\displaystyle\sum\limits_{i=1}^t r_i < C_{ij}]
\end{align*}

where $r_i$ is the random variable representing the roll of the dice on the $i$th turn.
\end{observation}

\begin{proof}
The event $T > t$ is exactly the event that the sum of the first $t$ rolls is less than $C_{ij}$. Moreover, since $1 \le r_i \le 6$, we have $\displaystyle\sum\limits_{i=1}^t r_i \ge t$; hence for all $t \ge C_{ij}$, $\displaystyle\sum\limits_{i=1}^t r_i \ge C_{ij}$ and so $Pr[\displaystyle\sum\limits_{i=1}^t r_i < C_{ij}] = 0$, which truncates the infinite sum at $t = C_{ij} - 1$.
\end{proof}

\begin{observation}
$Pr[\displaystyle\sum\limits_{i=1}^t r_i < C_{ij}] = 1 - Pr[\displaystyle\sum\limits_{i=1}^t r_i \ge C_{ij}]$
\end{observation}

\begin{proposition}
Let $F(t,c) = Pr[\displaystyle\sum\limits_{i=1}^t r_i \ge c]$. Then, the following recurrence holds:
$$
  F(t,c) =
    \begin{cases}
      1 &  : c \le 0\\
      0 &  : c > 6t\\
      F(t-1,c) + \displaystyle\sum\limits_{i=1}^6 \frac{7-i}{6} \left[F(t-1,c-i) - F(t-1,c-i+1)\right] &: \text{else}
    \end{cases}
$$
\end{proposition}


\begin{proof}
Either the first $t-1$ rolls already sum to at least $c$, or their sum is exactly $c - i$ for some $1 \le i \le 6$. The expression $F(t-1,c-i) - F(t-1,c-i+1)$ is the probability that the first $t-1$ rolls sum to exactly $c-i$, in which case the final roll must be at least $i$, which happens with probability $\frac{7-i}{6}$.
\end{proof}

The above result gives us an efficient dynamic programming solution for calculating $E[T]$. First, we calculate $F(t, c)$ for $1 \le t \le C_{ij} - 1, 1 \le c \le C_{ij}$, where $C_{ij} = d_{l_i, l_j}$ as defined in Section~\ref{sec:cluemath}; then we compute the outer sum in Observation 5. This is a $\BigOh{n^2}$ algorithm, where $n = \BigOh{C_{ij}}$. Since the Clue\textregistered{} board is bounded by a maximum $C_{ij}$ of around 50, the computation is trivially fast in practice.
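The computation of $E[T]$ can be sketched as a short memoized recursion. Here $F(t, c)$ is computed by conditioning on the value of the final roll, an equivalent decomposition of the same probability; the function names are illustrative:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def F(t, c):
    """Pr[sum of t fair six-sided die rolls >= c]."""
    if c <= 0:
        return 1.0          # the distance is already covered
    if c > 6 * t:
        return 0.0          # t rolls cannot possibly reach c
    # condition on the value i of the final roll
    return sum(F(t - 1, c - i) for i in range(1, 7)) / 6.0

def expected_turns(C):
    """E[T] = sum_{t=0}^{C-1} Pr[T > t], where Pr[T > t] = 1 - F(t, C)."""
    return sum(1.0 - F(t, C) for t in range(C))
```

As a check, a shortest distance of $1$ always takes exactly one turn, and a distance of $6$ works out to $(7/6)^5 \approx 2.16$ expected turns.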

Next, we need to specify the set of available measurements $\{m_i\}$. In Section~\ref{sec:cluemath}, we specified a set of possible observations $e_1, ..., e_t$ as a union of events {\tt Move, Suggest, Refute, NoRefute, Accuse, FailedAccuse}. However, the only events that are controllably measured by the player are ${\tt Suggest}_{p,*}$ and its corresponding responses ${\tt Refute}_{p,p_j,*}$ or ${\tt NoRefute}$. The remaining events are determined by the opponent players, and since their actions are out of our control, we do not consider these events as measurements. Since a player can only make a suggestion with a room card that matches the room the player is currently in, we define the {\it observation cells} as the rooms $\{r_1, ..., r_9\} \subset L$, and the remaining locations as {\it void cells}.

Finally, we need to specify both the probability mass function ${\bf P}(y|{\bf e})$ and the measurement benefit and cost functions $B$ and $J$. Since a player can always make a suggestion as long as he or she is in a room, $J = 0$. In Section VI, we first present a way to exactly calculate and maintain ${\bf P}(y|{\bf e})$. In Section VII, we define $B$ in terms of information entropy.

It is important to note that the formulation of Clue\textregistered{} as a treasure hunt problem allows it to be a benchmark for algorithms that could have applications in many areas described in the introduction. In essence, the player's position determines the allowable suggestions, or measurement, about the case file, or hypothesis. Decisions on the player's movements must be made before the suggestion's outcome becomes available. Therefore, the strategy must optimize a tradeoff between the expected benefit of visiting a room and the cost, which consists of the distance traveled. The measurement benefit of the suggestion essentially measures how much it could reduce the uncertainty on the hypothesis.

\section{Computing the Exact Probability Mass Function}
\label{sec:prob}
\subsection{Feasibility}
\label{sec:probfeasibility}
The PMF ${\bf P}(y|{\bf e}) = {\bf P}(r_m, s_m, w_m|{\bf e})$ gives a probability distribution over all possible combinations of $r_m, s_m, w_m$ that could be in the case file. Since there are $9 \cdot 6 \cdot 6 = 324$ possible combinations, each combination takes up a portion of the probability {\it mass} that represents how likely it is. The PMF must satisfy $\displaystyle\sum_{r_m, s_m, w_m} {\bf P}(r_m, s_m, w_m|{\bf e}) = 1$.

How do we gauge how likely each combination is? If we knew what cards our opponents are holding, then clearly we also know what the secret cards are, since those are simply the cards that are neither in our hand nor opponent hands.

\begin{observation}
\[ {\bf P}(r_m, s_m, w_m|{\bf e}) = P(p_{1,r_m} = p_{1, s_m} = p_{1,w_m} = ... = p_{N,r_m} = p_{N,s_m} = p_{N,w_m} = 0|{\bf e}) \]
\end{observation}

However, this requires knowing the cards in opponent hands, which increases the number of hidden variables and is a seemingly harder problem. One approach would be to enumerate all permutations of card deals. Considering the three-player case: after the 3 cards are put into the envelope, $21 - 3 = 18$ cards remain, so each player is dealt $6$ random cards. Enumerating all deals of three hands from the full deck (with the leftover 3 cards forming the envelope) would result in \[ {21 \choose 6} {15 \choose 6} {9 \choose 6} \approx 10^{10}\]
Siebel and others quickly dismissed this idea because even the initialization would take too long and could not be stored in memory. [citation]

However, the size of the permutation space above is computed from the perspective of the beginning of the game, before even the deal, with zero a priori knowledge. After the initial deal, each player can see his or her {\it own} cards, which amounts to $6$ cards worth of information. Moreover, the case file does not contain an arbitrary triple of cards: it contains exactly one room card, one suspect card, and one weapon card. Suppose we are dealt $r$ room cards, $s$ suspect cards, and $w$ weapon cards. Then there are $9-r$ room cards, $6-s$ suspect cards, and $6-w$ weapon cards that could be in the case file. In addition, since we have only two opponents, enumerating the hand of one opponent determines the other hand by process of elimination; the two hands are drawn from the $21 - 6 - 3 = 12$ cards outside our hand and the case file. Thus, the size of the permutation space is a function
\[
f(r,s,w) = (9-r)(6-s)(6-w) \cdot {12 \choose 6}
\]
where $r+s+w = 6$, since we hold a total of $6$ cards. $f$ is maximized at $r = 4, s = 1, w = 1$, thus
\[
\max [f(r,s,w)|r+s+w=6] = 5^3 \cdot {12\choose6} = 115500
\]
which is easily handled by modern processor speeds and memory. 
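As a quick sanity check of this count, the maximization can be done by brute force over all $28$ hand compositions (a throwaway script, not part of the AI itself):

```python
from math import comb

def deal_space(r, s, w):
    """f(r, s, w): number of candidate case files times the number of
    possible hands for one opponent (the other hand is then determined)."""
    return (9 - r) * (6 - s) * (6 - w) * comb(12, 6)

# maximize over all hand compositions with r + s + w = 6
best = max((deal_space(r, s, 6 - r - s), (r, s, 6 - r - s))
           for r in range(7) for s in range(7 - r))
```

The maximum is attained at $(r, s, w) = (4, 1, 1)$, where each card type has $5$ candidates for the case file, giving $5^3 \cdot \binom{12}{6} = 125 \cdot 924 = 115500$.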

Even in the six-player case, the permutation space grows by only two to three orders of magnitude, which our experiments confirm is still manageable on modern machines.

\subsection{Algorithm}
\label{sec:probalgoritm}

\end{document}

