\documentclass{article}
\usepackage{fullpage}
\title{AI cheat sheet}
\author{Stuart Hannah}
\begin{document}
\maketitle
\section*{Definition}
An exact definition of AI is difficult to pin down. Definitions vary
between requiring an entity that acts like a human and one that
performs well against an ideal standard of intelligence, which we will
define as \emph{rationality}. A distinction is also made between
intelligent behavior (by whichever metric of intelligence is chosen)
and intelligent reasoning.

Definitions that focus on these aspects of AI are either too vague to
be of any use in understanding AI, or too specific and suffer from
being ``solved'' whilst still not producing an intelligent agent. This
problem has been described as the ``shifting goal-posts of AI'': as
soon as we are able to implement something we previously thought to be
the definition of AI, we find that it is trivial and not really the
definition at all.

Turing avoids this question in his 1950 paper ``Computing Machinery
and Intelligence''. Instead of giving a formal set of requirements
that a machine would need to satisfy to be classed as intelligent, a
test is provided to determine whether an artificial intelligence could
be distinguished from an intelligent being (possibly a human one).

The test itself is a variation of a game defined by Turing known as
the imitation game. It is a game consisting of three people, A, B and
C, where A is a woman, B a man and C of either gender. C attempts to
determine which of the other two is the man and which the woman. If
we replace one of the contestants with a machine, the question then
becomes whether the machine can think.

To pass this test a computer would require the following capabilities:
\begin{itemize}
\item Natural language processing - to enable it to communicate in and
  understand English
\item Knowledge representation - to store what it knows or hears
\item Automated reasoning - to use stored information to answer questions
  and to draw new conclusions
\item Machine learning - to adapt to new circumstances and to detect
  and extrapolate patterns
\end{itemize}

In the paper, physical interaction between the contestants and the
tester is deliberately avoided. The so-called \emph{Total Turing Test}
includes a video signal so that the interrogator can test the subject's
perceptual abilities, as well as the opportunity for the interrogator
to pass physical objects ``through the hatch''. In order to pass, the
machine would require the following:

\begin{itemize}
\item Computer vision - to perceive objects
\item Robotics - to manipulate objects and move about
\end{itemize}

Although this test provides a good example of how AI can be defined,
in practice modern AI research is not overly concerned with passing
the test. It is more important to understand the underlying principles
than to try to duplicate an exemplar.

\section*{Agents}
An agent is anything that can be viewed as perceiving its environment
through sensors and acting on the environment through actuators.

The term \emph{percept} refers to an agent's perceptual inputs at any
given instant. An agent's percept sequence is the complete history of
everything that the agent has ever perceived. In general, an agent's
choice of action at any given instant can depend on the entire
percept sequence observed to date.

If we can specify the agent's choice of action for every possible
percept sequence, then we have said more or less everything that there
is to say about the agent. Mathematically speaking, we can say that an
agent's behavior is described by the \emph{agent function}, which maps
any given percept sequence to an action. The agent function is an
abstract mathematical description; the \emph{agent program} is a
concrete implementation, running on the agent architecture.
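The distinction can be sketched in a few lines of Python; the percept values and the toy rule below are illustrative assumptions, not something from the text:

```python
# Sketch of the distinction (the percept values and the toy rule are
# made up). The agent function maps a whole percept sequence to an
# action; the agent program sees one percept at a time and keeps
# whatever history it needs internally.

def agent_function(percept_sequence):
    """Abstract mapping: entire percept history -> action."""
    return "forward" if percept_sequence[-1] == "clear" else "turn"

def make_agent_program():
    """Concrete implementation: receives one percept per call."""
    history = []   # internal memory stands in for the full sequence
    def program(percept):
        history.append(percept)
        return agent_function(history)
    return program
```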

A rational agent is one that does the right thing. Conceptually
speaking, this means that every entry in the table for the agent
function is filled out correctly. The table described involves the
mapping of all possible percept sequences to stored actions. An
obvious question that arises from this is ``What is the right thing?''

As a first attempt to answer this question we will say that the right
action is the one that will cause the agent to be most successful.

A performance measure embodies the criterion for the success of an
agent's behavior. When an agent is placed in an environment, it
generates a sequence of actions according to the percepts it
receives. This sequence of actions causes the environment to go
through a sequence of states. If the sequence is desirable then the
agent has performed well. Obviously there is no one fixed measure
suitable for all agents, but we will insist on an objective performance
measure. As a general rule it is better to design performance measures
according to what one actually wants in the environment, rather than
according to how one thinks the agent should behave.

\subsection*{Rationality}
What is rational at any given time depends on the following:

\begin{itemize}
\item The performance measure that defines the criterion of success
\item The agent's prior knowledge of the environment
\item The actions that the agent can perform
\item The agent's percept sequence to date.
\end{itemize}

This leads to the definition of a rational agent.

``For each possible percept sequence, a rational agent should select
an action that is expected to maximize its performance measure, given
the evidence provided by the percept sequence and whatever built-in
knowledge the agent has''
(Artificial Intelligence: A Modern Approach, Russell \& Norvig)

We need to draw an important distinction between omniscience and
rationality. An omniscient agent knows the actual outcome of its
actions and can act accordingly; but omniscience is impossible in
reality. Rationality maximizes expected performance. Our definition
of rationality does not require omniscience, because the rational choice
depends only on the percept sequence to date.

Doing actions in order to modify future percepts -- sometimes called
information gathering -- is an important part of rationality.

Successful agents split the task of computing the agent function into
three different periods:
\begin{itemize}
\item When the agent is being designed
\item When it deliberates on its next action
\item As it learns from experimentation
\end{itemize}

To the extent that an agent relies on the prior knowledge of its
designer rather than on its own percepts, we say that the agent lacks
autonomy. A rational agent should have autonomy -- it should learn what it
can to compensate for partial or incorrect prior knowledge.

\subsection*{Task environments}

Task environments are ``essentially the problem to which agents are
the solution''. We will group the performance measure, the
environment, the agent's actuators and the agent's sensors under the
heading of the task environment. Table \ref{tab:task_environment}
shows the types of task environment.

\begin{table}
  \begin{tabular}{p{4.5cm} p{11cm}}
    Types & Explanation \\
    \hline
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%
Fully observable vs.~partially observable &

    If an agent's sensors give it access to the complete state of the
    environment, it is fully observable. A task environment is
    effectively fully observable if the sensors detect all aspects that
    are relevant to the choice of action; relevance in turn depends on
    the performance metric. Fully observable environments are convenient
    because the agent need not maintain any internal state to keep track
    of the world. An environment might be partially observable because
    of noise or inaccurate sensors.\\

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%
Deterministic vs.~stochastic &

    If the next state of the environment is completely determined by
    the current state and the action executed by the agent, then we
    say that the environment is deterministic; otherwise it is
    stochastic.\\

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%
Episodic vs.~sequential &

    In an episodic task environment, the agent's experience is divided
    into atomic episodes. Each episode consists of the agent perceiving
    and performing a single action. Crucially, the next episode does
    not depend on the actions taken in previous episodes. In episodic
    environments, the choice of action in each episode depends only on
    the episode itself.\\

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%
Static vs.~dynamic &

    If an environment can change while an agent is deliberating, then
    we say that the environment is dynamic for the agent; otherwise it
    is static. Static environments are easy to deal with because the
    agent does not need to keep looking at the world while it is
    deciding on an action, nor need it worry about the passage of time.\\

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%
Discrete vs.~continuous &
    
    The discrete/continuous distinction can be applied to the state of
    the environment, to the way time is handled, and to the percepts
    and actions of an agent. \\

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%
Single-agent vs.~multi-agent &

    The distinction between single-agent and multi-agent environments
    may seem simple enough. For example, an agent solving a crossword
    puzzle by itself is in a single-agent environment, whereas an agent
    playing chess is in a two-agent environment.
  \end{tabular}
  \caption{Types of task environment}
  \label{tab:task_environment}
\end{table}

The job of AI is to design the agent program that implements the agent
function mapping percepts to actions. We assume this program will run
on some sort of computing device with physical sensors and
actuators. We call this the \emph{architecture}.
$$\mbox{agent} = \mbox{architecture} + \mbox{program}$$

The agent programs we consider will all have the same skeleton: they
take the current percept as input, whereas the agent function takes
the entire percept history. The agent program takes just the current
percept as input because nothing more is available from the
environment; if the agent's actions depend on the entire percept
sequence, the agent will have to remember the percepts.
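This skeleton can be made concrete as a sketch of a table-driven agent: the program receives one percept at a time, appends it to a remembered sequence, and looks the whole sequence up in a table. The table contents below are toy assumptions for illustration.

```python
# A table-driven agent in the skeleton described above: only the
# current percept arrives from the environment; the program itself
# remembers the percept sequence. The table entries are made up.
def make_table_driven_agent(table):
    percepts = []                       # remembered percept sequence
    def program(percept):
        percepts.append(percept)
        # Look up the action indexed by the entire percept sequence.
        return table.get(tuple(percepts))
    return program

agent = make_table_driven_agent({
    ("dirty",): "suck",
    ("dirty", "clean"): "right",
})
```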

It is instructive to consider why the table-driven approach is doomed
to failure. Let $\mathcal{P}$ be the set of possible percepts and let
$\mathcal{T}$ be the lifetime of the agent (the total number of
percepts it will receive). The look-up table will contain 
$$\sum_{t=1}^{\mathcal{T}} \vert \mathcal{P} \vert^t$$
entries.
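To get a feel for the growth, the sum can be evaluated for small, arbitrarily chosen values of $\vert \mathcal{P} \vert$ and $\mathcal{T}$:

```python
# Number of lookup-table entries: the sum over t = 1..T of |P|^t.
def table_entries(num_percepts, lifetime):
    return sum(num_percepts ** t for t in range(1, lifetime + 1))
```

Even tiny values grow quickly: \texttt{table\_entries(10, 3)} is $10 + 100 + 1000 = 1110$.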

The daunting size of these tables means that
\begin{itemize}
  \item No physical agent in this universe will have the space to
    store the table
  \item The designer would not have time to create the table
  \item No agent could ever learn all the right table entries from its
    experience
  \item Even if the environment is simple enough to yield a feasible
    table size, the designer still has no guidance about how to fill
    in the table entries
\end{itemize}

Despite all this, table-driven agents do what we want: they implement
the desired agent function. It is just their implementation that is
not practical.

\subsection*{Types of agents}

Table \ref{tab:agents} shows four types of agents, ordered by
increasing generality.

\begin{table}
  \begin{tabular}{p{3cm} | p{5cm} p{3cm} p{3cm}}
      Type of agent & Description & Advantages & Disadvantages \\
      \hline
      %%%%%%%%%%%%%%%%%%%%%%%%%%%%      
Simple reflex agent &

      Selects the current action on the basis of the current
      percept, ignoring the percept history. Can be augmented with
      some randomization of its actions (other methods prove to be
      better, however) &

      Small and simple to program &

      Limited intelligence; to be effective it requires that the
      environment is fully observable\\
      
      %%%%%%%%%%%%%%%%%%%%%%%%%%%%      
Model-based reflex agent &

      Maintains an internal state, derived from the percept history, of
      the parts of the world it cannot see. Requires knowledge of how
      the world evolves independently of the agent and of what effect
      the agent's actions have on the world. &

      The internal state reflects some of the unobservable aspects
      of the current state. &

      Assumes that the correct decision is evident given the derived
      state of the environment.\\

      %%%%%%%%%%%%%%%%%%%%%%%%%%%%      
Goal-based agent &

      Attempts to make the correct decision by choosing among multiple
      options (w.r.t.\ its performance measure). &

      More versatile than a model-based reflex agent, as it can weigh up
      different options &

      Less efficient\\
      %%%%%%%%%%%%%%%%%%%%%%%%%%%%      
Utility-based agent &

      Has a utility function (measuring the degree of
      happiness associated with a decision) &

      An improvement on goal-based agents (which are binary), as it
      allows a scale of (un)happiness & \\
  \end{tabular}
  \caption{Types of agent}
  \label{tab:agents}
\end{table}
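As a sketch, the simplest of these, a simple reflex agent, can be written as a handful of condition-action rules over the current percept alone; the two-location vacuum-world percept format and rules below are assumptions made up for the example.

```python
# A simple reflex agent for a two-location vacuum world (percept
# format and rules are illustrative assumptions): it acts on the
# current percept only, with no memory of the percept history.
def simple_reflex_agent(percept):
    location, status = percept
    if status == "dirty":    # rule 1: clean a dirty square
        return "suck"
    elif location == "A":    # rule 2: otherwise move to the other square
        return "right"
    else:
        return "left"
```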

These agents can be extended into \emph{learning agents}. A learning
agent is composed of four parts:
 
\begin{itemize}
\item Learning element - responsible for making improvements
\item Performance element - responsible for selecting external actions
  (what we previously considered the entire agent)
\item Critic - gives feedback on how the agent is doing and on how the
  performance element should be modified
\item Problem generator - suggests actions that will lead to new and
  informative experiences
\end{itemize}

\section*{Problem-solving agents (searching)}

Here we consider a type of goal-based agent which we call a problem-solving
agent. Problem-solving agents decide what to do by finding sequences
of actions that lead to desirable outcomes.

In attempting to achieve a state desirable by the agent's performance
measure it is important to identify ``happy'' states. This is known as
\emph{goal formulation} and is the first task in problem solving. A
goal is a set of world states and the agent's task is to find which
sequence of actions will get it to a goal state. Knowing what actions
and states to consider for a given goal is known as \emph{problem
  formulation}.

In general an agent with several immediate options of unknown value
can decide what to do by first examining different possible sequences
of actions that lead to states of known value, and then choosing the
best sequence. This process is known as a \emph{search}. A search
algorithm takes a problem as input and returns a solution in the form
of an action sequence. Once a solution is found, the actions it
recommends can be executed. This is called the \emph{execution phase}.

\subsection*{Problem formulation}

A problem is described by four components:
\begin{itemize}
\item The initial state that the agent starts in
\item A description of the possible actions available to the agent
  (typically formulated using successor functions)
\item A goal test to determine whether a given state is a goal state
\item A path cost function to assign a numeric cost to each path (chosen
  according to the performance measure)
\end{itemize}
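These four components can be grouped into a small problem structure; the route-finding graph, action names and constant step cost below are assumptions made up for this example.

```python
# The four components above, packaged as a small problem class. The
# successor function maps a state to (action, next_state) pairs; the
# path cost here is simply a constant cost per action.
class Problem:
    def __init__(self, initial, successors, goal_state, step_cost=1):
        self.initial = initial        # initial state
        self.successors = successors  # state -> iterable of (action, next_state)
        self.goal_state = goal_state
        self.step_cost = step_cost    # constant cost per action in this sketch

    def is_goal(self, state):
        """Goal test."""
        return state == self.goal_state

    def path_cost(self, path):
        """Numeric cost of a path: number of actions times step cost."""
        return self.step_cost * (len(path) - 1)

# Toy route-finding instance (made up for illustration).
graph = {"A": [("go-B", "B")], "B": [("go-C", "C")], "C": []}
problem = Problem("A", lambda s: graph[s], "C")
```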

The initial state and the possible actions implicitly define the
\emph{state space} of the problem: the set of all states reachable from
the initial state. A path in the state space is a sequence of states
connected by a sequence of actions.

A solution to a problem is a path from the initial state to a goal
state. Solution quality is measured by the path cost function; an
optimal solution has the lowest path cost among all solutions.

\subsection*{Searching for solutions}

Here we deal with search techniques through an explicit search tree,
generated by the initial state and the successor function.

In this tree we will assume that a node is a data structure with five components:

\begin{itemize}
  \item State - state in the state space to which the node corresponds
  \item Parent - parent node in the tree
  \item Action - the action applied by the parent to generate this node
  \item Path cost - the cost, $g(n)$, of the path from the initial state to the node
  \item Depth - number of steps from the initial node
\end{itemize}
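The five components above map directly onto a small data structure (a sketch; the field names follow the list, and the helper for building child nodes is an added convenience):

```python
# A search-tree node with the five components listed above.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # state in the state space
    parent: Optional["Node"] = None  # parent node in the tree
    action: Optional[str] = None     # action that generated this node
    path_cost: float = 0.0           # g(n), cost from the initial state
    depth: int = 0                   # number of steps from the root

def child_node(parent, action, state, step_cost):
    """Build a successor node from its parent."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)
```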

The output of a problem-solving algorithm is either failure or a
solution. To measure the complexity of the algorithm we consider three
parameters: the branching factor $b$ (the maximum number of successors
of any node), the depth $d$ of the shallowest goal node, and the maximum
length $m$ of any path in the state space. Time complexity is measured
in terms of the number of nodes generated during the search, and space
complexity in terms of the maximum number of nodes stored in memory.

\subsubsection*{Uninformed search}

Here we consider five different uninformed searches (searches that use
no additional information about states beyond that provided in the
problem definition).

\begin{itemize}
\item Breadth-first search is a simple strategy in which the root node
  is expanded, then each successor, then their successors and so
  on. It finds the shallowest goal node, which is not necessarily the
  optimal one. It has complexity $O(b^{d+1})$; its memory requirements
  are an even bigger problem than its execution time.
\item Uniform-cost search expands the node with the lowest path
  cost. This ensures that the path found is optimal. If we assume that
  $C'$ is the cost of the optimal solution and that every action costs
  at least $\epsilon$, then the worst-case time and space complexity is
  $O(b^{\frac{C'}{\epsilon}})$.
\item Depth-first search expands the deepest node in the current
  fringe of the search tree. It is very modest in its memory
  requirements, $O(bm)$; backtracking can further improve this to
  $O(m)$. The worst case in time is $O(b^m)$ (and the tree may be
  infinite!).
\item Depth-limited search is the same as above but with a predefined
  depth limit $l$. Iterative deepening depth-first search is a general
  strategy, used with depth-limited search, to find the best depth
  limit.
\item Bidirectional search runs two simultaneous searches, one forward
  from the initial state and the other backward from the goal, the
  search stops when an overlap between the searches occurs.
\end{itemize}
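As a sketch, breadth-first search over an explicit successor function might look as follows. Like the strategies just listed it makes no attempt to detect repeated states, so it is only safe on finite, loop-free state spaces; the successor function is supplied by the caller.

```python
# Minimal breadth-first search: expand the root, then each successor,
# and so on, returning the shallowest path to the goal (or None).
# No repeated-state checking, matching the strategies described above.
from collections import deque

def breadth_first_search(start, goal, successors):
    frontier = deque([[start]])   # FIFO queue of paths from the start
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for next_state in successors(state):
            frontier.append(path + [next_state])
    return None
```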

One of the major issues with these approaches is that we do nothing to
address repeated states; these are bad both for the time they waste and
for the possibility that an infinite loop occurs.

\subsection*{Searching with partial information}

Up until this point we have assumed that the environment is fully
observable and deterministic, with the agent knowing the full effect
of each action. Unfortunately this ideal is often absent from real
life. We require techniques to deal with incomplete data.

Incomplete data leads to three distinct problem types:
\begin{itemize}
  \item Sensor-less problems - the agent has no sensors and may be in one
    of several possible initial states, leading to multiple possible
    successor states.
  \item Contingency problems - if the environment is partially observable
    then the agent's percepts provide new information after each
    action. Each possible percept defines a contingency that must be
    planned for.
  \item Exploration problems - when the states and actions of the
    environment are unknown and the agent must act to discover them.
\end{itemize}

Uninformed search strategies can find solutions to problems by
systematically generating new states and testing them against the
goal. Unfortunately, these strategies are incredibly inefficient in most
cases.

\section*{Informed Search and exploration}

An informed search strategy is one that uses problem-specific knowledge
beyond the definition of the problem itself in order to find
solutions more efficiently than an uninformed strategy.

%\subsection*{Best-first search}

The general approach is best-first search. Best-first search is an
instance of the general Tree-Search algorithm in which a node is
selected for expansion based on an evaluation function $f(n)$. Best-first
search can be implemented within our general framework via a
priority queue, a data structure that will maintain the fringe in
ascending order of $f$-values.
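A minimal sketch of this priority-queue idea, using Python's heapq module; the evaluation function $f$ and the successor function are supplied by the caller, and the tie-breaking counter is an implementation detail so that entries with equal $f$-values never compare states directly.

```python
# Best-first search: the fringe is a priority queue ordered by
# f-values, so the node that appears best is expanded first.
import heapq
from itertools import count

def best_first_search(start, goal, successors, f):
    tie = count()  # tie-breaker so heapq never compares states
    frontier = [(f(start), next(tie), [start])]
    while frontier:
        _, _, path = heapq.heappop(frontier)   # lowest f-value first
        state = path[-1]
        if state == goal:
            return path
        for next_state in successors(state):
            heapq.heappush(frontier,
                           (f(next_state), next(tie), path + [next_state]))
    return None
```

With $f(n) = h(n)$ this behaves as greedy best-first search; other evaluation functions give other members of the family.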

``Best-first search'' is a misnomer. All we can do is choose the node
that appears to be best according to the evaluation function. If the
evaluation function were exactly accurate, then this would indeed be the
best node.

There is a whole family of best-first search algorithms with
different evaluation functions. A key component of these algorithms is
a heuristic function, $h(n)$:
$$h(n) = \mbox{estimated cost of the cheapest path from node } n
\mbox{ to a goal node}$$

Heuristic functions are the most common form in which additional
knowledge of the problem is imparted to the search algorithm.

\subsection*{Greedy best-first search}

Greedy best-first search tries to expand the node that is closest to
the goal, on the grounds that this is likely to lead to a solution
quickly. Thus, it evaluates nodes by using just the heuristic
function: $f(n) = h(n)$.

Greedy best-first search resembles depth-first search in the way it
prefers to follow a single path all the way to the goal, but will back
up when it hits a dead end. It suffers from the same defects as
depth-first search: it is not optimal, and it is incomplete. The
worst-case space and time complexity is $O(b^m)$.

\subsection*{$A^*$ search}

The most widely known form of best-first search is $A^*$ search. It
evaluates nodes by combining $g(n)$, the cost to reach the node, and
$h(n)$, the cost to get from the node to the goal.

$$f(n) = g(n) + h(n)$$

Since $g(n)$ gives the path cost from the start node to node $n$, and
$h(n)$ is the estimated cost of the cheapest path from $n$ to the
goal, we have
$$f(n) = \mbox{estimated cost of the cheapest solution through } n$$

Thus, if we are trying to find the cheapest solution, a reasonable
thing to try first is the node with the lowest value of $g(n) +
h(n)$. It turns out that this strategy is more than just reasonable;
provided that the heuristic function $h(n)$ satisfies certain
conditions, $A^*$ search is both complete and optimal.
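A minimal $A^*$ sketch differs from a greedy search only in the evaluation function: it tracks the accumulated path cost $g$ and orders the frontier by $g(n) + h(n)$. The convention that successors yield (step cost, next state) pairs, and the heuristic being a caller-supplied function, are assumptions of this sketch.

```python
# A* search: frontier ordered by f(n) = g(n) + h(n), where g is the
# accumulated step cost along the path and h is a heuristic estimate.
import heapq
from itertools import count

def a_star_search(start, goal, successors, h):
    tie = count()                                   # avoids comparing states
    frontier = [(h(start), next(tie), 0, [start])]  # (f, tie, g, path)
    while frontier:
        _, _, g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return path, g                          # solution path and its cost
        for step_cost, next_state in successors(state):
            g2 = g + step_cost
            heapq.heappush(frontier,
                           (g2 + h(next_state), next(tie), g2,
                            path + [next_state]))
    return None
```

Testing the goal when a node is popped (rather than when it is generated) is what preserves optimality for an admissible $h$: a cheaper route to the goal can still overtake a more expensive one in the queue.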

\section*{Neural Networks}



\section*{Perception}

Perception provides agents with information about the world they
inhabit. This is typically initiated by sensors. A sensor is anything
that can record some aspect of the environment and pass it as input to
an agent program.

There are two ways in which an agent can use its percepts. In the
feature extraction approach, agents detect some small number of
features in their sensory input and pass them directly to their agent
program. The alternative is a model-based approach.

\end{document}
