\section{Fundamental problems of classical AI}

	We have seen that a relatively simple system (a Turing machine) can compute anything computable. We have also seen that a network of cells can compute and act as a general computer. So is it possible that my thoughts are in fact the result of some computation?\\
	
	\paragraph*{Physical Symbol Systems:} The Physical Symbol System (PSS) Hypothesis states that a physical symbol system has the necessary and sufficient means for general intelligent action (still debated today).\\
	
	\textit{Example:} $\mathit{cat}(\mathit{Maja})$ (cf.\ TD) in first-order logic (FOL)
	\begin{itemize}
		\item $\mathit{cat}$ and $\mathit{Maja}$ are symbols.
		\item The FOL syntax establishes a certain relation between the two symbols.
		\item This relation produces some kind of mental state.
	\end{itemize}
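	As a minimal sketch in FOL notation (the rule about cats is an illustrative assumption, not part of the example above):
	\[
		\mathit{cat}(\mathit{Maja}), \qquad
		\forall x\,\bigl(\mathit{cat}(x) \rightarrow \mathit{animal}(x)\bigr)
		\;\vdash\;
		\mathit{animal}(\mathit{Maja})
	\]
	The symbols $\mathit{cat}$ and $\mathit{Maja}$ carry no intrinsic meaning here; the inference operates purely on their syntactic form, which is the point of the PSS Hypothesis.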
	
	\paragraph*{Language of Thought\\}
	Language of Thought Hypothesis (LOTH) :
	\begin{itemize}
		\item Thinking takes place in a mental language.
		\item Cognition and cognitive processes are only `remotely plausible' when modelled as operations on a system of representations with a linguistic (combinatorial syntactic and semantic) structure.
	\end{itemize}
	
	Humans have some kind of mental representations, \textit{i.e.}\ humans can mentally represent things that are not physically present.\\
	Memories appear to be stored in an abstract form, \textit{i.e.}\ we remember the semantic content (meaning) of a sentence better than its exact formulation.
	
	Representations are combinatorial. There is structure in the way people represent facts about the world. \\
	\textit{Example:} If we can represent \textit{Kalle loves Lisa}, we can also represent \textit{Lisa loves Kalle}.\\
	There is also a syntax: the two facts contain the same parts but have different meanings.\\
	Finally, humans can reason: we can use our mental representations to derive new knowledge.\\
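	The combinatorial point can be made explicit in FOL notation (a sketch, using an assumed predicate $\mathit{loves}$):
	\[
		\mathit{loves}(\mathit{Kalle}, \mathit{Lisa})
		\quad\text{vs.}\quad
		\mathit{loves}(\mathit{Lisa}, \mathit{Kalle})
	\]
	Both formulas are built from the same constituent symbols; only the argument order (the syntax) differs, and with it the meaning.\\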
	
	\paragraph*{LOT is not a spoken language:} 
	\begin{itemize}
		\item Arguments: 
			\begin{itemize}
				\item Why would it be a particular spoken language X, and not another?
				\item Spoken language can be ambiguous.
			\end{itemize}
		\item LOT is, however, similar to spoken language.
	\end{itemize}
	
	\paragraph*{The Frame Problem\\}
	
	The name comes from cut-out animation: what goes on the (unchanging) background frame, and what has to be redrawn?\\
	\textbf{Frame problem in Logic :}
	\begin{itemize}
		\item Briefly presented in the last lecture.
		\item How to describe the effects of actions in logic without having to explicitly represent a large number of intuitively obvious non-effects?
		\item How to avoid so-called frame axioms (rules that describe non-change, or the limits of change)?
	\end{itemize}
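	A classical illustration, sketched in situation-calculus style (the fluents $\mathit{pos}$ and the action $\mathit{paint}$ are assumed for illustration): to state that painting an object does not move it, one needs an explicit frame axiom such as
	\[
		\mathit{pos}(x, s) = p \;\wedge\; a = \mathit{paint}(x, c)
		\;\rightarrow\;
		\mathit{pos}(x, \mathit{do}(a, s)) = p
	\]
	One such axiom is needed for every action/property pair that does not interact, so the number of frame axioms grows roughly as $(\text{number of actions}) \times (\text{number of properties})$.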
	
	The representational (epistemological) frame problem:
	\begin{itemize}
		\item Consider any agent (robot, animal, human) with a (mental) representation of the world.
		\item How does this agent update its model of the world when something happens? Where do we stop looking for consequences?
	\end{itemize}
	
	How to keep a model of the world up to date?\\
	
	\paragraph*{The Turing Test}
	\paragraph*{The Chinese Room}
	
	\paragraph*{Symbol Grounding\\}
	
		Symbol grounding is possible in a limited domain, but very difficult in general:
		\begin{itemize}
			\item Identify any object in any environment.
			\item It is hard to identify attributes that separate any two objects (or, in general, any two concepts).
		\end{itemize}
	
		Further problems:
		\begin{itemize}
			\item How does an agent create new symbols?
			\item How does an agent decide what a symbol should refer to before the relation exists?
			\item How does the agent change its ontology? (ontology: the boundary of what the agent can represent)
		\end{itemize}
	