\section{Compilers vs. Interpreters}
\label{chapter::CompilerModel}
	In the following chapter we review the different ways in which a board game language can be implemented. We present the theory behind compilers and interpreters and recommend the model best suited for the task\cite{bogfrasad3}.
	There are several ways to translate a language into binary code that can be executed by a computer. In this chapter we try to find the one that best suits the goals of CLUBs. We present our choice between a compiler and an interpreter, and the reasons for it, in section \ref{Our choice}.

		\section{The Compiler}
			A compiler is essentially a translator: it translates from one language into another. In the case of a compiler, the 'other language' is simply a lower-level programming language. To understand which model suits a board game language best, we have to understand how a compiler works. In the following we describe the key aspects of a compiler.
			
			In general a compiler has three stages: the syntax analysis (section \ref{CompilerSyntaxAnalysis}), the contextual analysis (section \ref{CompilerContextualAnalysis}) and the code generation (section \ref{CompilerCodeGeneration}). We review these three stages in their respective sections below.
			
			\subsection{Syntax Analysis} \label{CompilerSyntaxAnalysis}
				The syntax analysis, as the name indicates, analyzes the syntax of the file being compiled.
				
				The syntax of a language describes which symbols (tokens) are used and how these symbols and \textbf{subphrases}\footnote{A subphrase is a phrase below its super phrase in the abstract syntax tree (AST).} add up into \textbf{phrases}\footnote{A phrase is the string obtained by reading the leaves of an AST from left to right.}.
				Phrases in a programming language are commands, expressions and declarations.
				A syntax tree is a tree structure whose terminal nodes are labeled by terminal symbols.
				More specifically, the syntax is concerned with terminal symbols like \texttt{'while'} or \texttt{';'} (in most programming languages), non-terminal symbols (from now on referred to as nonterminals), which are syntactic variables, and a finite set of production rules which define how phrases are composed from terminals and subphrases.
				
				\begin{lstlisting}[basicstyle=\small\sffamily,
keywords={letter,digit,tab,lf,cr,TOKENS,ident,isop,notop,number,scope,CHARACTERS,COMMENTS,FROM,TO,IGNORE},
keywordstyle={\color{blue}},
comment={[l]{//}}, morecomment={[s]{/*}{*/}}, commentstyle=\itshape,
columns={[l]flexible}, numbers=left, numberstyle=\tiny,
frameround=fftt, frame=shadowbox, captionpos=b,
caption={ATG code},
label=LST:barrier]
CHARACTERS
	letter = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz".
	digit = "0123456789".
	tab = '\t'.
	lf = '\n'.
	cr = '\r'.

TOKENS
	ident = letter {letter | digit}.
	isop = "is".
	notop = "not".
	number = digit {digit}.
	scope = "->".

COMMENTS FROM "//" TO lf
IGNORE tab + cr + lf
\end{lstlisting}
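To illustrate how such token definitions drive a scanner, the following Python sketch (our own illustration, not generated from the ATG above) recognizes the same tokens: \texttt{ident}, \texttt{number}, \texttt{isop}, \texttt{notop} and \texttt{scope}, while skipping comments and whitespace as the \texttt{COMMENTS} and \texttt{IGNORE} sections state.

```python
import re

# A minimal scanner sketch following the TOKENS rules of the listing above.
TOKEN_SPEC = [
    ("comment", r"//[^\n]*"),              # COMMENTS FROM "//" TO lf
    ("ignore",  r"[ \t\r\n]+"),            # IGNORE tab + cr + lf
    ("number",  r"\d+"),                   # number = digit {digit}
    ("ident",   r"[A-Za-z][A-Za-z0-9]*"),  # ident = letter {letter | digit}
    ("scope",   r"->"),                    # scope = "->"
]

def tokenize(source):
    tokens, pos = [], 0
    while pos < len(source):
        for name, pattern in TOKEN_SPEC:
            match = re.match(pattern, source[pos:])
            if match:
                text = match.group()
                if name not in ("comment", "ignore"):
                    # "is" and "not" scan as ident first, then get
                    # their own token kind, as in the TOKENS section
                    kind = {"is": "isop", "not": "notop"}.get(text, name)
                    tokens.append((kind, text))
                pos += match.end()
                break
        else:
            raise SyntaxError(f"unexpected character {source[pos]!r}")
    return tokens
```

For example, \texttt{tokenize("x is 12 -> y // comment")} yields the five tokens \texttt{x}, \texttt{is}, \texttt{12}, \texttt{->} and \texttt{y}; the comment is discarded.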
			
			\subsection{Contextual Analysis} \label{CompilerContextualAnalysis}
				The contextual analysis, as the name indicates, is the analysis of the contextual constraints. The contextual constraints, or static semantics, mainly comprise the \texttt{scope rules} and the \texttt{type rules}. The contextual analysis thereby has two phases: \texttt{Identification}, which concerns the scope rules, and \texttt{Type Checking}, which concerns the type rules.
				
				\subsubsection{Identification - Scope Rules}
					The most important task of the identification phase is to relate each applied occurrence of an identifier to its corresponding declaration.
					If this cannot be done, there is an error in the program and the identification reports it.
					Identification is thereby one of the most performance-heavy phases in a compiler when applied to a longer source program. There are several ways to mitigate these performance issues, though; one of them is using an identification table instead of a tree structure.
					How the identification table is designed depends on which \texttt{block structure} the given language uses. There are three different block structures:
					
					\begin{figure}	
					\centering					
					\includegraphics[scale=0.75]{./Diagrams/IdentificationTable.pdf}
					\caption{Identification Table from: http://www.comsci.us/compiler/images/enter.gif}
					\label{fig::IdentificationTablepix}
					\end{figure}
				
					\begin{enumerate}
						\item Monolithic Block Structure.
							\begin{itemize}
								\item Used in languages like Basic and Cobol.
							\end{itemize}
						\item Flat Block Structure.
							\begin{itemize}
								\item Used in the language Fortran.
							\end{itemize}
						\item Nested Block Structure.
							\begin{itemize}
								\item Used by Pascal, C and Java.
							\end{itemize}
					\end{enumerate}
					
					Figure \ref{fig::IdentificationTablepix} gives an overview of an identification table. The table reflects the overall structure of all three block structures.
					We will only discuss the nested block structure, since this paper aims for a language in the style of C or Java; the monolithic and flat structures are thereby out of scope.
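For the nested block structure, one common design is a stack of scopes searched from the innermost level outwards. The following Python sketch (our own simplification, not the exact table of figure \ref{fig::IdentificationTablepix}) shows the idea.

```python
# Identification table sketch for a nested block structure:
# a stack of scopes, searched from the innermost scope outwards.
class IdTable:
    def __init__(self):
        self.scopes = [{}]            # level 0: the global scope

    def open_scope(self):
        self.scopes.append({})        # entering a block

    def close_scope(self):
        self.scopes.pop()             # leaving a block discards its entries

    def enter(self, ident, attr):
        scope = self.scopes[-1]
        if ident in scope:
            raise NameError(f"'{ident}' declared twice in the same scope")
        scope[ident] = attr           # attr: link to the declaration

    def retrieve(self, ident):
        for scope in reversed(self.scopes):   # innermost first
            if ident in scope:
                return scope[ident]
        raise NameError(f"'{ident}' is not declared")
```

A declaration in an inner block thereby hides an outer one with the same name, and closing the scope makes the outer declaration visible again, which is exactly the scope rule of C-style languages.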
					
					\subsubsection{Decoration}
						When the contextual analysis is finished, its result can be represented by a decorated AST. The difference between a normal and a decorated AST is that in the decorated AST each identifier is linked to its corresponding declaration, and each expression is ``decorated'' with its type \texttt{T}.
				\subsection{Type Checking - Type Rules}
				Type rules define which type each expression in the language has. The type checker has the task of making sure that every type is valid.
				
				Types in most languages are checked bottom-up, from literals and identifiers to larger subexpressions.
				
				Type checking is quite straightforward and will therefore not be explained further.
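The bottom-up checking described above can be sketched in a few lines of Python (the AST shape and the operator set are our own assumptions for illustration): each literal reports its own type, and each operator node checks its subexpressions before reporting a result type.

```python
# Bottom-up type checker sketch over tuple-shaped AST nodes.
def check(expr):
    kind = expr[0]
    if kind == "intlit":
        return "int"
    if kind == "boollit":
        return "bool"
    if kind == "+":                    # int + int -> int
        left, right = check(expr[1]), check(expr[2])
        if left == right == "int":
            return "int"
        raise TypeError("'+' expects two ints")
    if kind == "is":                   # equality test -> bool
        if check(expr[1]) == check(expr[2]):
            return "bool"
        raise TypeError("'is' expects operands of the same type")
    raise ValueError(f"unknown node {kind}")
```

For instance, checking \texttt{(1 + 2) is 3} recurses into the literals first, types the addition as \texttt{int}, and finally types the whole comparison as \texttt{bool}.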
					
			\subsection{Code Generation}\label{CompilerCodeGeneration}
				Now, after the syntax analysis and the contextual analysis, the code of the program is ready to be generated. The biggest issue at this stage is how the identifiers that are declared and used throughout the program are treated. This includes handling declarations: in the case of \texttt{int x = 12}, every applied occurrence of the identifier \texttt{x} has to be replaced by a reference to the storage holding the value 12, bluntly put.
				
				Another important consideration when designing the code generation is the target language, e.g. assembly or even binary code.
				
				\subsubsection{Code Selection}%page250
					Code selection is very important, since it defines which instructions appear in the object code for each phrase. When designing the compiler, this can be guided by code templates. A code template is nothing more than a specified rule deciding what form each kind of phrase takes in the object code.
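As an example of a code template, the sketch below (assuming a hypothetical stack-machine instruction set of our own) expands a \texttt{while} command into object code: evaluate the condition, jump out when it is false, run the body, and jump back.

```python
# Code template sketch for: while E do C
# gen_cond / gen_body are callables producing instruction lists for E and C.
def gen_while(gen_cond, gen_body):
    code = []
    top = len(code)                        # address of the condition
    code += gen_cond()                     # leaves the condition on the stack
    exit_jump = len(code)
    code.append(("JUMPIF_FALSE", None))    # target patched below (backpatching)
    code += gen_body()
    code.append(("JUMP", top))             # back to re-test the condition
    code[exit_jump] = ("JUMPIF_FALSE", len(code))  # jump past the loop
    return code
```

The exit address is not known until the body has been emitted, so the template leaves a placeholder and patches it afterwards, a standard backpatching step.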
				\subsubsection{Storage allocation}%page250
					Storage allocation is about allocating memory to variables. This can be done statically for global variables (static storage allocation), or locally by stack storage allocation, which addresses variables relative to the base of their stack frame.
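Stack storage allocation can be sketched as follows (variable names and sizes are invented for illustration): each local variable in a block gets an offset relative to the frame base, and the running total gives the frame size.

```python
# Stack storage allocation sketch: assign frame-relative offsets.
def allocate(declarations):
    offsets, next_offset = {}, 0
    for name, size in declarations:    # (identifier, size in words)
        offsets[name] = next_offset    # offset relative to the frame base
        next_offset += size
    return offsets, next_offset        # next_offset = total frame size
```

For example, allocating \texttt{x} (1 word), \texttt{y} (2 words) and \texttt{z} (1 word) gives the offsets 0, 1 and 3 and a frame of 4 words.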
				\subsubsection{Register allocation}%page250
					Register allocation is used if the target machine has registers. The allocation of registers for the intermediate data the CPU works on has to be managed; the code generation has to keep memory traffic as low as possible to guarantee maximum performance.
			\subsection{Single- and Multi-Pass Compilation}
				The basic idea behind single- and multi-passing is simple. Single passing, as the name suggests, passes the code a single time through the compiler.
				In this compiler design the contextual analysis and the code generation work hand in hand to generate code ``on the fly''. On the fly means that code is generated before the whole program has been parsed; it happens step by step.
				
				When we talk about multi-passing, we talk about a compiler that runs through the source code multiple times. This enables the compiler to optimize the code on a lower level, for example by not allocating a variable that is never used, or by rewriting a loop over an array more efficiently.
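The dead-variable example can be made concrete with a small two-pass sketch (the statement format is our own invention): a first pass records which variables are actually used, and a second pass then skips the allocation of unused ones. A single-pass compiler cannot do this, because the use sites only appear after the declaration has already been processed.

```python
# Two-pass sketch: pass 1 collects uses, pass 2 drops dead declarations.
def compile_two_pass(stmts):
    used = set()
    for op, name in stmts:             # pass 1: collect used identifiers
        if op == "use":
            used.add(name)
    code = []
    for op, name in stmts:             # pass 2: generate code
        if op == "decl" and name not in used:
            continue                   # dead variable: no allocation emitted
        code.append((op, name))
    return code
```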
				
			
				Figure \ref{SinglePassingCompilerDia} shows a structure diagram of a single-pass compiler. When we compare it to the multi-pass compiler in figure \ref{MultiPassingCompilerDia}, the advantages and disadvantages of the two models become obvious: the single-pass compiler offers much faster compilation, while the multi-pass compiler offers better optimization.
				\begin{figure}
				\centering	
				\subfigure[Single-Passing Compiler]		
				{		
					\includegraphics[scale=0.35]{./Diagrams/StructureDiagramSinglepasscompilin.pdf}
					\label{SinglePassingCompilerDia}
				}
				\subfigure[Multi-Passing Compiler]		
				{	
					\includegraphics[scale=0.35]{./Diagrams/StructureDiagramMultipassCompiler.pdf}
					\label{MultiPassingCompilerDia}
				}
				\end{figure}
			
		\section{Interpreter}
			An interpreter executes the given source code immediately, meaning that it is not compiled into a lower-level language but executed directly.
			There are two types of interpreters to consider: \textbf{Iterative Interpretation} and \textbf{Recursive Interpretation}.
			
			
			\subsection{Iterative Interpretation}
				The most common form of interpretation is iterative interpretation. It has four basic steps, as listed below:
					\begin{enumerate}
						\item Get the next instruction from the user
						\item Analyze the given instruction (checking for options, for example)
						\item Execute the given instruction
						\item Return to step one
					\end{enumerate}
				This loop is repeated until the program is terminated. Good examples of iterative interpretation are command languages like the UNIX shell or MS-DOS.
				
				Iterative interpretation only allows primitive execution of programs, since it can only execute line by line.
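The four-step scheme above can be sketched as a tiny command interpreter (the command set \texttt{set}/\texttt{print} is invented for illustration):

```python
# Iterative interpretation sketch: fetch, analyze, execute, repeat.
def run(commands):
    env, output = {}, []
    for line in commands:             # step 1: fetch the next instruction
        op, *args = line.split()      # step 2: analyze it
        if op == "set":               # step 3: execute it
            env[args[0]] = int(args[1])
        elif op == "print":
            output.append(env[args[0]])
        else:
            raise ValueError(f"unknown command {op!r}")
    return output                     # step 4: the loop repeats until done
```

Note that each line is handled in isolation, which is exactly the limitation mentioned above: there is no way to express composite, nested constructs.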
			
			\subsection{Recursive Interpretation}
				Modern languages like Java are far more high-level than older ones like C. These high-level languages often use commands with subcommands or even sub-subcommands, which makes them highly composite.
				
				Iterative interpretation cannot be used here because of the composite nature of modern languages, so a different model is needed. Recursive interpretation is just that: it uses a different scheme which, as the name suggests, is recursive.
				
				The recursive interpretation has the following scheme:
				\begin{enumerate}
					\item Perform syntactic analysis, outputting an AST.
					\item Perform contextual analysis, converting the AST into a decorated AST.
					\item Execute the whole program (recursively).
				\end{enumerate}
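Step 3 of this scheme can be sketched as a recursive evaluator over a toy expression AST (the node shapes are our own assumption): each call interprets a node by recursively interpreting its subtrees first, so nested, composite constructs pose no problem.

```python
# Recursive interpretation sketch: evaluate an AST node and its subtrees.
def evaluate(node, env):
    kind = node[0]
    if kind == "num":
        return node[1]                                    # literal value
    if kind == "var":
        return env[node[1]]                               # identifier lookup
    if kind == "+":
        return evaluate(node[1], env) + evaluate(node[2], env)
    if kind == "*":
        return evaluate(node[1], env) * evaluate(node[2], env)
    raise ValueError(f"unknown node {kind}")
```

For example, \texttt{1 + x * 3} with \texttt{x = 2} evaluates the multiplication subtree before the addition, yielding 7.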
				
		\section{Our choice} \label{Our choice}
			For a game board language we will be using an interpreter design over the compiler design. The reason for doing so is that the interpreter is simpler to make, because we don't have to do any code generation. But it is going to cost more of the machine to run, but it will sute our means for the game board language.
			The interpreter is going to be a recursive interpretation, bacause it fit our idear of a programming laungage where the order of which action etc. are created has nothing to say. On a side note it also fits our SPO course because it has been a bit focued on multipass compilers, and a lot of what we have learend during that can be applyed to the interpreter.
			
			
			
			
			
