

The lexicon defines the words that the grammar-based parser knows. All words are divided into word categories, for example nouns, verbs, adjectives and adverbs. The word categories also carry a semantic categorization, such as ingredients and positive verbs ("like", "desire", "enjoy" etc.). This was inspired by an article on semantic grammars in the Encyclopedia of Artificial Intelligence \cite{shapiro} (more on this in the semantic section). 

To be able to capture the diversity of a natural language we need to be able to handle words in many different forms. For example, the parser should be able to handle nouns in both singular and plural, and verbs with different inflections. To accomplish this we have chosen to make rules that define the lexicon. A rule that defines regular nouns looks like:

\begin{center}
	\textbf{REGN["banana","apple","tomato"];}
\end{center}

This says that the words \textit{banana}, \textit{apple} and \textit{tomato} belong to the word category REGN, regular nouns. A noun in plural form is associated with the same noun in singular form using what we call a map expression:

\begin{center}
 	\textbf{REGN:REGNPL[ ( REGN, REGN[-1] != x \& REGN[-1] != o ? REGN + s ),
        ( REGN, REGN[-1] == x $\mid$ REGN[-1] == o ? REGN + es )];}
\end{center}

This rule defines a relationship between singular (REGN) and plural (REGNPL). This is defined as REGN:REGNPL in the rule. The next part of the rule says that a REGN that does \textit{not} end with the letter \textit{x} or \textit{o} is converted to a REGNPL by adding \textit{s} to itself. For instance, \textit{banana} becomes \textit{bananas}. The last part of the rule tells us that a REGN that \textit{does} end with \textit{x} or \textit{o} is converted by adding \textit{es} to itself, thus \textit{tomato} becomes \textit{tomatoes}.
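The conditional logic of this map expression can be sketched as ordinary code. The function below is our own illustration of how the two branches of the rule could be evaluated; it is not the parser's actual implementation.

```python
def pluralize_regular(noun: str) -> str:
    """Evaluate the REGN:REGNPL map expression on a regular noun."""
    # Second branch: REGN[-1] == x | REGN[-1] == o  ->  REGN + es
    if noun[-1] in ("x", "o"):
        return noun + "es"
    # First branch: REGN[-1] != x & REGN[-1] != o  ->  REGN + s
    return noun + "s"

print(pluralize_regular("banana"))  # bananas
print(pluralize_regular("tomato"))  # tomatoes
```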

This cannot be applied to irregular nouns, since their plural forms do not follow regular patterns. To capture irregular nouns, all irregular forms and their relation to each other are stated explicitly:

\begin{center}
 	\textbf{IRREGN:IRREGNPL[("thief", "thieves"),("mouse", "mice")];}
\end{center}

With these different types of rules all the words and relations that the parser needs are defined. The lexicon is implemented using hashmaps, which allows us to quickly convert words between different forms.
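As a sketch of the hashmap-based design, the lexicon can be stored as a set of dictionaries: one mapping each word to its category, and one per direction of the singular/plural relation, so both lookup and conversion are constant-time. The class and method names below are our own illustration, not the actual implementation.

```python
class Lexicon:
    """Illustrative hashmap-based lexicon (names are assumptions)."""

    def __init__(self):
        self.categories = {}   # word -> category, e.g. "banana" -> "REGN"
        self.to_plural = {}    # singular form -> plural form
        self.to_singular = {}  # plural form -> singular form

    def add(self, category, singular, plural):
        # A rule like REGN:REGNPL relates each singular form to its plural.
        self.categories[singular] = category
        self.categories[plural] = category + "PL"
        self.to_plural[singular] = plural
        self.to_singular[plural] = singular


lex = Lexicon()
lex.add("REGN", "tomato", "tomatoes")   # derived by the map expression
lex.add("IRREGN", "mouse", "mice")      # stated explicitly as a pair
print(lex.to_plural["tomato"])   # tomatoes
print(lex.categories["mice"])    # IRREGNPL
```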
