Some results exceeded our expectations while others were disappointing. The most positive outcome was how powerful the map expressions in the lexicon turned out to be: describing the relationship between lactose and milk, and giving the agent some knowledge about allergies, took only a matter of minutes. But despite all the work and effort put into this project, the agent still has a very limited ability to understand written input. We have identified several possible flaws in our approach. Identifying these flaws has been very instructive, and from that point of view the project has been a success!
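To give an idea of why the map expressions were so convenient, the following is a minimal Python sketch of mapping a substance such as lactose to the ingredients that contain it. All identifiers here are our own, chosen for illustration; they are not the actual names from the lexicon file.

```python
# Illustrative sketch of a lexicon "map expression"; the names LEXICON
# and ingredients_to_exclude are hypothetical, not the project's own.
LEXICON = {
    # lactose occurs in these ingredients
    "lactose": {"contained_in": ["milk", "cream", "butter"]},
}

def ingredients_to_exclude(substance):
    """Expand an allergy substance to the ingredients the agent must avoid."""
    entry = LEXICON.get(substance, {})
    # a word with no mapping is treated as an ingredient in itself
    return entry.get("contained_in", [substance])

print(ingredients_to_exclude("lactose"))  # ['milk', 'cream', 'butter']
```

With a table like this in place, stating that a user is lactose intolerant immediately excludes every listed dairy ingredient, which is why the allergy knowledge took only minutes to add.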

One thing that contributed to the "lack of intelligence" of the agent was that a tremendous amount of time was spent on basic tasks such as loading and parsing the file containing the lexicon and the grammatical rules. We concluded that the choice of programming language, and the fact that no existing tools (such as GF, the Grammatical Framework) were used, were not the best decisions we could have made. It would probably have been better to use Python with the Natural Language Toolkit, or Prolog, even if it meant learning a new programming language from scratch. More time could then have been spent on developing the knowledge base and a more complete set of grammars and semantics.

A more important cause is the approach itself. First of all, the use of a lexicon showed its limitations quite soon. Every single word that the user may write to the NL interface has to be listed in the lexicon, and such a list is close to impossible to maintain, since new words and abbreviations appear in natural languages all the time. Adding new recipes to the database also requires that all ingredients in the recipe be listed in the lexicon. This introduces undesirable redundancy and maintenance overhead.

Another approach would be to leave out all words that carry no meaningful semantics (nouns that are not ingredients, for instance), i.e.\ to use incomplete grammars as described in \cite{polinsky}. But this would require changes to the parsing algorithm and could result in more ambiguity (more possible parse trees). The algorithm would have to make \textit{guesses} about which word category a word belongs to. A good algorithm would learn new words as they appear in the input.
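Such guessing could, for instance, fall back on the most common open word class and remember its decisions. The sketch below is entirely hypothetical (the project implemented neither this algorithm nor this language), and the lexicon contents and category names are invented:

```python
# Hypothetical sketch of category guessing for out-of-lexicon words.
KNOWN_CATEGORIES = {"banana": "ingredient", "like": "positive_verb"}
learned = {}

def guess_category(word):
    """Look the word up; otherwise guess 'noun' and remember the guess."""
    if word in KNOWN_CATEGORIES:
        return KNOWN_CATEGORIES[word]
    if word not in learned:
        learned[word] = "noun"   # nouns are the largest open word class
    return learned[word]

print(guess_category("banana"))   # 'ingredient'
print(guess_category("quinoa"))   # 'noun' (guessed and remembered)
```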

Second, the grammar imposes limitations on the understanding of the language. As soon as the first grammatical rule is written, all sentences that do not follow this rule become invalid. Understanding more formulations requires exponentially more grammatical rules \cite{russel_norvig}, which makes the rule file hard to maintain. In fact, if the semantics did not depend on the grammar, there would be no need to write grammatical rules at all! Interpreting negations and allergy information somewhat correctly is the only important role the grammar and the semantics play in this project.

Third, compositional semantics imposes limitations too, although these are related to the problems with the grammar. Different parses result in different combinations of semantic functions. Some of these combinations turned out to be invalid according to the type checker (which was invoked at run time). In order to make the program understand such combinations, the type checking was removed and somewhat questionable interpretations of these combinations were implemented.

For instance, the only reasonable inputs to the function \textit{inc()} (include) are \textit{objects} created by the function \textit{obj()}. An object can be an ingredient or a meal category. But in several cases invalid constructions like \textit{inc(inc(), obj())} were created. The sentence "I like to have banana in my food" generates this combination, since both "like" and "have" are interpreted as positive verbs and therefore generate \textit{inc()}-expressions. Another invalid combination is generated by the sentence "I have lactose intolerance", whose semantics become \textit{inc(exc(obj(1, ingr(), milk)))}.

One way to solve this problem would be to write the grammar and the semantic rules with extreme care. The way chosen here, however, was to allow these combinations and interpret them in some way instead. The agreed solution was to call the function that directly contains the \textit{obj()}-statement. In the example with the sentence "I have lactose intolerance", the function \textit{exc()} is called, and not \textit{inc()}. Whether this behaviour is correct or not depends on the actual sentence, but as far as the testing went it always generated the expected results.
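The workaround can be sketched as follows. This is our own reconstruction in Python; the term representation (nested tuples) and the function \textit{normalize()} are invented for illustration and differ from the project's actual implementation, and the sketch only handles unary wrappers, not two-argument constructions like \textit{inc(inc(), obj())}:

```python
# Reconstruction of the workaround: when semantic functions are nested,
# keep the function that directly wraps obj() and drop the outer layers.

def obj(ident, kind, name):
    return ("obj", ident, kind, name)

def inc(term):
    return ("inc", term)

def exc(term):
    return ("exc", term)

def normalize(term):
    """Peel off wrappers until the one directly around obj() remains."""
    inner = term[1]
    if inner[0] == "obj":
        return term              # already the innermost wrapper
    return normalize(inner)      # discard this outer inc()/exc() layer

# "I have lactose intolerance" -> inc(exc(obj(1, 'ingr', 'milk')))
sem = inc(exc(obj(1, "ingr", "milk")))
print(normalize(sem))  # ('exc', ('obj', 1, 'ingr', 'milk'))
```

In this example the outer \textit{inc()} is discarded, so the allergy is correctly treated as an exclusion.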

A simple keyword search algorithm turned out to be much easier to implement (a matter of hours compared to several weeks), did not require a complicated rule file and, as a consequence, did not suffer from any of the previously mentioned problems. It did, however, fail to handle negations and allergy information correctly.
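A keyword matcher of this kind fits in a few lines, which also makes its weakness obvious. The recipe data below is invented for illustration:

```python
# Illustrative keyword search; the recipe database is made up.
RECIPES = {
    "banana split": ["banana", "ice cream", "chocolate"],
    "fruit salad": ["banana", "apple", "orange"],
}

def keyword_search(sentence):
    """Return every recipe that shares a word with the input sentence."""
    words = set(sentence.lower().split())
    return [name for name, ingredients in RECIPES.items()
            if words & set(ingredients)]

print(keyword_search("I like to have banana in my food"))
# ['banana split', 'fruit salad'] -- but "I do NOT like banana" would
# match just the same, which is exactly the negation problem
```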

Adding hard-coded checks and rules would improve the keyword algorithm, but just as with the grammar, the number of such rules can grow quickly and make the source code just as complex as the corresponding rule file would be.

To summarize, we conclude that a combination of the two methods, where the keyword algorithm is used as a fallback, works quite well and gives acceptable results. With the knowledge gained throughout the project, we have realized that more work on the semantics would have been needed, and that a solution using incomplete grammars (as mentioned previously) would have been an interesting approach to try.
