%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Answers for 6.863 Assignment 2
% Created by Olga Wichrowska 06/2010
% Modified by rcb 9/1/12
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[10pt]{article}

\setlength{\topmargin}{-0.8in}
\setlength{\oddsidemargin}{0in}
\setlength{\textwidth}{6.5in}
\setlength{\textheight}{9.25in}

\usepackage{sectsty}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{subfigure}

\sectionfont{\large}
\subsectionfont{\normalsize}
\subsubsectionfont{\small}

\newcommand{\handout}[3]{
  \renewcommand{\thepage}{#1 - \arabic{page}}
  \noindent
  \begin{center}
    \hspace*{-0.25in}\framebox[6.5in]{
      \vbox{
        \hbox to 6.25in { {\bf 6.863J/9.611J: Natural Language Processing } 
                          \hfill Prof. Robert C. Berwick }
        \vspace{4mm}
        \hbox to 6.25in { {\Large \hfill #1  \hfill} }
        \vspace{2mm}
        \hbox to 6.25in { {\it #2 \hfill #3} }
        }
      }
  \end{center}
  \vspace*{4mm}
}

\begin{document}
\handout{Assignment 2: Context-free grammar writing}{My posse consists of: [COLLABORATORS]}{[My name]}

\begin{enumerate}

\item {\bf Hand in the output of a typical sample run of 10
  sentences from your random sentence generator.  Be sure to attach
  the code so we can test it.}
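For reference, one possible shape for such a generator is sketched below; the grammar-file format (\verb|weight LHS RHS ...|) and the function names here are illustrative assumptions, not a required interface.

\begin{verbatim}
import random
from collections import defaultdict

def load_grammar(lines):
    """Parse lines of the form 'weight LHS RHS1 RHS2 ...' into a rule table."""
    rules = defaultdict(list)             # LHS -> list of (weight, RHS tuple)
    for line in lines:
        line = line.split('#')[0].strip() # drop comments and blank lines
        if not line:
            continue
        weight, lhs, *rhs = line.split()
        rules[lhs].append((float(weight), tuple(rhs)))
    return rules

def generate(rules, symbol='ROOT'):
    """Expand a symbol recursively; a symbol with no rules is a terminal."""
    if symbol not in rules:
        return [symbol]
    weights, options = zip(*rules[symbol])
    rhs = random.choices(options, weights=weights)[0]
    return [word for sym in rhs for word in generate(rules, sym)]
\end{verbatim}

Calling \verb|generate| ten times in a loop and joining each result with spaces would produce a sample run of the kind requested.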

\item\begin{enumerate}
\item {\bf Why does your program generate so many long sentences?
    Specifically, what grammar rule is responsible, and why? What is
    special about this rule?}

\item The grammar allows multiple adjectives, as in \verb|the fine perplexed pickle|. 
{\bf Why do your program's generated sentences exhibit this so rarely?}

\item {\bf Which numbers must you modify to fix the problems in 2(a) and
    2(b), making the sentences shorter and the adjectives more
    frequent? (Check your answer by running your new generator and
    showing that your changes work.)}

\item {\bf What other numeric adjustments can you make to the grammar in
  order to favor more natural sets of sentences? Experiment. Hand in
  your grammar file as a file named \verb+grammar2+, with comments that
  motivate your changes, together with 10 sentences generated by the
  grammar.}

\end{enumerate}
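As a reminder of the mechanism involved: the weighted-grammar format attaches a number to each rule, and the generator chooses among a nonterminal's rules in proportion to those numbers. A hypothetical fragment (the rule names and weights are illustrative only, not a prescribed answer):

\begin{verbatim}
# weight  LHS   RHS
5         NP    Det Noun    # plain noun phrases dominate
1         NP    NP PP       # recursion stays possible, but becomes rare
3         Noun  Adj Noun    # adjectives become more common
\end{verbatim}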
\item {\bf Modify the grammar into a single new grammar that can also generate the types of
  phenomena illustrated in the following sentences.}
\begin{enumerate}
\item \verb| Sally ate a sandwich .|
\item \verb| Sally and the president wanted and ate a sandwich .|
\item \verb| the president sighed .|
\item \verb| the president thought that a sandwich sighed .|
\item \verb| that a sandwich ate Sally perplexed the president .|
\item \verb| the very very very perplexed president ate a sandwich .|
\item \verb| the president worked on every proposal on the desk .|
\end{enumerate}

\noindent
{\bf Briefly discuss your modifications to the grammar. Hand
  in the new grammar (commented) as a file named \verb|grammar3| and about 10
  random sentences that illustrate your modifications.}

\item {\bf Give your program an option ``\verb|-t|'' that makes it produce
  trees instead of strings. Generate about 5 more random
sentences, in tree format. Submit them as well as the commented code
for your program.} 
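A minimal sketch of tree-format output, assuming an in-memory rule table (the representation below is an assumption, not a required design):

\begin{verbatim}
import random

# Hypothetical rule table: LHS -> list of (weight, RHS tuple);
# any symbol without an entry is treated as a terminal word.
RULES = {
    'ROOT': [(1.0, ('S', '.'))],
    'S':    [(1.0, ('NP', 'VP'))],
    'NP':   [(1.0, ('Det', 'Noun'))],
    'VP':   [(1.0, ('Verb', 'NP'))],
    'Det':  [(1.0, ('the',))],
    'Noun': [(1.0, ('president',)), (1.0, ('sandwich',))],
    'Verb': [(1.0, ('ate',))],
}

def generate_tree(rules, symbol='ROOT'):
    """Expand a symbol and return a bracketed derivation-tree string."""
    if symbol not in rules:
        return symbol                     # terminal word
    weights, options = zip(*rules[symbol])
    rhs = random.choices(options, weights=weights)[0]
    return '(%s %s)' % (symbol, ' '.join(generate_tree(rules, s) for s in rhs))

print(generate_tree(RULES))
\end{verbatim}

The same recursion that returns word lists for plain output can return bracketed strings for \verb|-t| output; only the base and combine steps change.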

\item When I ran my sentence generator on \verb|grammar|, it produced
  the sentence:
\begin{verbatim}
every sandwich with a pickle on the floor wanted a president .
\end{verbatim}
\noindent
This sentence is ambiguous according to the grammar, because it could
have been derived in either of two ways.
\begin{enumerate}
\item  One derivation is as follows; {\bf what is the other one?}

\bigskip
\begin{verbatim}
(START (ROOT (S (NP (NP (NP (Det every)
                            (Noun sandwich))
                        (PP (Prep with)
                            (NP (Det a)
                                (Noun pickle))))
                    (PP (Prep on)
                        (NP (Det the)
                            (Noun floor))))
                (VP (Verb wanted)
                    (NP (Det a)
                        (Noun president))))
             .))
\end{verbatim}
\item {\bf Is there any reason to care which derivation was used?}
\end{enumerate}

\item\begin{enumerate}
 \item {\bf Does the parser
  always recover the original derivation that was ``intended'' by
  \verb|randsent|? Or does it ever ``misunderstand'' by finding an alternative
  derivation instead?  Discuss. (This is the only part of question 6a
  that you need to answer.)}

\item {\bf How many ways are there to analyze the following Noun Phrase (NP)
  under the original grammar?  Explain your answer.}

\item By mixing and matching the commands above, generate a bunch of
  sentences from \verb|grammar|, and find out how many different parses they
  have. Some sentences will have more parses than others. {\bf Do you
  notice any patterns? Try the same exercise with \verb|grammar3|.}

\begin{enumerate} 
\item {\bf Probability analysis of the first sentence: Why is \verb|p(best_parse)| so small?  What probabilities were
  multiplied together to get its value of 5.144032922e-05?}\\
\noindent
\verb|p(sentence)| is the probability that \verb|randsent| would
  generate this sentence. {\bf Why is it equal to \verb|p(best_parse)|?}\\
\noindent
{\bf Why is \verb+p(best_parse|sentence)+=1?}

\item {\bf Probability analysis of the second sentence:  \\
\noindent
What does it mean that \verb+p(best_parse|sentence)+ is 0.5 in this
case? \\
\noindent
Why would it be {\it exactly} 0.5?}

\item {\bf Cross-entropy of the two-sentence corpus. Explain exactly
    how the numbers below were calculated from the two sets
    of numbers above, that is, from the parse of each of the two
    sentences.}

\item {\bf Based on the above numbers, what {\it perplexity} per word did
  the grammar achieve on this corpus?}
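As a reminder of the relationship involved: perplexity per word is $2$ raised to the cross-entropy in bits per word. A small sketch of the arithmetic (only the first probability comes from this handout; the second probability and both word counts are placeholders):

\begin{verbatim}
import math

# (p(sentence), token count) per sentence; values other than the first
# probability are placeholders, not numbers from this assignment.
corpus = [
    (5.144032922e-05, 12),
    (3.0e-06, 12),
]

total_bits  = sum(-math.log2(p) for p, _ in corpus)
total_words = sum(n for _, n in corpus)
cross_entropy = total_bits / total_words   # bits per word
perplexity = 2 ** cross_entropy            # "effective vocabulary size"
print(cross_entropy, perplexity)
\end{verbatim}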

\item {\bf The compression program might not be able to compress the
    following two-sentence corpus very well.
    Why not? What cross-entropy does the grammar achieve this time?
    Try it and explain.}  (The new two-sentence corpus is given below.)

\item {\bf How well does {\tt grammar2} do on average at predicting
    word sequences that it generated itself?  Please provide an answer
    in bits per word.  State the command (a Unix pipe) that you used
    to compute your answer.}

\item If you generate a corpus from {\tt grammar2}, then {\tt
    grammar2} should on average predict this corpus better than {\tt
    grammar} or {\tt grammar3} would. In other words, the entropy will
  be {\it lower} than the cross-entropies. {\bf Check whether this is true:
    compute the numbers and discuss.}
\end{enumerate}
\end{enumerate}

\item  {\bf Think about all of the following phenomena, and extend your grammar
  from question 3 to handle them. (Be sure to handle the particular
  examples suggested.)  Call your resulting grammar
  \verb|grammar4| and be sure to include it in your write-up along with examples of it
  in action on new sentences like the ones illustrated below.}

\begin{enumerate}
\item {\it Yes-no questions}.

\item {\it WH-word questions.} 

\end{enumerate}
\end{enumerate}
\end{document}

