# 1. Implement IBM Model I and its EM training, and train it on the given corpus
# 2. Implement the Viterbi alignment algorithm (maximum-probability alignment for every sentence pair) and apply it to the training corpus
# 3. Extract the translation table from the Viterbi alignments

# 4. Evaluate your alignments against those produced by Giza++ (aligned by the assistants for your use). Report alignment precision and recall.
# 5. Implement a simple improvement of your own choice over IBM Model I that reduces the effect of one of its modelling assumptions on the alignments
# 6. Write and submit a PDF report of at most 2 A4 pages in ACL article format, containing:
#       - Introduction
#       - IBM Model I
#       - EM training formula
#       - A description of your improvement over IBM Model I
#       - Experiments section with results
#       - Conclusions
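# Tasks 1-3 could be sketched as below. This is a minimal illustrative
# sketch, not the provided `corpus` module: `train_ibm1`, `viterbi_align`,
# and the toy bitext are hypothetical names introduced here, and the sketch
# assumes sentence pairs are given as lists of tokens.

```python
from collections import defaultdict

def train_ibm1(bitext, iterations=10):
    """EM training for IBM Model I.

    bitext: list of (source_tokens, target_tokens) pairs.
    Returns t, a dict mapping (source_word, target_word) to t(f|e)."""
    # Initialise t(f|e) uniformly over co-occurring word pairs;
    # 'NULL' is the usual empty target word of IBM Model I.
    cooc = defaultdict(set)
    for f_sent, e_sent in bitext:
        for e in e_sent + ['NULL']:
            for f in f_sent:
                cooc[e].add(f)
    t = defaultdict(float)
    for e, fs in cooc.items():
        for f in fs:
            t[(f, e)] = 1.0 / len(fs)

    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f, e)
        total = defaultdict(float)   # expected counts c(e)
        # E-step: collect expected counts from all sentence pairs.
        for f_sent, e_sent in bitext:
            e_full = e_sent + ['NULL']
            for f in f_sent:
                z = sum(t[(f, e)] for e in e_full)  # normalisation
                for e in e_full:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        # M-step: re-estimate the translation table.
        for (f, e) in count:
            t[(f, e)] = count[(f, e)] / total[e]
    return t

def viterbi_align(f_sent, e_sent, t):
    """Viterbi alignment under IBM Model I: since alignment links are
    independent, each source word simply links to its best target word."""
    alignment = []
    for j, f in enumerate(f_sent):
        best_i, best_p = None, 0.0
        for i, e in enumerate(e_sent):
            if t[(f, e)] > best_p:
                best_i, best_p = i, t[(f, e)]
        if best_i is not None:
            alignment.append((j, best_i))
    return alignment
```

# The Viterbi translation table of task 3 would then be read off by keeping,
# for each target word, the counts of the source words it was linked to.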

import corpus as C

# Load the two sides of the parallel corpus (Dutch and English).
corpusNL = C.Corpus('../../data/lab1/corpus.nl')
corpusEN = C.Corpus('../../data/lab1/corpus.en')

# Count for the specific word pair ('tot', 'finally').
print(corpusNL.get_allignment_count(corpusEN, 'tot', 'finally'))
# Count for 'tot' over all English words.
print(corpusNL.get_allignment_count(corpusEN, 'tot'))
# Save the computed counts so they need not be recomputed.
# (The 'allignment' spelling matches the corpus module's API.)
corpusNL.allignment_count.save()
print(corpusNL.get_total_word_count())
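# For task 4, precision and recall against the Giza++ alignments could be
# computed as below. This is an illustrative sketch: `alignment_prf` is a
# hypothetical helper, and it assumes both alignments are given as sets of
# (source_index, target_index) links.

```python
def alignment_prf(predicted, gold):
    """Precision, recall and F1 of predicted alignment links against
    gold (reference) links, both sets of (source_index, target_index)."""
    predicted, gold = set(predicted), set(gold)
    correct = len(predicted & gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

# Corpus-level scores would sum the correct/predicted/gold link counts over
# all sentence pairs before dividing, rather than averaging per sentence.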