{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Week 2\n",
    "\n",
     "This week we cover three topics in NLP:\n",
    "1. Morphology\n",
    "2. Edit Distance\n",
    "3. Language Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 1. Morphology\n",
    "\n",
     "Morphology (形态学) is the study of the ways that words are built up from smaller meaningful units called morphemes. We can usefully divide morphemes into two classes:\n",
    "+ Stems (词干): The core meaning-bearing units\n",
    "+ Affixes (词缀): Bits and pieces that adhere to stems to change their meanings and grammatical functions\n",
    "\n",
     "We can also divide morphology into two classes:\n",
     "+ Inflectional (屈折): the affix doesn't change the word class (walk, walking)\n",
     "+ Derivational (派生): the meaning, and often the word class, changes (clue, clueless; compute, computerization)\n",
    "\n",
     "One technique here is called stemming (词干还原), which is used in IR tasks such as matching. One well-known stemmer is the Porter Stemmer: it is simply based on a staged series of rewrite rules that strip suffixes, and it doesn't guarantee that the resulting stem is a real word.\n",
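     "\n",
     "A minimal sketch of suffix-stripping stemming (a toy illustration with made-up rules, far simpler than the real Porter rule set):\n",
     "\n",
     "```python\n",
     "# Toy suffix stripper: apply the first matching rewrite rule.\n",
     "# These rules are illustrative only, not the Porter rules.\n",
     "RULES = [('sses', 'ss'), ('ies', 'i'), ('ing', ''), ('ed', ''), ('s', '')]\n",
     "\n",
     "def stem(word):\n",
     "    for suffix, replacement in RULES:\n",
     "        if word.endswith(suffix):\n",
     "            return word[:len(word) - len(suffix)] + replacement\n",
     "    return word\n",
     "\n",
     "print(stem('walking'))  # walk\n",
     "print(stem('ponies'))   # poni -- not a real word, as noted above\n",
     "```\n",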
    "\n",
     "Another technique is segmentation: splitting running text into words, and into sentences. Take care! You cannot segment simply on periods, white space, or punctuation (abbreviations such as \"Dr.\" and numbers such as \"3.14\" contain periods, for example).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Edit Distance\n",
    "\n",
     "Edit distance is a technique for measuring the distance between two strings (for example, two sentences). It is based on three basic edit operations: insertion, deletion, and substitution.\n",
    "\n",
     "By definition, the edit distance is the minimum total cost of edit operations needed to transform one string into the other. Computing it involves two steps:\n",
     "1. Alignment: align the two strings.\n",
     "2. Edit distance: calculate the cost of the aligned strings.\n",
    "\n",
     "As we can see, this is a search problem, and its search space is enormous. However, it can be solved by dynamic programming with time complexity $O(mn)$.\n",
    "\n",
     "As with the classic dynamic programming problem LCS (longest common subsequence), the Bellman equation here is similar:\n",
    "\n",
    "\\begin{align*}\n",
    "    D(i,j) = \\min\n",
    "    \\begin{cases}\n",
    "        D(i-1,j) + 1\\\\\n",
    "        D(i,j-1) + 1 \\\\\n",
    "        D(i-1,j-1) + 2 \\qquad\\text{when $X(i)\\ne Y(j)$} \\\\\n",
    "        D(i-1,j-1) \\qquad\\text{when $X(i) = Y(j)$} \\\\\n",
    "    \\end{cases}\n",
    "\\end{align*}\n",
    "\n",
    "And the boundary condition is:\n",
    "\\begin{align*}\n",
    "    D(i,0) = i\\\\\n",
    "    D(0,j) = j\n",
    "\\end{align*}\n",
     "From the Bellman equation above, the time complexity is $O(mn)$, and the space complexity is $O(mn)$ as well (reducible to $O(\\min(m,n))$ by keeping only two rows, if the alignment itself is not needed).\n",
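     "\n",
     "The recurrence above translates directly into Python (a minimal sketch, using substitution cost 2 on a mismatch, as in the equation):\n",
     "\n",
     "```python\n",
     "def min_edit_distance(x, y):\n",
     "    # D[i][j] = minimum edit distance between x[:i] and y[:j]\n",
     "    m, n = len(x), len(y)\n",
     "    D = [[0] * (n + 1) for _ in range(m + 1)]\n",
     "    for i in range(m + 1):\n",
     "        D[i][0] = i  # boundary: delete all i characters of x\n",
     "    for j in range(n + 1):\n",
     "        D[0][j] = j  # boundary: insert all j characters of y\n",
     "    for i in range(1, m + 1):\n",
     "        for j in range(1, n + 1):\n",
     "            sub = 0 if x[i - 1] == y[j - 1] else 2\n",
     "            D[i][j] = min(D[i - 1][j] + 1,        # deletion\n",
     "                          D[i][j - 1] + 1,        # insertion\n",
     "                          D[i - 1][j - 1] + sub)  # substitution or match\n",
     "    return D[m][n]\n",
     "\n",
     "print(min_edit_distance('kitten', 'sitting'))       # 5\n",
     "print(min_edit_distance('intention', 'execution'))  # 8\n",
     "```\n",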
    "\n",
     "Moreover, we can generalize this to a weighted edit distance:\n",
    "\\begin{align*}\n",
    "    D(i,j) = \\min\n",
    "    \\begin{cases}\n",
    "        D(i-1,j) + del[x(i)]\\\\\n",
    "        D(i,j-1) + ins[y(j)] \\\\\n",
    "        D(i-1,j-1) + sub[x(i),y(j)]\n",
    "    \\end{cases}\n",
    "\\end{align*}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Language Model\n",
     "We model language as a probabilistic problem: assign a probability $P(W) = P(w_1,\\dots,w_n)$ to a word sequence. By the chain rule, $P(W) = P(w_1,\\dots,w_n) = \\prod_{i=1}^nP(w_i|w_1,\\dots,w_{i-1})$.\n",
    "\n",
     "Trivially, if the vocabulary is $V$ and the longest sentence length is $n$, this model is tremendously large: on the order of $|V|^{n+1}$ parameters, and the data are sparse in that space. Therefore we make the Markov assumption $P(w_n|w_1^{n-1})\\approx P(w_n|w_{n-N+1}^{n-1})$, whose complexity is $|V|^N$ for an N-gram model.\n",
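     "\n",
     "A bigram ($N = 2$) model estimated by relative-frequency counting on a hypothetical toy corpus (`<s>` and `</s>` are made-up sentence-boundary markers):\n",
     "\n",
     "```python\n",
     "from collections import Counter\n",
     "\n",
     "# Hypothetical toy corpus; <s> / </s> mark sentence boundaries.\n",
     "corpus = [['<s>', 'i', 'am', 'sam', '</s>'],\n",
     "          ['<s>', 'sam', 'i', 'am', '</s>'],\n",
     "          ['<s>', 'i', 'like', 'ham', '</s>']]\n",
     "\n",
     "unigrams = Counter(w for sent in corpus for w in sent)\n",
     "bigrams = Counter((sent[i], sent[i + 1])\n",
     "                  for sent in corpus for i in range(len(sent) - 1))\n",
     "\n",
     "def p_bigram(w, prev):\n",
     "    # MLE: P(w | prev) = count(prev, w) / count(prev)\n",
     "    return bigrams[(prev, w)] / unigrams[prev]\n",
     "\n",
     "print(p_bigram('i', '<s>'))  # 2/3: two of three sentences start with 'i'\n",
     "print(p_bigram('am', 'i'))   # 2/3\n",
     "```\n",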
    "\n",
     "The parameters of this model are estimated by simple counting, i.e., maximum likelihood, the same relative-frequency estimation used in the famous Naive Bayes algorithm.\n",
    "\n",
     "From Zipf's Law we know that a small number of events occur with high frequency, while a large number of events occur with low frequency. As a result, our estimates are sparse: some of the zeros in the table are real zeros, but others are simply low-frequency events we haven't seen yet. After all, *ANYTHING CAN HAPPEN*.\n",
    "\n",
     "Thus, we introduce a technique such as Laplace smoothing to address this problem:\n",
    "+ MLE estimation: $P(w_i)= \\frac{c_i}{N}$ \n",
    "+ Laplace estimation: $P(w_i)= \\frac{c_i + 1}{N + |V|}$ \n",
    "\n",
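     "A quick comparison of the two estimates on a hypothetical toy count table:\n",
     "\n",
     "```python\n",
     "# Hypothetical unigram counts; 'mat' is an unseen word.\n",
     "counts = {'the': 5, 'cat': 3, 'sat': 2, 'mat': 0}\n",
     "N = sum(counts.values())  # total tokens = 10\n",
     "V = len(counts)           # vocabulary size = 4\n",
     "\n",
     "def p_mle(w):\n",
     "    return counts[w] / N\n",
     "\n",
     "def p_laplace(w):\n",
     "    # add-one smoothing: no event keeps probability zero\n",
     "    return (counts[w] + 1) / (N + V)\n",
     "\n",
     "print(p_mle('mat'))      # 0.0\n",
     "print(p_laplace('mat'))  # 1/14, roughly 0.071\n",
     "```\n",
     "\n",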
     "Another important task in language modeling is evaluating our N-gram model. A crucial metric here is *perplexity*, defined (for a bigram model) as:\n",
    "\\begin{align*}\n",
    "    PP(W) = \\sqrt[N]{ \\prod_{i=1}^N\\frac{1}{P(w_i|w_{i-1})} }\n",
    "\\end{align*}\n",
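     "\n",
     "Computing perplexity for a handful of hypothetical bigram probabilities (done in log space for numerical stability; lower is better):\n",
     "\n",
     "```python\n",
     "import math\n",
     "\n",
     "# Hypothetical conditional probabilities P(w_i | w_{i-1}) on a test sentence.\n",
     "probs = [0.2, 0.1, 0.25, 0.5]\n",
     "\n",
     "# PP(W) = (prod 1/p) ** (1/N), computed via logs to avoid underflow\n",
     "N = len(probs)\n",
     "pp = math.exp(-sum(math.log(p) for p in probs) / N)\n",
     "print(round(pp, 3))  # 4.472\n",
     "```\n",
     "\n",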
     "*The best language model is one that best predicts an unseen test set.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "torch",
   "language": "python",
   "name": "torch"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
