{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Regular Expressions (正则表达式)\n",
    "By definition, a regular expression (正则表达式) is a formula in a special language that specifies simple classes of strings.\n",
    "\n",
    "A regular-expression search requires a pattern (模式) that we want to search for and a corpus (语料库) of texts to search through.\n",
    "\n",
    "Below we introduce several useful techniques for applying regular expressions in practice."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.1 Disjunction\n",
    "The disjunction operator uses square brackets: [A-Z], [a-z], [0-9], [wW]oodchuck, and so on.\n",
    "\n",
    "Disjunction also supports negation, written as [^Ss], which matches any character except those inside the brackets.\n",
    "\n",
    "*Warning: the caret means negation only when it is the first character inside [ ].*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<re.Match object; span=(0, 1), match='D'>\n",
      "<re.Match object; span=(0, 1), match='m'>\n",
      "<re.Match object; span=(8, 9), match='1'>\n",
      "<re.Match object; span=(1, 2), match='y'>\n",
      "<re.Match object; span=(0, 1), match='I'>\n",
      "<re.Match object; span=(6, 7), match='e'>\n",
      "<re.Match object; span=(8, 11), match='a^b'>\n"
     ]
    }
   ],
   "source": [
    "# Examples for disjunction\n",
    "import re\n",
    "\n",
    "string = 'Drenched Blossoms'\n",
    "print( re.search('[A-Z]',string) )\n",
    "\n",
    "string = 'my beans were impatient'\n",
    "print( re.search('[a-z]',string) )\n",
    "\n",
    "string = 'Chapter 1: Down the Rabbit Hole'\n",
    "print( re.search('[0-9]',string) )\n",
    "\n",
    "string = 'Oyfn pripetchik'\n",
    "print( re.search('[^A-Z]',string) )\n",
    "\n",
    "string = 'I have no exquisite reason\"'\n",
    "print( re.search('[^Ss]',string) )\n",
    "\n",
    "string = 'Look here'\n",
    "print( re.search('[e^]',string) )\n",
    "\n",
    "string = 'Look up a^b now'\n",
    "print( re.search(r'a\\^b', string) )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here are more useful operators for building patterns:\n",
    "+ pipe |: means \"or\", as in 'groundhog|woodchuck', 'yours|mine', 'a|b|c'\n",
    "+ question mark ?: the previous character is optional, as in 'colou?r'\n",
    "+ asterisk *: zero or more of the previous character\n",
    "+ plus sign +: one or more of the previous character\n",
    "+ full stop .: matches any single character"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<re.Match object; span=(0, 5), match='color'>\n",
      "<re.Match object; span=(0, 6), match='colour'>\n",
      "<re.Match object; span=(0, 2), match='oh'>\n",
      "<re.Match object; span=(0, 3), match='ooh'>\n",
      "<re.Match object; span=(0, 4), match='oooh'>\n",
      "<re.Match object; span=(0, 2), match='oh'>\n",
      "<re.Match object; span=(0, 3), match='ooh'>\n",
      "<re.Match object; span=(0, 4), match='oooh'>\n",
      "<re.Match object; span=(0, 5), match='begin'>\n",
      "<re.Match object; span=(0, 5), match='began'>\n",
      "<re.Match object; span=(0, 5), match='begun'>\n",
      "<re.Match object; span=(0, 5), match='beg3n'>\n"
     ]
    }
   ],
   "source": [
    "string = 'color'\n",
    "print( re.search('colou?r',string) )\n",
    "\n",
    "string = 'colour'\n",
    "print( re.search('colou?r',string) )\n",
    "\n",
    "string = 'oh!'\n",
    "print( re.search('oo*h',string) )\n",
    "\n",
    "string = 'ooh!'\n",
    "print( re.search('oo*h',string) )\n",
    "\n",
    "string = 'oooh!'\n",
    "print( re.search('oo*h',string) )\n",
    "\n",
    "string = 'oh!'\n",
    "print( re.search('o+h',string) )\n",
    "\n",
    "string = 'ooh!'\n",
    "print( re.search('o+h',string) )\n",
    "\n",
    "string = 'oooh!'\n",
    "print( re.search('o+h',string) )\n",
    "\n",
    "string = 'begin'\n",
    "print( re.search('beg.n',string) )\n",
    "\n",
    "string = 'began'\n",
    "print( re.search('beg.n',string) )\n",
    "\n",
    "string = 'begun'\n",
    "print( re.search('beg.n',string) )\n",
    "\n",
    "string = 'beg3n'\n",
    "print( re.search('beg.n',string) )"
   ]
  },
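  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The pipe | was listed above but not demonstrated in the previous cell; a minimal sketch (the strings here are just sample inputs):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Examples for the pipe | (alternation): matches either alternative\n",
    "import re\n",
    "\n",
    "string = 'woodchuck'\n",
    "print( re.search('groundhog|woodchuck', string) )\n",
    "\n",
    "string = 'yours'\n",
    "print( re.search('yours|mine', string) )"
   ]
  },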
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Another useful pair of marks is the anchors ^ and $. Anchors are special characters that anchor REs to particular places in a string.\n",
    "\n",
    "+ ^: matches at the start of the string.\n",
    "+ $: matches at the end of the string.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<re.Match object; span=(0, 1), match='P'>\n",
      "<re.Match object; span=(0, 1), match='1'>\n",
      "<re.Match object; span=(7, 8), match='.'>\n",
      "<re.Match object; span=(7, 8), match='?'>\n",
      "<re.Match object; span=(7, 8), match='!'>\n"
     ]
    }
   ],
   "source": [
    "string = 'Palo Alto'\n",
    "print( re.search('^[A-Z]', string) )\n",
    "\n",
    "string = '1 \"Hello\"'\n",
    "print( re.search('^[^A-Za-z]', string) )\n",
    "\n",
    "string = 'The end.'\n",
    "print( re.search(r'\\.$', string) )\n",
    "\n",
    "string = 'The end?'\n",
    "print( re.search('.$', string) )\n",
    "\n",
    "string = 'The end!'\n",
    "print( re.search('.$', string) )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Finite State Automata\n",
    "## 2.1 Why FSA? \n",
    "\n",
    "A regular expression can be viewed as a textual way of specifying the structure of a finite-state automaton (FSA). Both regular expressions and FSAs can be used to describe regular languages, a particular kind of formal language.\n",
    "\n",
    "Second, FSAs and their probabilistic relatives are at the core of what we'll be doing all semester. They also conveniently correspond to exactly what linguists say we need for morphology (词语形态学) and parts of syntax (句法).\n",
    "\n",
    "The formal definition of an FSA is:\n",
    "1. A finite set of states: $Q$\n",
    "2. A finite alphabet: $\\Sigma$\n",
    "3. A start state\n",
    "4. A set of accept/final states\n",
    "5. A transition function $\\delta: Q\\times\\Sigma\\to Q$\n",
    "\n",
    "## 2.2 Non-Deterministic FSA\n",
    "**Equivalence:** Non-deterministic machines can be converted to deterministic ones with a fairly simple construction. That means they have the same power: non-deterministic machines are not more powerful than deterministic ones in terms of the languages they can accept."
   ]
  },
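  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The five components above can be sketched directly in Python. Below is a minimal deterministic FSA for the toy sheep language baa+! (all names here are illustrative, not from any library):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal deterministic FSA: the five components as plain Python data\n",
    "states = {0, 1, 2, 3, 4}                         # Q\n",
    "alphabet = {'b', 'a', '!'}                       # Sigma\n",
    "start = 0                                        # start state\n",
    "final = {4}                                      # accept/final states\n",
    "delta = {(0, 'b'): 1, (1, 'a'): 2, (2, 'a'): 3,  # transition function\n",
    "         (3, 'a'): 3, (3, '!'): 4}\n",
    "\n",
    "def accepts(s):\n",
    "    \"\"\"Run the FSA as an acceptor: is s in the language?\"\"\"\n",
    "    q = start\n",
    "    for ch in s:\n",
    "        if (q, ch) not in delta:\n",
    "            return False\n",
    "        q = delta[(q, ch)]\n",
    "    return q in final\n",
    "\n",
    "print( accepts('baa!') )    # True\n",
    "print( accepts('baaaa!') )  # True\n",
    "print( accepts('ba!') )     # False"
   ]
  },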
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.3 Formal Languages\n",
    "Formal languages (形式语言) are sets of strings composed of symbols from a finite alphabet. Finite-state automata define formal languages without having to enumerate all the strings in the language.\n",
    "\n",
    "FSAs can be viewed from two perspectives:\n",
    "1. Generators that produce all and only the strings in the language\n",
    "2. Acceptors that can tell you whether a string is in the language"
   ]
  },
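  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The generator perspective can also be sketched in code: walk a toy transition table (here, an FSA for baa+!; names are illustrative) breadth-first, yielding every accepted string up to a length bound:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The FSA as a generator: enumerate accepted strings up to max_len\n",
    "start = 0\n",
    "final = {4}\n",
    "delta = {(0, 'b'): 1, (1, 'a'): 2, (2, 'a'): 3, (3, 'a'): 3, (3, '!'): 4}\n",
    "\n",
    "def generate(max_len):\n",
    "    \"\"\"Breadth-first walk over transitions, yielding accepted strings.\"\"\"\n",
    "    frontier = [('', start)]\n",
    "    for _ in range(max_len):\n",
    "        next_frontier = []\n",
    "        for prefix, q in frontier:\n",
    "            for (state, sym), dest in delta.items():\n",
    "                if state == q:\n",
    "                    s = prefix + sym\n",
    "                    if dest in final:\n",
    "                        yield s\n",
    "                    next_frontier.append((s, dest))\n",
    "        frontier = next_frontier\n",
    "\n",
    "print( list(generate(6)) )  # ['baa!', 'baaa!', 'baaaa!']"
   ]
  },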
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Word Segmentation\n",
    "## 3.1 Text Normalization\n",
    "Every NLP task needs to do text normalization:\n",
    "1. Segmenting/tokenizing words in running text\n",
    "2. Normalizing word formats\n",
    "3. Segmenting sentences in running text\n",
    "\n",
    "Some basic concepts in NLP:\n",
    "+ Lemma (词元): same stem (词干), part of speech (词性), rough word sense (词义)\n",
    "+ Wordform (词形): the full inflected surface form\n",
    "+ Type (词型): an element of the vocabulary\n",
    "+ Token (词例): an instance of that type in running text"
   ]
  },
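  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The type/token distinction can be illustrated with a simplistic regex tokenizer (lowercasing plus re.findall; real tokenizers handle much more):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy tokenizer: lowercase, then pull out runs of letters/apostrophes.\n",
    "# Illustrates types vs. tokens; not a production tokenizer.\n",
    "import re\n",
    "\n",
    "text = 'They lay back on the San Francisco grass and looked at the stars and their'\n",
    "tokens = re.findall(r\"[a-z']+\", text.lower())\n",
    "types = set(tokens)\n",
    "\n",
    "print( tokens )\n",
    "print( 'tokens:', len(tokens) )  # 15\n",
    "print( 'types: ', len(types) )   # 13 ('the' and 'and' each occur twice)"
   ]
  },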
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "torch",
   "language": "python",
   "name": "torch"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
