{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<br>\n",
     "<center><font face=\"黑体\" size=4>Lab Manual for the Course \"Fundamentals of Machine Learning Practice\"</font></center>\n",
     "<br>\n",
     "<br>\n",
     "<center><font face=\"黑体\" size=4>Chapter 10  Probabilistic Graphical Models</font></center>\n",
     "\n",
     "$\\textbf{Objectives}$\n",
     "\n",
     "Understand the theory of probabilistic graphical models, and understand and implement the hidden Markov model and the topic model (LDA).\n",
     "\n",
     "$\\textbf{Contents}$\n",
     "\n",
     "$\\textbf{10.1 Introduction to Probabilistic Graphical Models}$\n",
     "\n",
     "A central task in machine learning is to estimate and infer unknown variables of interest (e.g., class labels) from observed evidence (e.g., training samples). Probabilistic models provide a descriptive framework that casts such learning tasks as computing the probability distributions of variables. A probabilistic graphical model represents probabilistic dependencies among variables with a graph: combining probability theory and graph theory, it uses a graph to encode the joint distribution of the variables in the model. This chapter introduces the theory and implementation of probabilistic graphical models, covering two main topics: hidden Markov models and topic models."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "$\\textbf{10.2 Hidden Markov Models}$\n",
     "\n",
     "The hidden Markov model (HMM) is the structurally simplest generative model among dynamic Bayesian networks, and a well-known directed graphical model. It is a classic statistical machine learning model for sequence labeling problems in natural language processing.\n",
     "\n",
     "An HMM is a probabilistic model of time series. It describes a process in which a hidden Markov chain randomly generates an unobservable random sequence of states, and each state then generates an observation, yielding an observable random sequence. The sequence of states generated by the hidden Markov chain is called the state sequence; the sequence of observations, one per state, is called the observation sequence. Each position in the sequence can be regarded as a time step. The figure below illustrates an HMM with a dice-rolling example. Suppose there are three dice, D6, D4, and D8, which produce the numbers 1-6, 1-4, and 1-8 respectively. Suppose a die is rolled 10 times, each time choosing one of D6, D4, and D8 at random, and the observed number sequence is {1,6,3,5,2,7,3,5,2,4}. An HMM can then be used to compute the probability that the sequence of dice chosen was {D6,D8,D8,D6,D4,D8,D6,D6,D4,D8}.\n",
     "\n",
     "<img src=picture/PG1.png>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "As the figure shows, an HMM involves two kinds of variables, state variables and observation variables. State variables: $Y=\\{y_1,y_2,\\ldots,y_n\\}$, where $y_i$ denotes the system state at time $i$; the state variables are usually assumed to be hidden and unobservable, and the system switches among $N$ states $\\{s_1,s_2,\\ldots,s_N\\}$. Observation variables: $X=\\{x_1,x_2,\\ldots,x_n\\}$, where $x_i$ denotes the observation at time $i$, taking values in $\\{o_1,o_2,\\ldots,o_M\\}$. The structure of the HMM is shown in the figure below.\n",
     "\n",
     "<img src=picture/PG2.png>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In an HMM, at any time $t$ the observation $x_t$ depends only on the state $y_t$, and the state $y_t$ at time $t$ depends only on the state $y_{t-1}$ at time $t-1$. Under these dependencies, the joint distribution of all variables is\n",
     "\n",
     "\\begin{equation}\n",
     "P(x_1,y_1,x_2,y_2,\\cdots,x_n,y_n)=P(y_1)P(x_1|y_1)\\prod_{i=2}^{n}P(y_i|y_{i-1})P(x_i|y_i)\n",
     "\\end{equation}\n"
   ]
  },
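   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "To make the factorization concrete, the joint probability can be evaluated numerically for the dice example above. A minimal sketch, where the uniform transition and emission values are assumptions for illustration and are not given in the text:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Assumed parameters for the dice example: states (D6, D4, D8),\n",
     "# each die chosen uniformly and independently of the previous one.\n",
     "pi = np.array([1/3, 1/3, 1/3])   # initial state probabilities\n",
     "A = np.full((3, 3), 1/3)         # state transition matrix\n",
     "B = np.zeros((3, 8))             # emission matrix over faces 1-8 (0-indexed)\n",
     "B[0, :6] = 1/6                   # D6 emits 1-6 uniformly\n",
     "B[1, :4] = 1/4                   # D4 emits 1-4 uniformly\n",
     "B[2, :8] = 1/8                   # D8 emits 1-8 uniformly\n",
     "\n",
     "def joint_prob(states, obs):\n",
     "    # P(y_1)P(x_1|y_1) * prod_{i>=2} P(y_i|y_{i-1})P(x_i|y_i)\n",
     "    p = pi[states[0]] * B[states[0], obs[0]]\n",
     "    for i in range(1, len(obs)):\n",
     "        p *= A[states[i - 1], states[i]] * B[states[i], obs[i]]\n",
     "    return p\n",
     "\n",
     "obs = [0, 5, 2, 4, 1, 6, 2, 4, 1, 3]     # observed faces 1,6,3,5,2,7,3,5,2,4\n",
     "states = [0, 2, 2, 0, 1, 2, 0, 0, 1, 2]  # candidate dice D6,D8,D8,D6,D4,D8,D6,D6,D4,D8\n",
     "print(joint_prob(states, obs))\n",
     "```"
    ]
   },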
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Specifying a hidden Markov model requires the following three groups of parameters:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "(1) State transition probabilities: the probabilities of the model switching among states, recorded in the matrix $\\textbf{A}=[a_{ij}]_{N\\times N}$, where\n",
    "\n",
    "\\begin{equation}\n",
    "a_{ij} = P(y_{t+1}=s_j|y_t = s_i)\n",
    "\\end{equation}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "(2) Output observation probabilities: the probabilities of each observation value given the current state, recorded in the matrix $\\textbf{B}=[b_{ij}]_{N\\times M}$, where\n",
    "\n",
    "\\begin{equation}\n",
    "b_{ij} = P(x_t=o_j|y_t=s_i)\n",
    "\\end{equation}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "(3) Initial state probabilities: the probabilities of each state at the initial time, recorded as $\\boldsymbol{\\pi}=[\\pi_1,\\pi_2,\\ldots,\\pi_N]$, where\n",
    "\n",
    "\\begin{equation}\n",
    "\\pi_i = P(y_1=s_i)\n",
    "\\end{equation}"
   ]
  },
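   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The three groups of parameters $\\boldsymbol{\\lambda}=[\\textbf{A},\\textbf{B},\\boldsymbol{\\pi}]$ fully determine the generative process. A minimal sketch that first checks that the parameters are valid distributions and then samples a state/observation sequence for the dice example; all parameter values are assumptions for illustration:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(0)\n",
     "\n",
     "# Assumed dice HMM: states (D6, D4, D8), each chosen uniformly at random.\n",
     "pi = np.array([1/3, 1/3, 1/3])\n",
     "A = np.full((3, 3), 1/3)\n",
     "B = np.zeros((3, 8))\n",
     "B[0, :6] = 1/6\n",
     "B[1, :4] = 1/4\n",
     "B[2, :8] = 1/8\n",
     "\n",
     "# pi, every row of A, and every row of B must each sum to 1.\n",
     "assert np.isclose(pi.sum(), 1.0)\n",
     "assert np.allclose(A.sum(axis=1), 1.0)\n",
     "assert np.allclose(B.sum(axis=1), 1.0)\n",
     "\n",
     "def sample(n):\n",
     "    # Draw y_1 ~ pi, then alternate x_t ~ B[y_t] and y_{t+1} ~ A[y_t].\n",
     "    states, obs = [], []\n",
     "    y = rng.choice(3, p=pi)\n",
     "    for _ in range(n):\n",
     "        states.append(int(y))\n",
     "        obs.append(int(rng.choice(8, p=B[y])))\n",
     "        y = rng.choice(3, p=A[y])\n",
     "    return states, obs\n",
     "\n",
     "states, obs = sample(10)\n",
     "print(states, obs)\n",
     "```"
    ]
   },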
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In practice, three problems about hidden Markov models are usually of interest:\n",
     "\n",
     "(1) Given a model $\\boldsymbol{\\lambda}=[\\textbf{A},\\textbf{B},\\boldsymbol{\\pi}]$, how do we compute the probability $P(x|\\boldsymbol{\\lambda})$ that it generates an observation sequence $x=[x_1,x_2,\\ldots,x_n]$? That is, how do we evaluate how well the model matches the observation sequence?\n",
     "\n",
     "(2) Given a model $\\boldsymbol{\\lambda}=[\\textbf{A},\\textbf{B},\\boldsymbol{\\pi}]$ and an observation sequence $x=[x_1,x_2,\\ldots,x_n]$, how do we find the state sequence $y=[y_1,y_2,\\ldots,y_n]$ that best matches the observations? That is, how do we infer the hidden model states from the observation sequence?\n",
     "\n",
     "(3) Given an observation sequence $x=[x_1,x_2,\\ldots,x_n]$, how do we adjust the model parameters $\\boldsymbol{\\lambda}=[\\textbf{A},\\textbf{B},\\boldsymbol{\\pi}]$ so that the probability $P(x|\\boldsymbol{\\lambda})$ of the sequence is maximized? That is, how do we train the model to better describe the observed data?"
   ]
  },
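   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Problem (1) can be answered with the forward algorithm, which computes $P(x|\\boldsymbol{\\lambda})$ in $O(nN^2)$ time rather than summing over all $N^n$ state sequences. A minimal sketch for the dice example, with parameter values assumed for illustration:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Assumed dice HMM parameters, as in the example above.\n",
     "pi = np.array([1/3, 1/3, 1/3])\n",
     "A = np.full((3, 3), 1/3)\n",
     "B = np.zeros((3, 8))\n",
     "B[0, :6] = 1/6\n",
     "B[1, :4] = 1/4\n",
     "B[2, :8] = 1/8\n",
     "\n",
     "def forward(obs):\n",
     "    # alpha_t(i) = P(x_1..x_t, y_t = s_i); P(x|lambda) = sum_i alpha_n(i)\n",
     "    alpha = pi * B[:, obs[0]]\n",
     "    for x in obs[1:]:\n",
     "        alpha = (alpha @ A) * B[:, x]\n",
     "    return alpha.sum()\n",
     "\n",
     "obs = [0, 5, 2, 4, 1, 6, 2, 4, 1, 3]   # observed faces 1,6,3,5,2,7,3,5,2,4\n",
     "print(forward(obs))\n",
     "```"
    ]
   },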
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "$\\textbf{10.3 Topic Models}$\n",
     "\n",
     "Topic models are directed probabilistic graphical models widely used in information retrieval, natural language processing, and related fields. Latent Dirichlet Allocation (LDA) is a generative model of documents. It assumes that a document covers several topics, and that each topic is associated with its own set of words. To construct a document, a topic is first chosen with some probability, and then a word is chosen with some probability under that topic; this generates the first word of the document. Repeating this process generates the whole document. The process by which LDA generates a document is illustrated in the figure below.\n",
     "\n",
     "<img src=picture/PG3.png>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Topic models use the following terminology:\n",
     "\n",
     "(1) Word: the basic discrete unit of the data to be processed, e.g., an English word or a Chinese word with independent meaning.\n",
     "\n",
     "(2) Document: the data object to be processed, consisting of a group of words whose order within the document is ignored, e.g., a paper.\n",
     "\n",
     "(3) Bag of words: a representation of a document as a collection of words without order.\n",
     "\n",
     "(4) Topic: a concept, represented concretely as a set of related words together with the probabilities with which they occur under that concept."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Suppose the dataset contains $K$ topics and $T$ documents, and the words in the documents come from a vocabulary of $N$ words. The document collection can be represented by $T$ $N$-dimensional vectors $\\textbf{W}=[\\textbf{w}_1,\\textbf{w}_2,\\ldots,\\textbf{w}_{T}]$, and the topics by $K$ $N$-dimensional vectors $\\boldsymbol{\\beta}=[\\boldsymbol{\\beta}_1,\\boldsymbol{\\beta}_2,\\ldots,\\boldsymbol{\\beta}_K]$, where the $n$-th component $w_{tn}$ of $\\textbf{w}_t$ is the frequency of word $n$ in document $t$, and the $n$-th component $\\beta_{kn}$ of $\\boldsymbol{\\beta}_k$ is the frequency of word $n$ in topic $k$. LDA assumes each document contains multiple topics: $\\boldsymbol{\\Theta}_t$ denotes the topic proportions of document $t$, and $\\Theta_{tk}$ is the proportion of topic $k$ in document $t$.\n",
     "\n",
     "Document $t$ is generated as follows:\n",
     "\n",
     "(1) draw a topic distribution $\\boldsymbol{\\Theta}_t$ from a Dirichlet distribution with parameter $\\boldsymbol{\\alpha}$;\n",
     "\n",
     "(2) generate the $N$ words of the document as follows:\n",
     "\n",
     "(a) draw a topic assignment from the topic distribution $\\boldsymbol{\\Theta}_t$, obtaining the topic $z_{tn}$ of word $n$ in document $t$;\n",
     "\n",
     "(b) draw the word from the word-frequency distribution $\\boldsymbol{\\beta}_{z_{tn}}$ of the assigned topic."
   ]
  },
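   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The generative steps above can be simulated directly. A toy sketch with assumed sizes (3 topics, a 10-word vocabulary, a 50-word document) and symmetric Dirichlet priors:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(0)\n",
     "\n",
     "K, N, n_words = 3, 10, 50                 # topics, vocabulary size, document length (assumed)\n",
     "beta = rng.dirichlet(np.ones(N), size=K)  # beta[k]: word distribution of topic k\n",
     "alpha = np.ones(K)                        # Dirichlet hyperparameter\n",
     "\n",
     "# (1) draw the document's topic proportions Theta_t ~ Dirichlet(alpha)\n",
     "theta = rng.dirichlet(alpha)\n",
     "# (2a) assign a topic z_tn ~ Theta_t to each word position\n",
     "z = rng.choice(K, size=n_words, p=theta)\n",
     "# (2b) draw each word from the word distribution of its assigned topic\n",
     "words = np.array([rng.choice(N, p=beta[k]) for k in z])\n",
     "print(theta, words[:10])\n",
     "```"
    ]
   },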
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "$\\textbf{10.4 Hands-on Task: HMM-based Part-of-Speech Tagging}$\n",
     "\n",
     "Part-of-speech tagging (POS tagging) assigns an appropriate part of speech to each word in a sentence, i.e., it determines whether each word is a noun, verb, adjective, or some other word class; it is also called word-class tagging or simply tagging. POS tagging is a fundamental task in natural language processing and plays an important role in speech recognition, information retrieval, and many other areas. Using the People's Daily annotated corpus as raw material, we combine Chinese word segmentation, the Viterbi algorithm, and a hidden Markov model to build a Chinese POS tagging system. The datasets for this task are in the datasets folder.\n",
     "\n",
     "POS tagging with a hidden Markov model requires solving two main problems:\n",
     "\n",
     "(1) Estimate the HMM parameters $\\boldsymbol{\\lambda}=[\\textbf{A},\\textbf{B},\\boldsymbol{\\pi}]$ from corpus statistics, where $\\textbf{A}$ is the transition matrix between parts of speech, $\\textbf{B}$ is the emission matrix from parts of speech to words, and $\\boldsymbol{\\pi}$ is the prior probability of each part of speech.\n",
     "\n",
     "(2) With the HMM parameters in hand, given an observation sequence, i.e., a sentence to be tagged, find the part of speech of each word, i.e., the most likely hidden states. This problem can be solved with the Viterbi algorithm.\n",
     "\n",
     "$\\textbf{The implementation is as follows:}$"
   ]
  },
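   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The code below imports helper modules `Viterbi` and `word_segment` that ship with the lab materials and are not listed here. For reference, a minimal log-space implementation compatible with the call `Viterbi.viterbi(len(obs), len(hid), init_p, trans_p, emit_p)` might look as follows; the interface is inferred from the call site, so treat this as a sketch rather than the module's actual code:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def viterbi(obs_len, hid_len, init_p, trans_p, emit_p):\n",
     "    # emit_p[i][t]: probability that hidden state i emits the t-th observation,\n",
     "    # matching the emission matrix built by cal_hmm_matrix (rows: tags, columns: words).\n",
     "    init_p = np.asarray(init_p, dtype=float)\n",
     "    trans_p = np.asarray(trans_p, dtype=float)\n",
     "    emit_p = np.asarray(emit_p, dtype=float)\n",
     "    eps = 1e-300                                   # guard against log(0)\n",
     "    delta = np.log(init_p + eps) + np.log(emit_p[:, 0] + eps)\n",
     "    psi = np.zeros((obs_len, hid_len), dtype=int)  # back-pointers\n",
     "    for t in range(1, obs_len):\n",
     "        cand = delta[:, None] + np.log(trans_p + eps)  # cand[i, j]: best score via state i to j\n",
     "        psi[t] = cand.argmax(axis=0)\n",
     "        delta = cand.max(axis=0) + np.log(emit_p[:, t] + eps)\n",
     "    path = [int(delta.argmax())]\n",
     "    for t in range(obs_len - 1, 0, -1):\n",
     "        path.append(int(psi[t][path[-1]]))\n",
     "    return path[::-1]\n",
     "```"
    ]
   },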
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "今天/t 和/p 明天/t 我/r 弹/v 琴/n 。/w \n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import Viterbi\n",
    "import word_segment\n",
    "\n",
    "\n",
    "def cal_hmm_matrix(observation):\n",
     "    # Collect all POS tags from the dictionary\n",
    "    word_pos_file = open('datasets/ChineseDic.txt',encoding=\"utf-8\").readlines()\n",
    "    tags_num = {}\n",
    "    for line in word_pos_file:\n",
    "        word_tags = line.strip().split(',')[1:]\n",
    "        for tag in word_tags:\n",
    "            if tag not in tags_num.keys():\n",
    "                tags_num[tag] = 0\n",
    "    tags_list = list(tags_num.keys())\n",
    "\n",
     "    # Transition matrix and emission matrix\n",
    "    transaction_matrix = np.zeros((len(tags_list), len(tags_list)), dtype=float)\n",
    "    emission_matrix = np.zeros((len(tags_list), len(observation)), dtype=float)\n",
    "\n",
     "    # Count transitions and emissions from the corpus\n",
    "    word_file = open('datasets/199801.txt',encoding=\"utf-8\").readlines()\n",
    "    for line in word_file:\n",
    "        if line.strip() != '':\n",
    "            word_pos_list = line.strip().split('  ')\n",
    "            for i in range(1, len(word_pos_list)):\n",
    "                tag = word_pos_list[i].split('/')[1]\n",
    "                pre_tag = word_pos_list[i - 1].split('/')[1]\n",
    "                try:\n",
    "                    transaction_matrix[tags_list.index(pre_tag)][tags_list.index(tag)] += 1\n",
    "                    tags_num[tag] += 1\n",
    "                except ValueError:\n",
    "                    if ']' in tag:\n",
    "                        tag = tag.split(']')[0]\n",
    "                    else:\n",
     "                        pre_tag = pre_tag.split(']')[0]\n",
    "                    transaction_matrix[tags_list.index(pre_tag)][tags_list.index(tag)] += 1\n",
    "                    tags_num[tag] += 1\n",
    "\n",
    "            for o in observation:\n",
     "                # Search with 'in' and a leading space (' 我/' vs ' **我/'); the space and '/' isolate the word itself\n",
    "                if ' ' + o in line:\n",
    "                    pos_tag = line.strip().split(o)[1].split('  ')[0].strip('/')\n",
    "                    if ']' in pos_tag:\n",
    "                        pos_tag = pos_tag.split(']')[0]\n",
    "                    emission_matrix[tags_list.index(pos_tag)][observation.index(o)] += 1\n",
    "\n",
    "    for row in range(transaction_matrix.shape[0]):\n",
    "        n = np.sum(transaction_matrix[row])\n",
    "        transaction_matrix[row] += 1e-16\n",
    "        transaction_matrix[row] /= n + 1\n",
    "\n",
    "    for row in range(emission_matrix.shape[0]):\n",
    "        emission_matrix[row] += 1e-16\n",
    "        emission_matrix[row] /= tags_num[tags_list[row]] + 1\n",
    "\n",
    "    times_sum = sum(tags_num.values())\n",
    "    for item in tags_num.keys():\n",
    "        tags_num[item] = tags_num[item] / times_sum\n",
    "\n",
     "    # Return hidden states, initial probabilities, transition probabilities, emission probabilities\n",
    "    return tags_list, list(tags_num.values()), transaction_matrix, emission_matrix\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "\n",
    "    input_str = \"今天和明天我弹琴。\"\n",
    "    obs = word_segment.seg(input_str).strip().split(' ')\n",
    "    hid, init_p, trans_p, emit_p = cal_hmm_matrix(obs)\n",
    "\n",
    "    result = Viterbi.viterbi(len(obs), len(hid), init_p, trans_p, emit_p)\n",
    "    \n",
    "    tag_line = ''\n",
    "    for k in range(len(result)):\n",
    "        tag_line += obs[k] + hid[int(result[k])] + ' '\n",
    "    print(tag_line)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "2. Using the LDA algorithm, take the Hillary Clinton email data as the corpus, extract the topics of the Hillary email controversy, and examine how an arbitrary email is distributed over these topics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
    "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import re\n",
    "from gensim.models import doc2vec, ldamodel\n",
    "from gensim import corpora\n",
    " \n",
    " \n",
    " \n",
     "def clean_email_text(text):\n",
     "    # Clean the raw text\n",
     "    text = text.replace('\\n', \" \")  # newlines are not needed\n",
     "    text = re.sub(r\"-\", \" \", text)  # split hyphenated words (e.g. july-edu ==> july edu)\n",
     "    text = re.sub(r\"\\d+/\\d+/\\d+\", \"\", text)  # dates carry no topical signal\n",
     "    text = re.sub(r\"[0-2]?[0-9]:[0-6][0-9]\", \"\", text)  # times carry no topical signal\n",
     "    text = re.sub(r\"[\\w]+@[\\.\\w]+\", \"\", text)  # e-mail addresses carry no topical signal\n",
     "    text = re.sub(r\"http[s]?://\\S+\", \"\", text)  # URLs carry no topical signal\n",
     "    pure_text = ''\n",
     "    # Loop over the characters to filter out any remaining special characters (digits, etc.)\n",
     "    for letter in text:\n",
     "        # keep only letters and spaces\n",
     "        if letter.isalpha() or letter == ' ':\n",
     "            pure_text += letter\n",
     "    # Finally drop the single characters left over after cleaning,\n",
     "    # keeping only meaningful words.\n",
     "    text = ' '.join(word for word in pure_text.split() if len(word) > 1)  # words must be at least 2 characters long\n",
     "    return text\n",
    " \n",
    "def remove_stopword():\n",
    "    stopword = []\n",
    "    with open('pic/english_stop_words.txt', 'r', encoding='utf8') as f:\n",
    "        lines = f.readlines()\n",
    "        for line in lines:\n",
    "            line = line.replace('\\n', '')\n",
    "            stopword.append(line)\n",
    "    return stopword\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
     "    # Load the data\n",
     "    df = pd.read_csv('datasets/HillaryEmails.csv')\n",
     "    df = df[['Id', 'ExtractedBodyText']].dropna()  # these two columns contain missing values; drop such rows\n",
     "    #print(df.head())\n",
     "    #print(df.shape)   # (6742, 2)\n",
     "\n",
     "    docs = df['ExtractedBodyText']   # get the e-mail bodies\n",
     "    docs = docs.apply(lambda s: clean_email_text(s))   # clean each e-mail\n",
     "\n",
     "    #print(docs.head(1).values)\n",
     "    doclist = docs.values   # extract the raw text values\n",
    "    #print(docs)\n",
    "\n",
    "    stop_word = remove_stopword()\n",
    "\n",
    "    texts = [[word for word in doc.lower().split() if word not in stop_word] for doc in doclist]\n",
     "    #print(texts[0])  # what the first text looks like now\n",
    "\n",
    "    dictionary = corpora.Dictionary(texts)\n",
    "    corpus = [dictionary.doc2bow(text) for text in texts]\n",
    "    #print(corpus[0])  # [(36, 1), (505, 1), (506, 1), (507, 1), (508, 1)]\n",
    "\n",
    "    lda = ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=20)\n",
     "    #print(lda.print_topic(10, topn=5))  # the five key words of topic 10\n",
     "    print(\"Topics and their key words\")\n",
    "    topics = lda.print_topics(num_topics=20, num_words=5)\n",
    "    for i in range(len(topics)):\n",
    "        print(topics[i])\n",
    "\n",
     "    # Save the model\n",
     "    lda.save('zhutimoxing.model')\n",
     "\n",
     "    # Load the model\n",
     "    lda = ldamodel.LdaModel.load('zhutimoxing.model')\n",
     "\n",
     "    # Infer the topic distribution of a new document\n",
     "    text = 'Remarks of the Spokesman of the Islamic Emirate of Afghanistan about the Sudden Death of Holbrooke. According to credible news agencies of the world, the American president Special Envoy for Afghanistan and Pakistan Richard Holbrooke died in George Washington University hospital at the age of 69. '\n",
     "    print(\"Email content:\", text)\n",
    "    text = clean_email_text(text)\n",
    "    texts = [word for word in text.lower().split() if word not in stop_word]\n",
    "    bow = dictionary.doc2bow(texts)    \n",
     "    print(\"Email topics:\")\n",
    "    print(lda.get_document_topics(bow))  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
