{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Mind-Reading Machines: Text Mining and NLP, Lesson 12 Written Assignment\n",
    "Student ID: 207402  \n",
    "\n",
    "**Assignment:**  \n",
    "Write a program that extracts a summary from each of the three articles in the \"assignment materials\" folder of the course resources. Methods covered in class may be used for reference, and any programming language is acceptable."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Approach:**  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* There is no large corpus for training, only the three target articles, so a statistical model is the more practical choice. My idea is to use an **LDA** model, treating each article as its own training corpus.\n",
    "* After training LDA on an article (corpus), we obtain its topic information. Each sentence is then vectorized and fed into the LDA model, which returns the sentence's weight (or probability) on each latent topic. This assignment uses the gensim library; feeding a sentence vector into a trained gensim **LDA** model returns information like:  \n",
    "  [(0, 0.8712),  \n",
    "   (1, 0.0101),  \n",
    "   (2, 0.0091),  \n",
    "   ...  \n",
    "   ]  \n",
    "   There are up to n entries, where n is the specified number of topics (a model hyperparameter). In each tuple, the first element is the topic ID and the second is the sentence's weight (probability) for that topic.  \n",
    "* I define a sentence-weight formula:  \n",
    "  Given the sentence set $S = \\{s_i|1\\le i \\le n\\}$,  \n",
    "  $$W_i = \\frac{1}{T}\\sum_{j=1}^{T}P_{ij}$$  \n",
    "  where $W_i$ is the weight of sentence $s_i$, $T$ is the number of topics, and $P_{ij}$ is the probability of sentence $s_i$ on topic $j$.  \n",
    "  The intent of this formula is to take every topic into account: ideally, the selected sentences cover all topics with the highest overall probability.\n",
    "* In the concrete implementation, sentences that do not end with terminal punctuation are dropped before ranking. Inspection shows these are usually titles or subheadings, which are unsuitable for a summary.  \n",
    "* After ranking, the top m sentences are taken as the summary, where m is determined by an extraction ratio (ratio).\n",
    "* When outputting the summary, the sentences must keep their original document order; otherwise the summary can read out of sequence.\n",
    "* Chinese word segmentation is needed during preprocessing; this assignment uses the jieba segmentation library."
   ]
  },
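  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The weight formula above can be sketched on a small made-up example (the topic count and probabilities below are illustrative, not taken from the actual articles):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of W_i = (1/T) * sum_j P_ij on a fabricated\n",
    "# gensim-style output: a list of (topic_id, probability) tuples.\n",
    "T = 4  # hypothetical topic count\n",
    "lda_output = [(0, 0.8712), (1, 0.0101), (2, 0.0091), (3, 0.1096)]\n",
    "weight = sum(p for _, p in lda_output) / T\n",
    "print(round(weight, 4))  # 0.25: the probabilities sum to 1 and T = 4\n"
   ]
  },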
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import jieba,os,re\n",
    "from gensim import corpora, models, similarities\n",
    "\n",
    "def stopwordslist():\n",
    "    '''\n",
    "    Build the stop-word list.\n",
    "    Returns: list of stop words\n",
    "    '''\n",
    "    with open('./stopwords.txt', encoding='UTF-8') as f:\n",
    "        stopwords = [line.strip() for line in f]\n",
    "    return stopwords\n",
    "\n",
    "\n",
    "def seg_depart(sentence):\n",
    "    '''\n",
    "    Segment a Chinese sentence into words.\n",
    "    sentence: the string to segment\n",
    "    Returns:\n",
    "    the segmented string, e.g. '黄蜂 湖人 首发 科比 带伤 战 保罗 加索尔 ...'\n",
    "    '''\n",
    "    sentence_depart = jieba.cut(sentence.strip())\n",
    "    stopwords = stopwordslist()  # note: reloads the stop-word list on every call; acceptable for three short articles\n",
    "    outstr = ''\n",
    "    for word in sentence_depart:\n",
    "        if word not in stopwords:\n",
    "            outstr += word\n",
    "            outstr += \" \"\n",
    "    return outstr\n",
    "\n",
    "def cut_sent(para):\n",
    "    '''\n",
    "    Split a Chinese paragraph into sentences.\n",
    "    para: the string to split\n",
    "    Returns:\n",
    "    [\"sentence1\",\"sentence2\",...]\n",
    "    '''\n",
    "    para = re.sub('([。！？\\?])([^”’])', r\"\\1\\n\\2\", para)  # single-character sentence terminators\n",
    "    para = re.sub('(\\.{6})([^”’])', r\"\\1\\n\\2\", para)  # English ellipsis\n",
    "    para = re.sub('(\\…{2})([^”’])', r\"\\1\\n\\2\", para)  # Chinese ellipsis\n",
    "    para = re.sub('([。！？\\?][”’])([^，。！？\\?])', r'\\1\\n\\2', para)\n",
    "    # If a closing quote is preceded by a terminator, the quote ends the sentence, so the break \\n\n",
    "    # goes after the quote; note that the rules above carefully preserve the closing quotes.\n",
    "    para = para.rstrip()  # drop any trailing \\n at the end of the paragraph\n",
    "    # Many rule sets also treat the semicolon ; as a terminator; it is ignored here, as are\n",
    "    # dashes and English double quotes. Adjust the rules if needed.\n",
    "    return para.split(\"\\n\")"
   ]
  },
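  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of `cut_sent` on a short sample paragraph (the sample text is made up for illustration): each terminal punctuation mark should start a new sentence, and a closing quote after a terminator should stay with its sentence."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Expected behavior: four sentences, with 。” kept together at the\n",
    "# end of the quoted sentence.\n",
    "sample = '今天天气很好。我们去公园吧！他说：“好的。”然后就出发了。'\n",
    "for s in cut_sent(sample):\n",
    "    print(s)\n"
   ]
  },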
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_summary(srctxtfile,ratio=0.2, topic_num=10):\n",
    "    '''\n",
    "    Automatic document summarization.\n",
    "    srctxtfile: name of the Chinese text file to process\n",
    "    ratio: extraction ratio, in sentences; e.g. if the source has 100 sentences and ratio=0.2, the summary contains 20 sentences.\n",
    "    topic_num: number of latent topics (model hyperparameter)\n",
    "    Returns:\n",
    "    the summary string.\n",
    "    '''\n",
    "    # Word segmentation; results are written to split_[srctxtfile]\n",
    "    filename = srctxtfile\n",
    "    outfilename = 'split_'+srctxtfile\n",
    "    inputs = open(filename, 'r', encoding='UTF-8')\n",
    "    outputs = open(outfilename, 'w', encoding='UTF-8')\n",
    "    for line in inputs:\n",
    "        if len(line.strip()) == 0: # skip empty lines\n",
    "            continue\n",
    "        line = re.sub(r'[^\\u4e00-\\u9fa5]+','',line)\n",
    "        line_seg = seg_depart(line.strip())\n",
    "        outputs.write(line_seg.strip() + '\\n')\n",
    "    outputs.close()\n",
    "    inputs.close()\n",
    "    \n",
    "    # Prepare the training corpus in the input format gensim expects\n",
    "    fr = open(outfilename, 'r',encoding='utf-8')\n",
    "    train = []\n",
    "    for line in fr.readlines():\n",
    "        line = [word.strip() for word in line.split(' ')]\n",
    "        train.append(line)\n",
    "    fr.close()\n",
    "\n",
    "    # Build the term-frequency matrix, i.e. vectorize the corpus\n",
    "    dictionary = corpora.Dictionary(train)\n",
    "    corpus = [dictionary.doc2bow(text) for text in train]\n",
    "    # Train the LDA model\n",
    "    lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=topic_num)\n",
    "    \n",
    "    # Re-read the source file, this time to collect individual sentences and get each one's latent topics from the LDA model\n",
    "    inputs = open(filename, 'r', encoding='UTF-8')\n",
    "    sentences = [] # holds the sentences\n",
    "    for line in inputs:\n",
    "        line = line.strip()\n",
    "        if len(line) == 0: # skip empty lines\n",
    "            continue\n",
    "        sentences+=cut_sent(line) # sentence splitting\n",
    "    inputs.close()\n",
    "    # Drop sentences that do not end with terminal punctuation; these are most likely titles or subheadings, unsuitable for a summary\n",
    "    cnt_sent = [] # holds the candidate summary sentences\n",
    "    sentMap={}\n",
    "    index = 0\n",
    "    cutLineFlag = ['？','。','！','”']\n",
    "    for i,s in enumerate(sentences):\n",
    "        if len(s) == 0 or s[-1] not in cutLineFlag: # guard against empty strings\n",
    "            continue\n",
    "        cnt_sent.append(s)\n",
    "        sentMap[index] = i # map candidate index -> original sentence index\n",
    "        index += 1\n",
    "    \n",
    "    # Vectorize each sentence for the LDA model\n",
    "    test = []\n",
    "    for line in cnt_sent:\n",
    "        line = re.sub(r'[^\\u4e00-\\u9fa5]+','',line)\n",
    "        line = seg_depart(line.strip())\n",
    "        line = [word.strip() for word in line.split(' ')]\n",
    "        test.append(line)\n",
    "        \n",
    "    test_corpus = [dictionary.doc2bow(text) for text in test]\n",
    "    sent_len = len(test_corpus)\n",
    "    selected = int(ratio*sent_len) # number of sentences to select for the summary\n",
    "    # Compute each sentence's latent topic distribution\n",
    "    features = []\n",
    "    for i in range(sent_len):\n",
    "        features.append(lda[test_corpus[i]])\n",
    "    # Compute each sentence's weight: W_i = (1/T) * sum_j P_ij\n",
    "    weights = []\n",
    "    for fea in features:\n",
    "        t = 0\n",
    "        for f in fea:\n",
    "            t+=(1.0/topic_num)*f[1]\n",
    "        weights.append(t)\n",
    "    # Sort by weight and take the `selected` highest-weighted sentences\n",
    "    weights_np = np.zeros((sent_len,2))\n",
    "    for i,v in enumerate(weights):\n",
    "        weights_np[i][0] = v\n",
    "        weights_np[i][1] = i\n",
    "    weights_np = weights_np[weights_np[:,0].argsort()]\n",
    "    summary = []\n",
    "    for i in range(1,selected+1):\n",
    "        j = -1 * i\n",
    "        index = int(weights_np[j][1])\n",
    "        index = sentMap[index]\n",
    "        summary.append(index)\n",
    "    # The final summary must keep the sentences in their original document order\n",
    "    summary.sort()\n",
    "    summary = [sentences[i] for i in summary]\n",
    "    return ''.join(summary)"
   ]
  },
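  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ranking step inside `get_summary` can be sketched in isolation with toy weights: sort by weight, keep the top `selected` candidates, then restore their original document order (the weights below are fabricated for illustration)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy weights for 4 candidate sentences.\n",
    "toy_weights = [0.30, 0.80, 0.10, 0.55]\n",
    "toy_selected = 2\n",
    "order = np.argsort(toy_weights)  # indices sorted by ascending weight: [2, 0, 3, 1]\n",
    "# Take the last `toy_selected` indices (highest weights), then sort them\n",
    "# so the chosen sentences come out in document order.\n",
    "top = sorted(int(i) for i in order[-toy_selected:])\n",
    "print(top)  # [1, 3]\n"
   ]
  },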
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Summary of 01.txt:\n",
      "历时四年多的建设，一条22.9公里长的巨龙已经绵延驰骋在仃伶洋海面上，世界最大的跨海大桥港珠澳大桥桥梁主体工程今天将全面贯通，“中国结”、“海豚”、“风帆”三个巨型景观在伶何洋面上熠熠生辉，已成为港珠澳大桥以及伶仃洋面上的标志性景观。港珠澳大桥管理局行政总监韦东庆介绍，港珠澳大桥三个通航桥各具特色，其中青州航道桥设计是港珠澳大桥最具特色的部分，为双塔空间双索面钢箱梁斜拉桥，主梁采用扁平流线型整体式钢箱梁，索塔采用横向H形框架结构，163米的塔上端采用象征港珠澳三地紧密相连的“中国结”造型钢结构结形撑。建设以来，更有60多个国家的专家和团队前来参观。今年9月25日，谭国顺在港珠澳大桥施工现场度过了64岁生日。中国著名桥梁专家、中铁大桥局原总经理谭国顺赶在退休之前参加了这个世纪工程的建设，他告诉记者，为了保证使用寿命120年，港珠澳大桥建设几乎用了世界最苛刻的标准，比方说平均长度130余米、直径2.5米的深海桩基必须保证10cm以内的平面偏差和1/250以内的倾斜度，但凡对桥梁工程技术略有研究的人，都会为以上几个数字而惊叹：技术和质量要求太高啦！全新的自动化生产线，智能化的板单元组装和焊接机器人系统，先进的超声波相控阵检测设备，工厂化的“长线”法拼装，代替了过去以手工操作为主的生产模式，大大提高了成品的质量和稳定性，使港珠澳大桥钢结构制造技术总体达到世界先进水平，进而推动了整个行业的技术进步。\n"
     ]
    }
   ],
   "source": [
    "print('Summary of 01.txt:')\n",
    "print(get_summary('01.txt'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Summary of 02.txt:\n",
      "新浪科技讯 北京时间9月27日凌晨消息，美国东部时间26日14：00（北京时间27日凌晨2点）美国宇航局刚刚召开新闻发布会，对外界发布有关木卫二“欧罗巴”研究的重要进展，科学家们宣布他们利用哈勃空间望远镜在木星的一颗卫星表面发现了疑似水汽喷流现象，这项发现或许将为未来飞往这颗冰卫星，并探查其冰层下海洋的探测项目提供全新的选择。他们最初的观测目的是想确认这颗卫星是否存在一层稀薄的大气层。因此，当木卫二通过木星前方时，我们重点对木卫二的边缘吸收特征进行了观测。”这项工作为此前认为木卫二上可能存在水汽喷流的理论提供了支持。美国宇航局华盛顿总部天体物理学分部主管保罗·赫兹（Paul Hertz）表示：“哈勃空间望远镜的独特能力使其能够捕捉到这些喷流的信号，这再一次展示了哈勃望远镜在其设计目标之外作出重要发现的能力。”美国宇航局戈达德空间飞行中心负责望远镜的管理工作，位于巴尔的摩的空间望远镜研究所（STScI）为美国大学天文研究协会（AURA）负责哈勃空间望远镜的日常科学运行工作。\n"
     ]
    }
   ],
   "source": [
    "print('Summary of 02.txt:')\n",
    "print(get_summary('02.txt'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Summary of 03.txt:\n",
      "梅在公投后出任英国新首相并承诺执行人民意志。但是目前英国和欧盟仍未就“脱欧”条件启动正式谈判。\n"
     ]
    }
   ],
   "source": [
    "print('Summary of 03.txt:')\n",
    "print(get_summary('03.txt'))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
