{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Two Paper Abstracts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "docs = ['''Beyond 5G (B5G) in mobile network technologies is the latest communication technology \n",
    "currently under development. B5G is expected to achieve superior capabilities in ultra-high network \n",
    "transmission speed, low latency, low energy consumption, and high coverage, comparing to \n",
    "current 5G network performance. Although B5G is still in the development and implementation \n",
    "stage, there are many patents and non-patent literature depicting B5G innovative technologies \n",
    "and applications. The landscapes of B5G technologies are great references for governments and \n",
    "industries to understand the advances in mobile communication for R&D strategies. Thus, this \n",
    "research focuses on developing a formal tech-mining workflow integrating semantic-based patent \n",
    "and non-patent literature analysis for ontology building, patent technological topic clustering, and \n",
    "graph convolutional network (GCN) modeling for depicting key technology interactions among \n",
    "clusters of sub-domain topics. This research emphasizes the study of B5G patent landscape and key \n",
    "technology interaction roadmap in comprehensive steps as a valuable reference for B5G mobile \n",
    "network R&D, as well as for conducting tech-mining of other technology domains of interests. ''',\n",
    "'''Future work sentences (FWS) are the particular sentences in academic papers that contain the author’s description of their proposed follow-up \n",
    "research direction. This paper presents methods to automatically extract FWS from academic papers and classify them according to the different \n",
    "future directions embodied in the paper’s content. FWS recognition methods will enable subsequent researchers to locate future work sentences \n",
    "more accurately and quickly and reduce the time and cost of acquiring the corpus. At the same time, changes in the content of future work will \n",
    "be illuminated, and a foundation will be laid for a more in-depth semantic analysis of future work sentences. The current work on automatic \n",
    "identification of future work sentences is relatively small, and the existing research cannot accurately identify FWS from academic papers, and thus \n",
    "cannot conduct data mining on a large scale. Furthermore, there are many aspects to the content of future work, and the subdivision of the content \n",
    "is conducive to the analysis of specific development directions. In this paper, Nature Language Processing (NLP) is used as a case study, and FWS are \n",
    "extracted from academic papers and classified into different types. We manually build an annotated corpus with six different types of FWS. Then, \n",
    "automatic recognition and classification of FWS are implemented using machine learning models, and the performance of these models is compared \n",
    "based on the evaluation metrics. The results show that the Bernoulli Bayesian model has the best performance in the automatic recognition task, \n",
    "with the Macro F1 reaching 90.73%, and the SCIBERT model has the best performance in the automatic classification task, with the weighted \n",
    "average F1 reaching 72.63%. Finally, we extract keywords from FWS and gain a deep understanding of the key content described in FWS, and we \n",
    "also demonstrate that content determination in FWS will be reflected in the subsequent research work by measuring the similarity between future \n",
    "work sentences and the abstracts.''']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Extracting Keywords Directly"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Keywords for passage 1: [('technologies', 0.3665), ('technological', 0.3594), ('patents', 0.3369), ('technology', 0.3317), ('network', 0.3291)]\n",
       "Keywords for passage 2: [('corpus', 0.491), ('nlp', 0.441), ('annotated', 0.4301), ('semantic', 0.418), ('sentences', 0.3916)]\n"
     ]
    }
   ],
   "source": [
    "from keybert import KeyBERT\n",
    "\n",
    "kw_model = KeyBERT()\n",
    "\n",
    "for idx, doc in enumerate(docs):\n",
    "    keywords = kw_model.extract_keywords(doc)\n",
    "    print('Keywords for passage ' + str(1+idx) + ':', keywords)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Setting the Length of Extracted Keywords and Keyphrases"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Keywords for passage 1: [('b5g patent landscape', 0.5509), ('technological topic clustering', 0.5294), ('tech mining', 0.5052), ('b5g technologies are', 0.4963), ('formal tech mining', 0.491)]\n",
       "Keywords for passage 2: [('an annotated corpus', 0.5971), ('annotated corpus', 0.5864), ('processing nlp', 0.5816), ('annotated corpus with', 0.5781), ('nature language processing', 0.563)]\n"
     ]
    }
   ],
   "source": [
    "for idx, doc in enumerate(docs):\n",
    "    keywords = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 3), stop_words=None)  # extract keyphrases of 1 to 3 words\n",
    "    print('Keywords for passage ' + str(1+idx) + ':', keywords)"
   ]
  },
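  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By default `extract_keywords` returns the five highest-scoring candidates. The `top_n` parameter controls how many are returned; a minimal sketch (scores will depend on the embedding model):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# top_n controls how many keyphrases extract_keywords returns (default: 5)\n",
    "for idx, doc in enumerate(docs):\n",
    "    keywords = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 3), stop_words=None, top_n=10)\n",
    "    print('Keywords for passage ' + str(1+idx) + ':', keywords)"
   ]
  },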
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Highlighting Keywords in the Document"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Passage 1:\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">Beyond 5G B5G in mobile <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">network</span> <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">technologies</span> is the latest communication <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">technology</span> cur rently under development \n",
       "B5G is expected to achieve superior capabilities in ultra high <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">network</span> transmission speed low latency low energy \n",
       "consumption and high coverage comparing to cur rent 5G <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">network</span> performance Although B5G is still in the development\n",
       "and implementation stage there are many <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">patents</span> and non patent literature depicting B5G innovative <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">technologies</span> and\n",
       "applications The landscapes of B5G <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">technologies</span> are great references for governments and industries to understand \n",
       "the advances in mobile communication for strategies Thus this research focuses on developing formal tech mining \n",
       "workflow integrating semantic based patent and non patent literature analysis for ontology building patent \n",
       "<span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">technological</span> topic clustering and graph convolutional <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">network</span> GCN modeling for depicting key <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">technology</span> \n",
       "interactions among clusters of sub domain topics This research emphasizes the study of B5G patent landscape and key\n",
       "<span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">technology</span> interaction roadmap in comprehensive steps as valuable reference for B5G mobile <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">network</span> as well as for \n",
       "conducting tech mining of other <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">technology</span> domains of interests\n",
       "</pre>\n"
      ],
      "text/plain": [
       "Beyond 5G B5G in mobile \u001b[30;48;2;255;255;0mnetwork\u001b[0m \u001b[30;48;2;255;255;0mtechnologies\u001b[0m is the latest communication \u001b[30;48;2;255;255;0mtechnology\u001b[0m cur rently under development \n",
       "B5G is expected to achieve superior capabilities in ultra high \u001b[30;48;2;255;255;0mnetwork\u001b[0m transmission speed low latency low energy \n",
       "consumption and high coverage comparing to cur rent 5G \u001b[30;48;2;255;255;0mnetwork\u001b[0m performance Although B5G is still in the development\n",
       "and implementation stage there are many \u001b[30;48;2;255;255;0mpatents\u001b[0m and non patent literature depicting B5G innovative \u001b[30;48;2;255;255;0mtechnologies\u001b[0m and\n",
       "applications The landscapes of B5G \u001b[30;48;2;255;255;0mtechnologies\u001b[0m are great references for governments and industries to understand \n",
       "the advances in mobile communication for strategies Thus this research focuses on developing formal tech mining \n",
       "workflow integrating semantic based patent and non patent literature analysis for ontology building patent \n",
       "\u001b[30;48;2;255;255;0mtechnological\u001b[0m topic clustering and graph convolutional \u001b[30;48;2;255;255;0mnetwork\u001b[0m GCN modeling for depicting key \u001b[30;48;2;255;255;0mtechnology\u001b[0m \n",
       "interactions among clusters of sub domain topics This research emphasizes the study of B5G patent landscape and key\n",
       "\u001b[30;48;2;255;255;0mtechnology\u001b[0m interaction roadmap in comprehensive steps as valuable reference for B5G mobile \u001b[30;48;2;255;255;0mnetwork\u001b[0m as well as for \n",
       "conducting tech mining of other \u001b[30;48;2;255;255;0mtechnology\u001b[0m domains of interests\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Passage 2:\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">Future work <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">sentences</span> FWS are the particular <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">sentences</span> in academic papers that contain the author description of \n",
       "their proposed follow up research direction This paper presents methods to automatically extract FWS from academic \n",
       "papers and classify them according to the different future directions embodied in the paper content FWS recognition\n",
       "methods will enable subsequent researchers to locate future work <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">sentences</span> more accurately and quickly and reduce \n",
       "the time and cost of acquiring the <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">corpus</span> At the same time changes in the content of future work will be \n",
       "illuminated and foundation will be laid for more in depth <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">semantic</span> analysis of future work <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">sentences</span> The current \n",
       "work on automatic identification of future work <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">sentences</span> is relatively small and the existing research cannot \n",
       "accurately identify FWS from academic papers and thus cannot conduct data mining on large scale Furthermore there \n",
       "are many aspects to the content of future work and the subdivision of the content is conducive to the analysis of \n",
       "specific development directions In this paper Nature Language Processing <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">NLP</span> is used as case study and FWS are \n",
       "extracted from academic papers and classified into different types We manually build an <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">annotated</span> <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">corpus</span> with six \n",
       "different types of FWS Then automatic recognition and classification of FWS are implemented using machine learning \n",
       "models and the performance of these models is compared based on the evaluation metrics The results show that the \n",
       "Bernoulli Bayesian model has the best performance in the automatic recognition task with the Macro reaching 90 73 \n",
       "and the SCIBERT model has the best performance in the automatic classification task with the weighted average \n",
       "reaching 72 63 Finally we extract keywords from FWS and gain deep understanding of the key content described in FWS\n",
       "and we also demonstrate that content determination in FWS will be reflected in the subsequent research work by \n",
       "measuring the similarity between future work <span style=\"color: #000000; text-decoration-color: #000000; background-color: #ffff00\">sentences</span> and the abstracts\n",
       "</pre>\n"
      ],
      "text/plain": [
       "Future work \u001b[30;48;2;255;255;0msentences\u001b[0m FWS are the particular \u001b[30;48;2;255;255;0msentences\u001b[0m in academic papers that contain the author description of \n",
       "their proposed follow up research direction This paper presents methods to automatically extract FWS from academic \n",
       "papers and classify them according to the different future directions embodied in the paper content FWS recognition\n",
       "methods will enable subsequent researchers to locate future work \u001b[30;48;2;255;255;0msentences\u001b[0m more accurately and quickly and reduce \n",
       "the time and cost of acquiring the \u001b[30;48;2;255;255;0mcorpus\u001b[0m At the same time changes in the content of future work will be \n",
       "illuminated and foundation will be laid for more in depth \u001b[30;48;2;255;255;0msemantic\u001b[0m analysis of future work \u001b[30;48;2;255;255;0msentences\u001b[0m The current \n",
       "work on automatic identification of future work \u001b[30;48;2;255;255;0msentences\u001b[0m is relatively small and the existing research cannot \n",
       "accurately identify FWS from academic papers and thus cannot conduct data mining on large scale Furthermore there \n",
       "are many aspects to the content of future work and the subdivision of the content is conducive to the analysis of \n",
       "specific development directions In this paper Nature Language Processing \u001b[30;48;2;255;255;0mNLP\u001b[0m is used as case study and FWS are \n",
       "extracted from academic papers and classified into different types We manually build an \u001b[30;48;2;255;255;0mannotated\u001b[0m \u001b[30;48;2;255;255;0mcorpus\u001b[0m with six \n",
       "different types of FWS Then automatic recognition and classification of FWS are implemented using machine learning \n",
       "models and the performance of these models is compared based on the evaluation metrics The results show that the \n",
       "Bernoulli Bayesian model has the best performance in the automatic recognition task with the Macro reaching 90 73 \n",
       "and the SCIBERT model has the best performance in the automatic classification task with the weighted average \n",
       "reaching 72 63 Finally we extract keywords from FWS and gain deep understanding of the key content described in FWS\n",
       "and we also demonstrate that content determination in FWS will be reflected in the subsequent research work by \n",
       "measuring the similarity between future work \u001b[30;48;2;255;255;0msentences\u001b[0m and the abstracts\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "for idx, doc in enumerate(docs):\n",
    "    print('Passage ' + str(1+idx) + ':')\n",
    "    kw_model.extract_keywords(doc, highlight=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Methods for Diversifying the Extracted Results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 4.1 Max Sum Distance\n",
    "Max Sum Distance takes the `nr_candidates` candidates most similar to the document and returns the combination of `top_n` keyphrases among them that are least similar to each other."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Keywords for passage 1:\n",
      " [('b5g mobile network', 0.4536), ('reference b5g mobile', 0.4643), ('technology interactions clusters', 0.4797), ('tech mining workflow', 0.4875), ('semantic based patent', 0.4887)]\n",
       "Keywords for passage 2:\n",
      " [('extract keywords fws', 0.4728), ('annotated corpus different', 0.509), ('automatic classification task', 0.5091), ('future work sentences', 0.5221), ('nature language processing', 0.563)]\n"
     ]
    }
   ],
   "source": [
    "for idx, doc in enumerate(docs):\n",
    "    keywords = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 3), stop_words='english',\n",
    "                              use_maxsum=True, nr_candidates=20, top_n=5)  # this line is what differs from the MMR example below\n",
    "    print('Keywords for passage ' + str(1+idx) + ':\\n', keywords)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 4.2 Maximal Marginal Relevance\n",
    "Maximal Marginal Relevance (MMR) builds the result set one keyphrase at a time, at each step picking the candidate that best balances similarity to the document against dissimilarity to the keyphrases already chosen; the `diversity` parameter sets that trade-off."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Keywords for passage 1:\n",
      " [('b5g patent landscape', 0.5509), ('workflow integrating semantic', 0.3689), ('topic clustering graph', 0.3276), ('network transmission speed', 0.2455), ('coverage comparing cur', 0.0773)]\n",
       "Keywords for passage 2:\n",
      " [('annotated corpus', 0.5864), ('different future', 0.2285), ('types manually build', 0.1857), ('determination fws reflected', 0.0886), ('average reaching 72', -0.1012)]\n"
     ]
    }
   ],
   "source": [
    "for idx, doc in enumerate(docs):\n",
    "    keywords = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 3), stop_words='english',\n",
    "                              use_mmr=True, diversity=0.7)\n",
    "    print('Keywords for passage ' + str(1+idx) + ':\\n', keywords)"
   ]
  },
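  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `diversity` argument (between 0 and 1) sets how strongly MMR penalizes candidates that resemble the keyphrases already selected. A small sketch comparing a low and a high setting (results will depend on the embedding model):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Low diversity stays close to the plain relevance ranking;\n",
    "# high diversity trades relevance for variety among the keyphrases.\n",
    "for idx, doc in enumerate(docs):\n",
    "    for diversity in (0.2, 0.7):\n",
    "        keywords = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 3), stop_words='english',\n",
    "                                             use_mmr=True, diversity=diversity)\n",
    "        print('Passage ' + str(1+idx) + ', diversity=' + str(diversity) + ':', keywords)"
   ]
  },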
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Using Other Embedding Models\n",
    "##### Any of the models listed at the link below can be used\n",
    "- https://maartengr.github.io/KeyBERT/guides/embeddings.html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 5.1 Sentence Transformers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Both of the following approaches give the same result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Keywords for passage 1:\n",
      " [('technologies', 0.3665), ('technological', 0.3594), ('patents', 0.3369), ('technology', 0.3317), ('network', 0.3291)]\n",
       "Keywords for passage 2:\n",
      " [('corpus', 0.491), ('nlp', 0.441), ('annotated', 0.4301), ('semantic', 0.418), ('sentences', 0.3916)]\n"
     ]
    }
   ],
   "source": [
    "# Approach 1: pass the model name directly to KeyBERT\n",
    "from keybert import KeyBERT\n",
    "kw_model = KeyBERT(model=\"all-MiniLM-L6-v2\")\n",
    "\n",
    "for idx, doc in enumerate(docs):\n",
    "    keywords = kw_model.extract_keywords(doc)\n",
    "    print('Keywords for passage ' + str(1+idx) + ':\\n', keywords)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Keywords for passage 1:\n",
      " [('technologies', 0.3665), ('technological', 0.3594), ('patents', 0.3369), ('technology', 0.3317), ('network', 0.3291)]\n",
       "Keywords for passage 2:\n",
      " [('corpus', 0.491), ('nlp', 0.441), ('annotated', 0.4301), ('semantic', 0.418), ('sentences', 0.3916)]\n"
     ]
    }
   ],
   "source": [
    "# Approach 2: pass a SentenceTransformer instance\n",
    "from sentence_transformers import SentenceTransformer\n",
    "\n",
    "sentence_model = SentenceTransformer(\"all-MiniLM-L6-v2\")\n",
    "kw_model = KeyBERT(model=sentence_model)\n",
    "\n",
    "for idx, doc in enumerate(docs):\n",
    "    keywords = kw_model.extract_keywords(doc)\n",
    "    print('Keywords for passage ' + str(1+idx) + ':\\n', keywords)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 5.2 Hugging Face Transformers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Using a model from https://huggingface.co/models"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of the model checkpoint at distilbert-base-cased were not used when initializing DistilBertModel: ['vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_transform.bias', 'vocab_projector.bias', 'vocab_transform.weight', 'vocab_layer_norm.weight']\n",
      "- This IS expected if you are initializing DistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing DistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Keywords for passage 1:\n",
      " [('convolutional', 0.6968), ('roadmap', 0.6921), ('b5g', 0.6057), ('clustering', 0.6023), ('latency', 0.5944)]\n",
       "Keywords for passage 2:\n",
      " [('conducive', 0.6277), ('bernoulli', 0.6258), ('bayesian', 0.6025), ('annotated', 0.5965), ('metrics', 0.5796)]\n"
     ]
    }
   ],
   "source": [
    "from transformers.pipelines import pipeline\n",
    "\n",
    "hf_model = pipeline(\"feature-extraction\", model=\"distilbert-base-cased\")\n",
    "kw_model = KeyBERT(model=hf_model)\n",
    "\n",
    "for idx, doc in enumerate(docs):\n",
    "    keywords = kw_model.extract_keywords(doc)\n",
    "    print('Keywords for passage ' + str(1+idx) + ':\\n', keywords)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
