{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "59ac952c",
   "metadata": {},
   "source": [
    "# Tutorial 3: Word Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "459abf42",
   "metadata": {},
   "source": [
    "## Stacked Embeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "adbddebf",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:40:45.513059Z",
     "start_time": "2021-09-12T05:40:38.291310Z"
    }
   },
   "outputs": [],
   "source": [
    "from flair.embeddings import WordEmbeddings, FlairEmbeddings\n",
    "\n",
    "# init standard GloVe embedding  # 初始化glove  embedding\n",
    "glove_embedding = WordEmbeddings('glove')\n",
    "\n",
    "# init Flair forward and backwards embeddings  # 初始化flair的前向embedding和反向embedding\n",
    "flair_embedding_forward = FlairEmbeddings('news-forward')\n",
    "flair_embedding_backward = FlairEmbeddings('news-backward')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9991b82e",
   "metadata": {},
   "source": [
    "现在实例化stackedEmbeddings类并将包含这两个嵌入的列表传递给它"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "ed762a09",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:40:45.528972Z",
     "start_time": "2021-09-12T05:40:45.515056Z"
    }
   },
   "outputs": [],
   "source": [
    "from flair.embeddings import StackedEmbeddings\n",
    "\n",
    "# create a StackedEmbedding object that combines glove and forward/backward flair embeddings\n",
    "# \n",
    "stacked_embeddings = StackedEmbeddings([\n",
    "glove_embedding,\n",
    "flair_embedding_forward,\n",
    "flair_embedding_backward,\n",
    "])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "410400b1",
   "metadata": {},
   "source": [
    "单词现在是用三种不同的嵌入方式串联而成的。这意味着得到的嵌入向量仍然是一个pytorch向量"
   ]
  },
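  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "As a minimal sketch of what this concatenation means (plain Python with made-up numbers, not Flair code): the stacked vector is simply the component vectors joined end to end, so its length is the sum of their lengths."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy vectors standing in for the glove / flair-forward / flair-backward\n",
    "# embeddings of a single token (illustrative values and sizes only)\n",
    "glove_vec = [0.1, 0.2]   # pretend GloVe dim = 2\n",
    "flair_fwd = [0.3]        # pretend forward dim = 1\n",
    "flair_bwd = [0.4]        # pretend backward dim = 1\n",
    "\n",
    "stacked = glove_vec + flair_fwd + flair_bwd  # simple concatenation\n",
    "print(stacked)       # [0.1, 0.2, 0.3, 0.4]\n",
    "print(len(stacked))  # 4 == 2 + 1 + 1"
   ]
  },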
  {
   "cell_type": "markdown",
   "id": "e9866e30",
   "metadata": {},
   "source": [
    "现在像使用其他embedding方法一样，调用emded()方法就可以了"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "0e8e8e48",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:40:45.699657Z",
     "start_time": "2021-09-12T05:40:45.530287Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Token: 1 The\n",
      "tensor([-0.0382, -0.2449,  0.7281,  ..., -0.0065, -0.0053,  0.0090])\n",
      "Token: 2 grass\n",
      "tensor([-0.8135,  0.9404, -0.2405,  ...,  0.0354, -0.0255, -0.0143])\n",
      "Token: 3 is\n",
      "tensor([-5.4264e-01,  4.1476e-01,  1.0322e+00,  ..., -5.3691e-04,\n",
      "        -9.6750e-03, -2.7541e-02])\n",
      "Token: 4 green\n",
      "tensor([-0.6791,  0.3491, -0.2398,  ..., -0.0007, -0.1333,  0.0161])\n",
      "Token: 5 .\n",
      "tensor([-0.3398,  0.2094,  0.4635,  ...,  0.0005, -0.0177,  0.0032])\n"
     ]
    }
   ],
   "source": [
    "from flair.data import Sentence\n",
    "sentence = Sentence('The grass is green .')\n",
    "\n",
    "# just embed a sentence using the StackedEmbedding as you would with any single embedding.\n",
    "stacked_embeddings.embed(sentence)\n",
    "\n",
    "# now check out the embedded tokens.\n",
    "for token in sentence:\n",
    "    print(token)\n",
    "    print(token.embedding)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b5a4a8b",
   "metadata": {},
   "source": [
    "# Tutorial 4: List of All Word Embeddings "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a9136f09",
   "metadata": {},
   "source": [
    "## Combining BERT and Flair"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e05b27e9",
   "metadata": {},
   "source": [
    "假设我们想要结合多语言Flair和BERT嵌入来训练一个超强大的多语言下游任务模型。首先，实例化你想要组合的嵌入:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "cdb448c6",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:02.053464Z",
     "start_time": "2021-09-12T05:40:45.701306Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "63bd5eed536c4c0b873e1aee907e91b0",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Downloading:   0%|          | 0.00/29.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from flair.embeddings import FlairEmbeddings, TransformerWordEmbeddings\n",
    "\n",
    "# init Flair embeddings \n",
    "flair_forward_embedding = FlairEmbeddings('multi-forward')\n",
    "flair_backward_embedding = FlairEmbeddings('multi-backward')\n",
    "\n",
    "# init multilingual BERT  # 初始化使用多语言的BERT\n",
    "bert_embedding = TransformerWordEmbeddings('bert-base-multilingual-cased') # 下载预训练模型"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "63cff91a",
   "metadata": {},
   "source": [
    "现在实例化StackedEmbeddings类并将包含这三个嵌入的列表传递给它。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "052b2365",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:02.068845Z",
     "start_time": "2021-09-12T05:41:02.054493Z"
    }
   },
   "outputs": [],
   "source": [
    "from flair.embeddings import StackedEmbeddings\n",
    "\n",
    "# now create the StackedEmbedding object that combines all embeddings\n",
    "stacked_embeddings = StackedEmbeddings(\n",
    "    embeddings=[flair_forward_embedding, \n",
    "                flair_backward_embedding, \n",
    "                bert_embedding])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "81537598",
   "metadata": {},
   "source": [
    "就是这样!现在就像其他的嵌入一样使用这个嵌入，也就是在你的句子上调用embed()方法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "ebd6fb97",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:02.347495Z",
     "start_time": "2021-09-12T05:41:02.070345Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Token: 1 The\n",
      "tensor([ 0.6800,  0.2429,  0.0012,  ...,  0.7343, -0.0732,  0.1896])\n",
      "shape: torch.Size([4864])\n",
      "Token: 2 grass\n",
      "tensor([ 2.9200e-01,  2.2066e-02,  4.5290e-05,  ...,  9.8494e-01,\n",
      "        -5.7341e-01,  6.8034e-01])\n",
      "shape: torch.Size([4864])\n",
      "Token: 3 is\n",
      "tensor([-0.5447,  0.0229,  0.0078,  ..., -0.2840, -0.1061, -0.0851])\n",
      "shape: torch.Size([4864])\n",
      "Token: 4 green\n",
      "tensor([0.1477, 0.1097, 0.0009,  ..., 0.0203, 0.5680, 0.0867])\n",
      "shape: torch.Size([4864])\n",
      "Token: 5 .\n",
      "tensor([-1.5555e-01,  6.7598e-03,  5.3829e-06,  ..., -4.0763e-01,\n",
      "         1.7429e-01,  3.1956e-02])\n",
      "shape: torch.Size([4864])\n"
     ]
    }
   ],
   "source": [
    "sentence = Sentence('The grass is green .')\n",
    "\n",
    "# just embed a sentence using the StackedEmbedding as you would with any single embedding.\n",
    "stacked_embeddings.embed(sentence)\n",
    "\n",
    "# now check out the embedded tokens.\n",
    "for token in sentence:\n",
    "    print(token)\n",
    "    print(token.embedding)\n",
    "    print('shape:', token.embedding.shape)"
   ]
  },
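  {
   "cell_type": "markdown",
   "id": "b2c3d4e6",
   "metadata": {},
   "source": [
    "A quick sanity check on the shape printed above: the stacked dimensionality is the sum of the component dimensions (assuming 2048 per Flair multi-forward/backward direction and the 768 hidden size of BERT base)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e7",
   "metadata": {},
   "outputs": [],
   "source": [
    "flair_dim = 2048  # each Flair multi-forward / multi-backward direction (assumed)\n",
    "bert_dim = 768    # bert-base hidden size\n",
    "print(2 * flair_dim + bert_dim)  # 4864, matching torch.Size([4864]) above"
   ]
  },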
  {
   "cell_type": "markdown",
   "id": "77e7009d",
   "metadata": {},
   "source": [
    "尝试中文"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "1e0d6899",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:02.598371Z",
     "start_time": "2021-09-12T05:41:02.351110Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Token: 1 你\n",
      "tensor([0.0496, 0.0218, 0.0042,  ..., 1.5776, 0.1912, 0.1734])\n",
      "shape: torch.Size([4864])\n",
      "Token: 2 好\n",
      "tensor([ 0.2921,  0.0714,  0.0501,  ...,  1.0838,  0.4766, -0.4249])\n",
      "shape: torch.Size([4864])\n",
      "Token: 3 啊\n",
      "tensor([-0.1395, -0.0005,  0.0060,  ..., -0.1362,  0.1712, -0.3007])\n",
      "shape: torch.Size([4864])\n",
      "Token: 4 ，\n",
      "tensor([4.9875e-01, 1.2800e-02, 2.3627e-04,  ..., 5.4766e-01, 5.9460e-01,\n",
      "        2.4918e-01])\n",
      "shape: torch.Size([4864])\n",
      "Token: 5 我\n",
      "tensor([0.0226, 0.0991, 0.0074,  ..., 0.3598, 0.6874, 0.1123])\n",
      "shape: torch.Size([4864])\n",
      "Token: 6 是\n",
      "tensor([-0.0689,  0.0251,  0.0788,  ...,  0.3989,  0.7988,  0.1105])\n",
      "shape: torch.Size([4864])\n",
      "Token: 7 你\n",
      "tensor([ 0.0062, -0.0023,  0.0344,  ...,  0.7846,  0.1295,  0.4847])\n",
      "shape: torch.Size([4864])\n",
      "Token: 8 爸爸\n",
      "tensor([-0.2086,  0.0043,  0.0069,  ...,  1.8276,  0.3253,  0.7029])\n",
      "shape: torch.Size([4864])\n"
     ]
    }
   ],
   "source": [
    "sentence = Sentence('你 好 啊 ， 我 是 你 爸爸')\n",
    "\n",
    "# just embed a sentence using the StackedEmbedding as you would with any single embedding.\n",
    "stacked_embeddings.embed(sentence)\n",
    "\n",
    "# now check out the embedded tokens.\n",
    "for token in sentence:\n",
    "    print(token)\n",
    "    print(token.embedding)\n",
    "    print('shape:', token.embedding.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f78d47fc",
   "metadata": {},
   "source": [
    "单词现在是用三种不同的嵌入方式串联而成的。这意味着得到的嵌入向量仍然是一个PyTorch向量。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fd22c603",
   "metadata": {},
   "source": [
    "# Tutorial 5: Document Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "81637504",
   "metadata": {},
   "source": [
    "文档嵌入与单词嵌入不同，他们为整个文本提供一个嵌入，而单词嵌入则为单个单词提供嵌入。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "159b6715",
   "metadata": {},
   "source": [
    "本教程，我们假设您熟悉这个库的基本类型（sentence，token），以及单词嵌入是如何工作的。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0ff8d7a0",
   "metadata": {},
   "source": [
    "## Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b4290df5",
   "metadata": {},
   "source": [
    "所有文档嵌入类都继承自DocumentEmbeddings类，并实现了嵌入文本时需要调用的embed()方法。这意味着对于大多数Flair用户来说，不同嵌入的复杂性仍然隐藏在这个界面后面。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b733601e",
   "metadata": {},
   "source": [
    "在Flair中有四种主要的文档嵌入:（1）DocumentPoolEmbeddings简单地对句子中所有的单词嵌入做一个平均值。（2）DocumentRNNEmbeddings训练RNN在句子中所有的单词嵌入。（3）TransformerDocumentEmbeddings使用预先训练过的transformer，推荐用于大多数文本分类任务。（4）SentenceTransformerDocumentEmbeddings使用预先训练过的transformer，如果你需要一个好的句子向量表示，推荐使用它。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "32a5ac8f",
   "metadata": {},
   "source": [
    "初始化这四个选项中的一个，然后调用embed()来嵌入你的句子。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a707c6a4",
   "metadata": {},
   "source": [
    "我们在以下给出了所有四种文档嵌入的详细信息:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b114830f",
   "metadata": {},
   "source": [
    "## Document Pool Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "680bf723",
   "metadata": {},
   "source": [
    "最简单的文档嵌入类型对一个句子中的所有单词嵌入进行池操作，以获得整个句子的嵌入。默认值是平均池，即使用所有单词嵌入的平均值。"
   ]
  },
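  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "As a minimal sketch of what mean pooling does (plain Python with made-up numbers, not Flair code): average each dimension across all tokens of the sentence."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3d4e5f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy word embeddings for a 3-token sentence, dimension 4 (illustrative values only)\n",
    "vectors = [[1.0, 2.0, 3.0, 4.0],\n",
    "           [3.0, 2.0, 1.0, 0.0],\n",
    "           [2.0, 2.0, 2.0, 2.0]]\n",
    "\n",
    "# mean pooling: average each dimension across all tokens\n",
    "doc_embedding = [sum(col) / len(vectors) for col in zip(*vectors)]\n",
    "print(doc_embedding)  # [2.0, 2.0, 2.0, 2.0]"
   ]
  },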
  {
   "cell_type": "markdown",
   "id": "9edadb8e",
   "metadata": {},
   "source": [
    "要实例化，你需要传递一个单词嵌入的列表到pool over:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "5859ce30",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:03.888832Z",
     "start_time": "2021-09-12T05:41:02.600902Z"
    }
   },
   "outputs": [],
   "source": [
    "from flair.embeddings import WordEmbeddings, DocumentPoolEmbeddings\n",
    "\n",
    "# initialize the word embeddings\n",
    "glove_embedding = WordEmbeddings('glove')\n",
    "\n",
    "# initialize the document embeddings, mode = mean\n",
    "document_embeddings = DocumentPoolEmbeddings([glove_embedding])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a07320c",
   "metadata": {},
   "source": [
    "现在，创建一个示例句子并调用嵌入的embed()方法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "8e8102b3",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:03.919688Z",
     "start_time": "2021-09-12T05:41:03.890645Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([-3.1970e-01,  2.6206e-01,  4.0371e-01, -4.8223e-01,  2.1118e-01,\n",
      "         8.5380e-02, -6.0909e-02,  2.2149e-01, -2.4234e-01, -1.0128e-01,\n",
      "         8.6213e-02, -1.6874e-01,  3.4736e-01,  6.7267e-02,  2.2750e-01,\n",
      "        -2.5534e-01,  3.9017e-01,  9.6975e-03, -9.5909e-03,  2.8388e-02,\n",
      "        -3.2033e-02, -5.7822e-03,  2.8569e-01,  4.0082e-02,  5.8185e-01,\n",
      "         2.3183e-01,  5.9500e-02, -5.7468e-01, -2.0337e-01, -1.7826e-01,\n",
      "        -1.8182e-01,  4.7222e-01,  9.8503e-02,  1.0854e-01,  1.9359e-01,\n",
      "         2.9041e-01,  1.5739e-04,  4.3389e-01,  1.8119e-01, -1.1405e-01,\n",
      "        -3.4222e-01, -4.9730e-01,  1.6268e-02, -1.8057e-01,  2.5105e-02,\n",
      "         1.4868e-02,  2.3021e-01, -8.9935e-02, -4.4742e-02, -5.3620e-01,\n",
      "        -1.3269e-01, -1.3503e-01,  2.4511e-01,  1.2051e+00, -4.5334e-01,\n",
      "        -2.6632e+00,  2.7964e-02,  4.9859e-02,  1.5550e+00,  5.0574e-01,\n",
      "        -8.0093e-02,  6.9114e-01, -1.5679e-01,  2.3944e-01,  9.0704e-01,\n",
      "        -1.1536e-01,  3.8778e-01,  6.8844e-02,  3.2989e-01, -1.5260e-01,\n",
      "        -4.2541e-02, -2.4333e-01, -1.6738e-01, -3.1495e-01,  2.4115e-01,\n",
      "         6.8981e-02,  1.1922e-01,  1.4478e-01, -6.6563e-01,  6.9832e-02,\n",
      "         5.8356e-01,  8.6527e-02, -4.8388e-01,  1.5086e-01, -8.9072e-01,\n",
      "        -3.4816e-01, -2.6635e-02, -2.1770e-01,  3.8983e-01,  7.9085e-02,\n",
      "        -2.3757e-02, -5.3694e-01, -3.1125e-01,  4.7508e-01, -6.1099e-01,\n",
      "         9.1333e-02, -5.4229e-01, -2.9515e-01,  5.2432e-01,  1.9662e-01])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "torch.Size([100])"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# create an example sentence\n",
    "sentence = Sentence('The grass is green . And the sky is blue .')\n",
    "\n",
    "# embed the sentence with our document embedding\n",
    "document_embeddings.embed(sentence)\n",
    "\n",
    "# now check out the embedded sentence.\n",
    "print(sentence.embedding)\n",
    "sentence.embedding.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e0869dc",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-11T14:41:30.417776Z",
     "start_time": "2021-09-11T14:41:30.412726Z"
    }
   },
   "source": [
    "这将打印出文档的嵌入。由于文档嵌入来源于单词嵌入，它的维数取决于您正在使用的单词嵌入的维数。有关这些嵌入的更多细节，请点击这里。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eeaab0dd",
   "metadata": {},
   "source": [
    "DocumentPoolEmbeddings的一个优点是，它们不需要经过训练，您可以立即使用它们来嵌入您的文档。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae51ce6f",
   "metadata": {},
   "source": [
    "## Document RNN Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ed186fce",
   "metadata": {},
   "source": [
    "这些嵌入在句子中的所有单词上运行RNN，并使用RNN的最终状态作为整个文档的嵌入。为了使用documentnnembeddings，你需要通过向它传递一个token嵌入列表来初始化它们:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "31fdf3f4",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:05.336597Z",
     "start_time": "2021-09-12T05:41:03.921974Z"
    }
   },
   "outputs": [],
   "source": [
    "from flair.embeddings import WordEmbeddings, DocumentRNNEmbeddings\n",
    "\n",
    "glove_embedding = WordEmbeddings('glove')\n",
    "\n",
    "document_embeddings = DocumentRNNEmbeddings([glove_embedding])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8fe0504",
   "metadata": {},
   "source": [
    "缺省情况下，实例化一个gru类型RNN。现在，创建一个示例句子并调用嵌入的embed()方法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "740d0695",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:05.367133Z",
     "start_time": "2021-09-12T05:41:05.339511Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([-0.0340, -0.3352,  0.1210, -0.1654,  0.1235, -0.1208,  0.0079,  0.2060,\n",
      "        -0.1232, -0.2512,  0.0893, -0.4172,  0.3370,  0.0961,  0.1751,  0.2979,\n",
      "        -0.3542, -0.3157, -0.1072, -0.1309,  0.1961,  0.0537,  0.1877,  0.1638,\n",
      "        -0.0955, -0.0339,  0.1385,  0.1403, -0.0618, -0.1745,  0.3863, -0.2458,\n",
      "        -0.0775, -0.2071,  0.2854,  0.0330, -0.3152,  0.1557,  0.1594, -0.1175,\n",
      "         0.1512, -0.0497, -0.0114, -0.0674, -0.2849,  0.0901, -0.1630,  0.4579,\n",
      "        -0.1048,  0.0203, -0.4319, -0.0205, -0.3376,  0.0239, -0.0238, -0.1273,\n",
      "         0.1797,  0.3270,  0.4331,  0.1940, -0.0676, -0.0482, -0.3308, -0.0726,\n",
      "        -0.1037, -0.0846,  0.2341, -0.2229, -0.2095,  0.0865, -0.0403, -0.2971,\n",
      "         0.0639, -0.1101,  0.3477,  0.0100, -0.3020,  0.2504,  0.1825, -0.2522,\n",
      "        -0.1172, -0.1189, -0.1316,  0.1422, -0.0675,  0.0140,  0.2346,  0.1045,\n",
      "         0.4651,  0.1878,  0.3542, -0.0865,  0.1632,  0.4904,  0.2301, -0.3429,\n",
      "        -0.0782,  0.1863, -0.2349, -0.0577,  0.4073,  0.1726,  0.0204, -0.1007,\n",
      "        -0.2256, -0.5296,  0.1500, -0.0250, -0.0632,  0.5743, -0.2586, -0.1833,\n",
      "        -0.1141, -0.0428,  0.0642, -0.4074,  0.2051, -0.3782,  0.3899,  0.1443,\n",
      "        -0.3637,  0.1972,  0.0686, -0.2338,  0.1032, -0.0471, -0.1561,  0.2363],\n",
      "       grad_fn=<CatBackward>)\n",
      "torch.Size([128])\n"
     ]
    }
   ],
   "source": [
    "# create an example sentence\n",
    "sentence = Sentence('The grass is green . And the sky is blue .')\n",
    "\n",
    "# embed the sentence with our document embedding\n",
    "document_embeddings.embed(sentence)\n",
    "\n",
    "# now check out the embedded sentence.\n",
    "print(sentence.get_embedding())\n",
    "print(sentence.embedding.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "61892ed1",
   "metadata": {},
   "source": [
    "这将输出完整句子的嵌入.嵌入维数取决于你使用的隐藏状态的数量，以及RNN是否是双向的。有关这些嵌入的更多细节，请点击这里。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "641e679e",
   "metadata": {},
   "source": [
    "注意，当您初始化这个嵌入时，RNN权值是随机初始化的。所以这种嵌入需要经过训练才能有意义。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fdc43240",
   "metadata": {},
   "source": [
    "## TransformerDocumentEmbeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7b08189e",
   "metadata": {},
   "source": [
    "你可以直接从一个预先训练过的transform中嵌入整句话。对于您用不同标识符实例化的所有transform嵌入，它将得到不同的transform。例如，要加载标准BERT transform模型，请执行以下操作:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "59b2942a",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:17.851582Z",
     "start_time": "2021-09-12T05:41:05.368132Z"
    }
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "1c43b0494c674ad49d077517a2b0b28d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Downloading:   0%|          | 0.00/28.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([-1.2379e-01, -7.1736e-02, -4.1324e-01, -3.6507e-01,  1.9898e-02,\n",
      "        -6.1426e-01, -5.2476e-02,  1.2074e+00, -8.5157e-02, -3.3311e-01,\n",
      "         7.5330e-02, -3.0805e-01, -2.4355e-01,  6.2644e-01,  8.6127e-02,\n",
      "         1.7616e-01, -5.4266e-01,  4.5181e-01,  5.2220e-01, -2.1766e-03,\n",
      "         2.4609e-01, -2.9748e-01, -3.9866e-01, -3.9266e-02,  1.9254e-01,\n",
      "        -1.9127e-01,  1.9122e-01,  9.1355e-02,  8.0262e-02, -2.2298e-02,\n",
      "        -2.0039e-01,  5.1144e-01, -4.6748e-01, -1.9359e-01,  3.4374e-01,\n",
      "         4.3602e-02,  6.9202e-01, -6.4944e-03, -2.8228e-01,  1.6069e-01,\n",
      "        -5.9182e-02,  6.3777e-02,  3.2847e-01, -4.1113e-01,  3.9246e-01,\n",
      "        -6.8195e-01, -2.1295e+00, -3.0087e-02,  1.0433e-01, -3.6390e-03,\n",
      "         1.9925e-01, -3.4364e-01,  1.1795e-01,  8.7102e-01,  4.1166e-01,\n",
      "         9.1923e-01, -3.9618e-01,  7.2940e-01,  5.6748e-01,  2.6772e-01,\n",
      "         1.6463e-01,  4.8314e-02,  2.9039e-02,  2.0972e-01,  2.3602e-01,\n",
      "         6.4453e-01, -4.0465e-02, -2.0497e-02,  6.4228e-02,  2.1590e-01,\n",
      "        -6.7151e-01, -2.7227e-01,  6.6066e-01, -3.6342e-01, -1.5487e-01,\n",
      "         1.0432e-01, -4.8833e-01,  1.2698e-01, -6.6280e-01, -4.9248e-01,\n",
      "         3.6090e-01,  5.2993e-01,  1.8964e-01,  6.0593e-01, -7.4108e-02,\n",
      "         5.5553e-01, -7.6141e-01, -7.1126e-01, -5.5973e-02,  3.0334e-01,\n",
      "        -2.0399e-01,  3.3475e-01,  1.0568e-01,  8.6477e-01,  1.2650e-01,\n",
      "         1.6102e-01, -8.0360e-02,  1.4763e-01,  4.5942e-01,  4.2764e-01,\n",
      "         2.5352e-01,  4.8794e-01,  3.9262e-01, -2.2615e-01, -2.0717e-01,\n",
      "         2.1893e-02, -2.4947e-01, -5.7979e-01,  2.6557e-01, -2.2696e+00,\n",
      "         9.0447e-02,  1.5556e-01, -4.7196e-02,  1.3352e-01, -5.0180e-01,\n",
      "         1.0427e+00,  6.6042e-02, -4.0329e-02, -1.1802e-01,  1.6701e-01,\n",
      "         8.1990e-02,  5.3425e-02,  1.8084e-01, -6.3766e-01, -1.1362e-01,\n",
      "         2.7503e-01,  1.6799e-01,  2.5062e-01,  3.0076e-01,  3.4261e-01,\n",
      "         3.1537e-01,  7.6417e-01,  1.8187e-01, -3.2328e-01, -1.3885e-01,\n",
      "         7.7366e-02,  1.0090e+00, -1.3191e-01, -2.8290e-01, -1.0610e-01,\n",
      "        -6.5212e-01, -2.0568e-01, -3.0403e+00,  2.8077e-01,  6.6538e-01,\n",
      "         1.8085e-01, -2.3270e-01, -3.8090e-02, -9.1900e-03, -3.3024e-02,\n",
      "         7.5398e-01, -1.7157e-01,  1.7486e-01,  2.1718e-02, -3.7168e-01,\n",
      "        -2.2061e-01, -5.1511e-01,  2.5011e-02,  3.2882e-01,  3.5338e-01,\n",
      "         7.5705e-01, -3.6898e-01,  1.8107e-01, -2.2905e-01, -3.7372e-01,\n",
      "         2.0887e-01,  7.4625e-01,  2.5206e-01, -2.2033e-01,  1.0829e-01,\n",
      "        -3.2834e-01,  2.6543e-01,  1.3630e-01, -1.9449e-01,  1.5879e-01,\n",
      "        -5.9119e-01, -1.4987e-01,  6.1639e-01,  1.5222e-01, -1.2724e-01,\n",
      "        -6.7128e-02,  4.4147e-01,  1.5661e-01,  4.1892e-02,  4.6715e-01,\n",
      "        -4.3955e-01,  2.9791e-01, -9.8744e-02, -3.8943e-01,  2.5416e-01,\n",
      "         1.8193e-01, -2.0932e-01, -1.6235e-01,  2.3009e-01,  1.5340e-01,\n",
      "        -3.3672e-01,  1.7551e-01, -1.0694e+00, -1.6851e-01,  4.2456e-01,\n",
      "         8.3628e-02, -3.9452e-02, -2.4752e-01,  6.5056e-01, -2.8881e-01,\n",
      "         4.0099e+00,  2.8141e-01, -8.7182e-01,  2.5033e-01,  1.3811e-01,\n",
      "        -5.3517e-01, -4.2847e-03, -1.7797e-01, -1.9366e-01,  2.2027e-01,\n",
      "        -2.4429e-01,  3.7465e-01,  1.9875e-01, -7.9465e-03, -3.7142e-01,\n",
      "         4.6877e-01,  3.6391e-01, -4.1122e-01,  6.4485e-01, -5.3261e-01,\n",
      "        -1.8993e-01,  2.2226e-01,  9.0126e-01, -4.1928e-01, -1.2096e+00,\n",
      "         3.3100e-01, -1.0124e-01, -2.7215e-01,  4.2887e-01, -5.3996e-01,\n",
      "        -1.3608e-01, -4.7538e-01, -7.6099e-01,  5.0010e-01,  1.1501e-01,\n",
      "         2.4796e-02, -1.1558e-03, -5.8932e-02,  6.1211e-02, -1.9241e-02,\n",
      "         5.0781e-01,  1.7196e-01,  2.1157e-02,  2.0202e-01, -4.1000e-01,\n",
      "         6.1553e-01, -2.8829e-01,  4.3389e-01, -4.8317e-01, -5.8389e-01,\n",
      "         3.9776e-02,  2.8242e-01,  2.2416e-01, -4.6592e-01,  1.1809e-01,\n",
      "        -2.5521e-01,  7.4023e-02,  5.4015e-01,  3.0925e-03, -4.7843e-01,\n",
      "        -5.2549e-01,  2.2602e-01,  7.1190e-02, -3.9816e-01, -2.3396e-01,\n",
      "        -7.5192e-02, -1.8097e-01, -2.6927e-01, -3.5664e+00, -1.1872e-01,\n",
      "         9.4691e-02,  3.4470e-01,  3.0915e-01, -6.9287e-01,  5.9637e-01,\n",
      "         3.2432e-01,  2.8254e-01, -7.1234e-01,  7.1152e-01, -9.3223e-02,\n",
      "         2.7513e-01, -1.5879e-02,  3.1129e-02,  1.7534e-01, -2.3809e-01,\n",
      "        -2.8541e-01, -1.0587e-01, -1.1629e-01, -1.8081e-01, -2.0531e-03,\n",
      "        -1.1664e-01, -1.5688e-02, -1.0189e-01, -5.0784e-02, -3.4578e-01,\n",
      "        -2.7286e-01,  1.5687e-01, -3.2202e-01, -8.8490e-02, -5.6357e-01,\n",
      "         4.7134e-01, -1.3940e-01,  2.8582e-02, -1.7892e+00,  8.0909e-01,\n",
      "        -2.4978e-01,  5.2194e-02,  1.1957e-01, -9.8101e-02,  6.2372e-01,\n",
      "        -3.1535e-01, -1.0200e+00,  4.0500e-01,  4.0230e-01, -2.2831e-01,\n",
      "         1.2112e-01,  5.8751e-01,  7.8676e-01,  3.1792e-01,  2.4750e-01,\n",
      "        -3.0073e-01,  1.9734e-01, -3.6639e-02, -2.8118e-01,  1.4713e-01,\n",
      "        -2.7859e-01, -3.1107e-01,  3.4646e-01,  6.1774e-01,  9.3268e-02,\n",
      "        -1.4150e-01, -5.5272e-01, -8.7644e-02, -2.5787e-01,  3.0838e-02,\n",
      "         1.3823e-01, -2.7650e-01, -1.2215e-01, -3.7569e-01,  3.3749e-01,\n",
      "         3.3921e-02,  5.8080e-01,  7.9561e-02, -6.3691e-02,  5.5453e-01,\n",
      "         2.2945e-01,  4.8697e-01,  8.5361e-01,  9.7716e-02, -1.5864e-01,\n",
      "        -3.1817e-01,  1.7988e-01,  4.0080e-01, -2.8430e-01,  4.1118e-01,\n",
      "         1.3688e+00,  3.0907e-03, -3.5115e-01, -2.7589e-01,  6.2701e-01,\n",
      "        -4.2110e-02,  8.3642e-02,  2.9702e-01,  5.2578e-01, -5.1422e-01,\n",
      "        -6.6937e-02, -2.0965e-01,  2.3934e-01, -6.0201e-01,  7.1871e-01,\n",
      "        -3.6842e-01,  2.0518e-01,  1.7497e-01,  2.1329e-01,  2.3085e-02,\n",
      "        -1.5742e-02, -9.1324e-01, -4.0592e-01, -9.7665e-02, -4.1863e-01,\n",
      "        -1.8369e-01,  2.5471e-01, -3.1509e-01, -8.6470e-03, -4.3014e-01,\n",
      "        -1.8631e-01,  8.4404e-01, -3.6093e-01, -4.0840e-01, -2.3038e-01,\n",
      "        -2.4138e-01, -3.7854e-01, -5.0431e-01, -4.6215e-01,  3.4603e-01,\n",
      "         3.4704e-02,  4.1334e-01, -3.5200e-01, -1.8906e-01,  3.0380e-01,\n",
      "        -1.1209e+00, -1.5769e-01, -3.0300e-01, -4.0770e-01, -6.5215e-01,\n",
      "        -2.4582e-02,  4.6418e-01, -4.3807e-01, -1.6509e-01, -1.0972e-02,\n",
      "         4.2441e-01, -9.1316e-02,  5.2491e-01,  1.2000e-01, -9.0181e-01,\n",
      "         2.9796e-01, -2.0480e-02,  9.9540e-01, -4.5083e-02,  4.6973e-01,\n",
      "         6.8810e-01,  4.9046e-01,  2.7237e-01,  4.5847e-01,  5.4757e-02,\n",
      "        -1.9312e-01, -1.2711e-01, -4.5554e-01, -8.4500e-02,  1.1554e-01,\n",
      "        -6.3526e-01, -6.9183e-01, -1.5278e-01,  5.6395e-02, -4.7268e-01,\n",
      "        -3.9363e-01, -6.0999e-01, -2.2242e-01,  1.0381e-01, -4.8040e-01,\n",
      "        -2.3978e-02,  4.8888e-01,  1.9995e-01,  7.6768e-02,  4.1828e-03,\n",
      "        -5.1393e-01,  1.3127e-01, -2.6067e-01,  4.1444e-01,  4.5285e-01,\n",
      "        -2.4557e-01, -1.8290e-01,  4.3223e-01, -1.9556e-01, -3.9737e-01,\n",
      "        -3.2445e-01, -3.4857e-01,  5.8700e-02, -4.8327e-01,  8.3562e-02,\n",
      "        -7.8714e-02, -1.4836e-01, -3.6679e-01, -1.4159e-01,  3.4513e-01,\n",
      "        -1.4530e+00,  4.2608e-01,  3.9304e-02,  4.2254e-01,  6.4986e-01,\n",
      "        -1.6618e-01, -7.0335e-01,  9.7124e-01,  8.7732e-02,  4.4665e-01,\n",
      "        -3.3122e-01, -3.1798e-02, -2.2860e-01,  6.4872e-02, -3.1887e-01,\n",
      "         2.2621e-01,  1.5384e-01, -6.9118e-01,  4.4898e-02, -2.3276e-01,\n",
      "        -6.9377e-01,  2.8041e-01,  1.1004e-01, -3.6528e-02, -1.8088e-01,\n",
      "        -1.8032e-01, -8.5044e-02,  5.2053e-01,  7.3275e-02,  3.2020e-01,\n",
      "        -1.1020e-01, -7.1917e-01, -4.6747e-01, -2.6169e-01, -9.9039e-03,\n",
      "         2.2493e-01,  2.0352e-01, -4.7724e-01,  7.1913e-01,  5.0846e-01,\n",
      "        -4.6557e-01,  7.5830e-01, -1.2853e-01, -5.0938e-01,  6.1053e-01,\n",
      "         2.2914e-01, -2.5169e-01,  4.3295e-01, -2.3096e-01, -3.8358e-01,\n",
      "        -1.4029e-01, -3.6119e-01, -4.2979e-02,  6.0075e-02,  3.5109e-01,\n",
      "        -1.4576e-01, -1.6173e-01,  3.1335e-02, -3.9025e-01, -1.4854e-01,\n",
      "         6.4848e-01, -3.6335e-01, -9.9983e-01,  5.7384e-02, -6.3506e-02,\n",
      "        -4.8941e-01, -5.1297e-02, -5.2885e-01, -3.8798e-01,  3.0468e-01,\n",
      "         1.0583e+00, -7.4607e-02,  6.4055e-02,  3.2222e-01, -7.3155e-01,\n",
      "         9.6908e-02,  1.5196e-01,  2.1903e-01,  2.6442e-01, -2.6002e-01,\n",
      "         1.7422e-01,  3.9115e-02, -3.2359e-01,  2.1959e-01, -2.1681e-01,\n",
      "         3.3182e-01, -4.1750e-01, -2.2879e-01, -2.4267e-01,  1.5977e-01,\n",
      "        -6.9230e-01, -3.8281e-01,  5.8493e-01, -1.4365e-01,  9.3003e-02,\n",
      "         9.5742e-02,  2.8918e-01,  7.1080e-02,  2.3979e-01, -5.4381e-01,\n",
      "        -3.0050e-02,  6.3815e-01,  1.5361e-01, -2.4273e-01,  9.4823e-02,\n",
      "         5.1781e-01,  7.4240e-01, -4.6390e-01, -3.4130e-01, -2.4070e-01,\n",
      "        -2.3224e-01, -7.9983e-02, -2.5278e-01,  1.4878e-01, -2.7010e-01,\n",
      "        -5.5851e-01,  9.6145e-02, -6.9816e-01,  2.3820e+00,  8.5176e-01,\n",
      "         3.1401e-01, -5.6032e-01, -1.2034e-01, -3.6357e-02, -2.0890e-01,\n",
      "         6.6365e-01, -3.4948e-01, -3.4559e-02, -1.1513e-01, -2.4221e-01,\n",
      "        -5.2332e-03,  3.2340e-01,  7.2015e-01,  3.6196e-01, -3.3635e-01,\n",
      "        -2.8878e-01, -7.3527e-01,  3.2471e-02, -6.7112e-01,  1.0215e+00,\n",
      "         2.6034e-01, -8.5129e-02,  2.1972e-01,  1.6136e-01,  2.1255e-01,\n",
      "        -6.3793e-02, -2.4907e-02,  6.5250e-01, -1.5987e-01,  5.9132e-01,\n",
      "         1.3447e-01,  4.2199e-01, -3.4041e-01, -2.9813e-02, -2.0882e-01,\n",
      "        -1.4038e-01,  3.6891e-02,  2.9338e-01,  2.6023e-01, -5.0035e-01,\n",
      "         8.0525e-01,  1.8971e-01, -1.2781e-01,  6.2918e-01, -4.7216e-01,\n",
      "         3.7106e-03,  6.8154e-01,  4.4186e-01, -5.1669e-01, -2.8660e-01,\n",
      "        -4.0142e-01,  2.8405e-01, -4.2225e-01,  1.1962e-03,  1.2764e-01,\n",
      "        -3.3692e-01,  4.5807e-02,  1.3660e-02,  7.2032e-02,  2.6548e-01,\n",
      "         6.9284e-01, -5.1047e-01,  3.0849e-01, -1.4904e-01, -1.8758e-01,\n",
      "        -2.9365e-01,  4.3472e-01, -8.7540e-01,  2.6072e-01,  9.4495e-02,\n",
      "         6.8608e-01,  2.3990e-01,  2.5060e-01, -3.4801e-01,  1.5291e-01,\n",
      "         1.7698e-01, -5.3341e-01, -2.7064e+00,  6.6686e-02,  3.4016e-02,\n",
      "        -2.1024e-01,  9.4976e-02,  1.8824e-01,  3.8124e-01, -5.2173e-01,\n",
      "        -2.0668e-01, -3.7319e-01,  2.4519e-01,  6.8704e-01,  3.5944e-01,\n",
      "         5.5925e-01,  1.8446e-01,  8.5324e-02,  3.2510e-01, -2.4930e-01,\n",
      "        -7.8921e-02, -2.2265e-02,  1.8749e-01,  4.0944e-01,  1.5029e-01,\n",
      "        -4.4557e-01, -7.3447e-01,  7.6378e-01, -2.3159e-01, -2.2915e-01,\n",
      "        -1.7743e-01,  7.2161e-01, -8.6581e-02,  5.3476e-01,  3.4724e-04,\n",
      "        -5.1092e-02, -1.7553e-02, -1.7996e-01, -8.7415e-03,  3.4285e-02,\n",
      "         4.3627e-01,  2.1653e-01, -4.0717e-01,  2.6129e-01,  1.5059e-01,\n",
      "        -1.2278e-01,  7.2595e-02, -4.2775e-02,  6.6943e-01, -5.0008e-01,\n",
      "         5.1648e-01, -5.5753e-01, -6.8059e-02, -1.7289e-01,  3.9372e-01,\n",
      "        -5.6854e-02,  3.1331e-01, -1.4697e-02, -5.5419e-02, -3.2951e-01,\n",
      "        -3.1072e-01, -5.0492e-01,  3.8354e-01,  2.7133e-01, -1.0743e-01,\n",
      "         1.7193e-01,  5.7659e-01, -1.3387e-01, -2.9962e-01,  2.1225e-02,\n",
      "         1.4035e-01, -8.2608e-02, -3.2866e-01,  5.2600e-02,  4.4881e-01,\n",
      "         1.0438e-01, -1.1448e-01, -1.2498e-01,  5.8274e-01,  1.7686e-01,\n",
      "         3.3115e-01, -2.1358e-01,  2.0151e-01,  6.9759e-02, -7.6946e-01,\n",
      "         1.4261e-01,  2.4008e-02, -7.6200e+00, -8.6089e-02, -4.5848e-01,\n",
      "        -5.7740e-01,  2.7229e-01, -3.6690e-01, -1.1208e-01, -1.4547e-01,\n",
      "         1.6900e-01,  1.7095e-01, -2.5736e-01,  3.1358e-01,  5.7578e-02,\n",
      "        -5.7866e-01,  3.7992e-01,  4.8344e-01], grad_fn=<CatBackward>) torch.Size([768])\n"
     ]
    }
   ],
   "source": [
    "from flair.embeddings import TransformerDocumentEmbeddings\n",
    "\n",
    "# init embedding\n",
    "embedding = TransformerDocumentEmbeddings('bert-base-uncased')  # 是一个预训练模型\n",
    "\n",
    "# create a sentence\n",
    "sentence = Sentence('The grass is green .')\n",
    "\n",
    "# embed the sentence\n",
    "embedding.embed(sentence)\n",
    "print(sentence.embedding, sentence.embedding.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aff31120",
   "metadata": {},
   "source": [
    "如果你想用RoBERTa，可以:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "8afb967b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:30.358162Z",
     "start_time": "2021-09-12T05:41:17.853581Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([-5.4827e-02,  8.1445e-02, -2.3627e-02, -1.1889e-01,  8.8263e-02,\n",
      "        -1.0158e-01, -3.9760e-02,  2.5344e-02,  5.7176e-02, -4.3611e-02,\n",
      "        -1.6021e-02,  6.7109e-02,  4.5533e-02, -3.7056e-02,  8.3872e-02,\n",
      "         4.0680e-02, -3.2502e-02,  4.3945e-03,  2.0103e-02, -1.8845e-02,\n",
      "        -1.0430e-01,  5.6291e-02, -5.3668e-02,  6.8832e-02,  5.4544e-02,\n",
      "         2.9801e-02,  8.3801e-02,  7.5989e-02, -6.0098e-02, -8.6468e-03,\n",
      "         1.2593e-02,  3.3508e-03,  2.5977e-02, -5.2778e-03,  7.5220e-02,\n",
      "         8.6121e-02,  6.5642e-02, -3.4260e-02, -1.1228e-01,  2.4857e-02,\n",
      "        -4.3046e-03,  3.6223e-02,  5.6927e-02,  6.7388e-03,  8.7083e-02,\n",
      "         1.2816e-02,  7.2610e-03,  2.3794e-02, -4.2810e-02,  1.6011e-02,\n",
      "         6.6762e-03,  3.4390e-02, -6.0827e-02, -1.2070e-02, -6.6790e-02,\n",
      "         2.3046e-03,  2.3293e-02,  5.2836e-02,  9.3991e-02, -2.1734e-02,\n",
      "         8.3373e-03, -1.5891e-01, -1.0182e-01, -4.1488e-02,  3.6775e-02,\n",
      "        -3.3367e-02, -1.5829e-03, -3.2216e-03,  4.7659e-02,  5.7026e-02,\n",
      "         9.2923e-02, -4.7413e-02,  4.1319e-02, -3.6055e-02, -2.1909e-02,\n",
      "         8.5008e-03,  1.7627e-02,  5.4501e-01, -7.0114e-02, -2.9610e-03,\n",
      "         6.7217e-02, -5.4679e-02,  3.5746e-01,  4.7030e-02,  1.6688e-02,\n",
      "         2.4228e-02,  6.0980e-02,  7.8160e-02,  3.5369e-02,  4.8774e-02,\n",
      "         2.0300e-02,  5.1057e-02, -6.4572e-02, -1.3899e-03,  2.7564e-02,\n",
      "         5.5429e-02,  4.1266e-03,  1.9818e-02, -5.4610e-02, -6.6787e-02,\n",
      "        -3.7188e-02, -7.1578e-02,  9.5168e-02,  1.9103e-02, -1.4510e-02,\n",
      "         2.5661e-02,  6.6585e-02, -1.9870e-02,  4.4247e-02, -3.4706e-02,\n",
      "         3.4513e-02,  7.1793e-02,  2.0476e-02,  2.6594e-02, -3.2872e-02,\n",
      "        -6.1313e-02, -3.5511e-02,  1.4434e-02,  5.2620e-03, -6.9571e-04,\n",
      "         4.3106e-02,  8.0944e-02,  1.1230e-01, -1.1124e-02, -3.4352e-02,\n",
      "        -1.5012e-02, -3.6499e-02,  9.1881e-03, -6.5359e-02,  3.3049e-02,\n",
      "        -5.0201e-04, -8.5642e-02, -4.4620e-02,  1.0113e-01,  5.8650e-02,\n",
      "         3.7986e-02,  2.0919e-02, -1.1386e-02, -1.8483e-02, -3.3618e-02,\n",
      "        -5.7533e-03,  5.6117e-02,  6.1482e-02,  7.0443e-03,  9.9833e-02,\n",
      "        -4.2373e-03, -2.6846e-03, -5.8405e-02,  2.7864e-02, -1.1359e-02,\n",
      "         7.1416e-02, -7.9805e-02, -3.3905e-02,  1.7906e-02, -1.3418e-02,\n",
      "         4.5040e-01,  1.0102e-01,  3.5279e-02, -3.1934e-02,  2.9351e-02,\n",
      "         1.5968e-01,  2.9736e-02,  2.0919e-02,  2.0128e-02, -1.7733e-02,\n",
      "        -1.8200e-02, -3.1096e-02,  3.7626e-02,  8.6605e-02,  8.3986e-03,\n",
      "         5.0077e-02,  4.8837e-02,  5.0681e-03, -6.4222e-02, -7.9315e-02,\n",
      "        -4.9714e-02, -1.0483e-02, -1.6188e-03, -6.6630e-02, -1.0980e-02,\n",
      "         5.2782e-02,  4.4549e-02, -9.4187e-02,  3.1535e-02, -4.0366e-02,\n",
      "         2.0813e-02,  1.1267e-02,  6.6288e-02,  3.7262e-03,  2.3706e-02,\n",
      "         5.0103e-02, -1.5548e-02, -3.4186e-02,  4.4493e-03, -4.6676e-02,\n",
      "         7.6852e-02, -1.9102e-02, -1.2332e-02,  3.1462e-02, -3.0470e-02,\n",
      "         5.2544e-02, -9.1785e-02,  9.3373e-02, -8.0038e-02,  1.0034e-01,\n",
      "        -2.5186e-02,  5.2326e-02,  9.6404e-02,  2.2117e-02, -6.9771e-02,\n",
      "        -5.7390e-02,  6.4454e-02, -1.1386e-02,  8.0425e-02,  1.4689e-02,\n",
      "        -2.3187e-02,  7.0384e-02,  1.3252e-01, -9.1714e-03, -4.2402e-02,\n",
      "         2.5911e-02,  4.7596e-02, -1.3052e-02,  7.4536e-02, -7.0718e-02,\n",
      "         3.7421e-02,  5.8073e-02, -6.5355e-03,  1.9682e-02, -6.5519e-02,\n",
      "        -3.3283e-02,  2.9776e-02,  2.3239e-02,  5.6556e-03,  5.3448e-02,\n",
      "        -1.0866e-01, -3.6131e-03, -5.8317e-02, -3.7987e-02,  1.2850e-02,\n",
      "        -1.5075e-01,  4.3522e-02,  3.9137e-02,  1.9704e-02,  7.3400e-03,\n",
      "         2.9869e-02, -1.8250e-03,  1.1101e-01, -2.2339e-02,  6.3212e-02,\n",
      "        -1.7008e-02,  2.1105e-02, -1.5788e-02, -1.7854e-02,  4.5433e-02,\n",
      "        -4.0029e-02, -7.7721e-02, -1.0729e-03,  6.2888e-02,  1.3580e-02,\n",
      "        -2.0028e-02, -8.0912e-02,  2.1328e-02, -2.5247e-02, -9.4675e-02,\n",
      "        -4.6597e-02, -1.4152e-02,  1.6550e-02,  2.5444e-02, -8.2267e-02,\n",
      "        -2.6560e-02, -4.3458e-02, -6.0448e-02,  1.1405e-02,  2.6824e-02,\n",
      "         2.3077e-02, -4.4988e-02, -1.8990e-02, -7.1403e-02, -2.8097e-02,\n",
      "         2.4021e-02, -3.1108e-02, -1.2393e-01,  3.8979e-03, -5.4995e-03,\n",
      "         3.9098e-02, -4.1577e-02, -7.8664e-02,  6.9781e-02,  8.2226e-02,\n",
      "         4.8239e-02,  6.0568e-04, -2.3526e-02,  4.5924e-02, -7.0123e-02,\n",
      "         6.0051e-02,  8.4560e-02,  4.4369e-04,  4.2019e-02, -3.2234e-04,\n",
      "        -1.0486e-02, -1.0025e-01, -8.3303e-02,  2.3788e-02, -1.5065e-02,\n",
      "        -1.0473e-01, -7.0200e-03, -3.5866e-02,  4.9317e-02, -1.4014e-02,\n",
      "        -6.9076e-02, -1.8609e-02, -1.2404e-01,  1.2768e-01,  3.8208e-02,\n",
      "         3.2259e-02,  2.3419e-02, -4.7818e-02,  4.5656e-02,  4.2127e-02,\n",
      "         2.5294e-03,  2.7964e-02,  1.5888e-02,  3.4274e-03, -2.8943e-02,\n",
      "         2.2304e-02,  7.9073e-02, -2.1871e-02,  3.6980e-02,  3.6699e-01,\n",
      "        -2.9250e-01,  1.6832e-02,  7.5110e-02,  2.2816e-02,  9.6255e-02,\n",
      "         1.6879e-03,  6.9392e-02,  1.6309e-02,  9.3691e-02,  3.7209e-02,\n",
      "         6.1206e-03,  4.9207e-02, -1.6376e-02,  3.7074e-02,  5.3004e-02,\n",
      "         1.0222e-02, -4.3676e-02, -2.1717e-02,  3.4286e-02, -2.2806e-03,\n",
      "         6.1242e-03, -4.4925e-02, -8.0623e-03, -8.1504e-02, -3.3810e-02,\n",
      "         4.4732e-02,  2.7228e-02, -4.4500e-02, -1.4512e-02,  9.8316e-03,\n",
      "         4.3227e-02, -3.2028e-03,  2.4783e-02,  2.3086e-02,  9.1689e-02,\n",
      "        -8.9744e-02,  1.0022e-02,  6.7034e-02,  9.8649e-03,  2.5752e-02,\n",
      "         7.0025e-02, -1.5524e-03, -3.4370e-02,  1.4197e-02,  1.4028e-02,\n",
      "        -8.4673e-03,  2.5348e-02,  7.6463e-03,  1.2309e-02,  7.7348e-02,\n",
      "         2.4373e-02,  4.5762e-02, -2.9858e-02,  3.2774e-03,  9.0658e-02,\n",
      "        -4.8615e-03,  4.9909e-02, -1.8963e-02,  8.0017e-02,  2.3916e-02,\n",
      "        -2.9582e-02, -1.1094e-01, -6.0112e-03, -6.0491e-02,  8.4948e-02,\n",
      "         2.4981e-02, -1.9195e-02, -1.7468e-01, -4.3558e-02, -4.6779e-02,\n",
      "         2.5827e-02,  4.4311e-02, -6.4644e-02,  9.9934e-03,  2.0946e-03,\n",
      "         8.4737e-03,  2.5527e-03, -3.2567e-02,  1.7828e-02, -9.8985e-03,\n",
      "        -3.3636e-02,  3.7695e-02,  8.8589e-02, -3.0832e-02,  6.1294e-03,\n",
      "        -4.2250e-02,  2.9284e-03,  1.0003e-02, -7.7535e-03, -2.4764e-02,\n",
      "        -6.1572e-02,  6.8609e-02,  4.5101e-02,  1.7584e-02, -1.0066e-01,\n",
      "        -2.9275e-02,  3.7838e-02, -3.0647e-02, -1.0567e-02, -9.4772e-03,\n",
      "        -2.6025e-02,  4.7696e-02, -4.4807e-02, -3.1320e-02, -3.3101e-02,\n",
      "         1.4092e-02,  2.1363e-02,  8.3491e-04, -4.7929e-02, -5.4467e-02,\n",
      "        -5.9788e-02, -2.2459e-02,  3.9006e-02,  3.2352e-02, -4.3872e-02,\n",
      "         3.3413e-02,  5.8252e-02,  8.1111e-03, -8.9923e-02, -1.6857e-02,\n",
      "         2.4052e-02,  5.5834e-02, -1.6174e-02, -6.3026e-01,  5.2775e-02,\n",
      "         4.8040e-02,  2.7355e-02,  5.1614e-02, -3.1207e-02, -1.5319e-02,\n",
      "         3.9529e-02,  3.1385e-02,  2.3441e-02, -1.2754e-02, -1.0075e-02,\n",
      "        -2.6712e-03, -3.0797e-02,  4.1589e-02, -2.3411e-02, -6.9374e-03,\n",
      "         2.3082e-02, -4.1792e-02, -4.2182e-02, -4.2437e-02,  1.5196e-02,\n",
      "        -5.7394e-02, -6.6671e-02,  3.8666e-02, -1.2304e-02, -8.3797e-02,\n",
      "        -5.8101e-02,  8.5699e-02,  7.3262e-02, -9.2892e-03, -8.8168e-02,\n",
      "        -6.7795e-03, -1.0079e-02,  3.1664e-02,  7.9370e-02,  2.7738e-02,\n",
      "        -5.4177e-02, -3.7422e-02,  2.9180e-02,  2.8686e-02,  2.2224e-01,\n",
      "         2.4088e-02,  2.0573e-01, -3.1038e-02,  1.1482e-03, -5.0361e-02,\n",
      "        -2.4910e-02, -2.4411e-02, -5.4723e-03, -1.4557e-02, -2.0756e-02,\n",
      "         1.9018e-02, -8.7923e-02, -3.4181e-02, -4.8489e-03,  7.9410e-02,\n",
      "        -2.7434e-03, -2.0347e-02,  1.6311e-01, -1.4172e-02,  3.5505e-03,\n",
      "         6.0059e-02, -1.8648e-02,  2.2260e-02, -2.4001e-02, -7.6569e-03,\n",
      "        -6.8389e-02,  2.2062e-03, -1.4123e-02, -1.3670e-03, -5.6625e-02,\n",
      "        -2.2846e-02,  1.3449e-02, -1.5274e-02,  1.0922e-01, -5.4948e-03,\n",
      "        -9.3415e-04, -3.7926e-02,  7.8272e-02, -2.6307e-02, -2.3433e-02,\n",
      "         4.4549e-02,  6.8324e-02,  3.5637e-02, -4.1386e-02, -4.5156e-02,\n",
      "        -2.4941e-02,  6.1408e-02,  1.0439e-01, -1.3588e-02,  1.0593e-01,\n",
      "         2.9050e-02, -2.1241e-02, -3.9857e-02,  2.8842e-02,  8.5906e-02,\n",
      "        -4.0611e-02, -6.7569e-01, -7.0227e-02,  6.0480e-02,  8.9378e-03,\n",
      "         2.8956e-02,  6.9838e-03, -1.2461e-02, -3.5321e-02,  6.6557e-02,\n",
      "        -3.2958e-02,  3.9478e-03,  2.5438e-02,  8.0638e-02, -8.6397e-02,\n",
      "         9.1957e-03,  4.0168e-02, -1.2450e-02, -6.1412e-02, -8.3439e-03,\n",
      "        -3.0653e-01,  1.2988e-02, -4.2882e-02,  9.4440e-02,  2.7890e-02,\n",
      "        -2.5074e-03,  4.4360e-02, -3.4050e-02,  5.1954e-02,  5.7023e-02,\n",
      "         7.4641e-02,  8.5470e-02,  7.3586e-02, -1.3784e-02,  2.2848e-02,\n",
      "        -2.6185e-02,  1.0648e-01,  1.4603e-02,  1.0842e+01, -1.5370e-02,\n",
      "         6.4096e-02, -1.6329e-02, -9.0943e-03, -7.0644e-02,  5.6730e-02,\n",
      "        -5.6151e-02,  3.1499e-02,  9.5073e-02, -5.7515e-03,  2.5433e-02,\n",
      "        -9.6183e-02, -2.4203e-02,  1.5920e-02,  1.7271e-02, -5.1137e-02,\n",
      "        -6.1558e-02,  3.9310e-02, -1.5504e-02,  2.3049e-02,  5.7585e-02,\n",
      "         1.5716e-02,  4.9845e-02, -8.9039e-02,  5.2851e-02,  1.1391e-02,\n",
      "        -2.3454e-02, -7.8539e-03,  4.6832e-02,  1.7557e-02,  4.1707e-02,\n",
      "         3.6056e-02,  3.6847e-02,  6.6716e-02,  1.8650e-02,  1.4223e-02,\n",
      "         9.7786e-02,  1.6068e-02,  9.0849e-02,  1.0035e-01,  4.8263e-03,\n",
      "        -1.2258e-02, -5.7446e-02,  4.6765e-02,  8.3854e-02,  2.6790e-03,\n",
      "         9.4056e-02,  8.2706e-03,  5.8763e-02,  9.0541e-02, -6.7630e-02,\n",
      "         3.7472e-02,  4.7699e-02,  3.4265e-02, -2.5149e-02,  2.3159e-02,\n",
      "        -3.1227e-02,  4.0825e-02,  3.4853e-02, -3.1539e-02,  4.7430e-02,\n",
      "         4.8306e-03,  4.6455e-02, -4.6054e-02,  1.3280e-01,  7.7235e-02,\n",
      "         1.2280e-01, -6.7540e-02,  1.6085e-02,  5.5233e-03, -7.1939e-02,\n",
      "        -3.2011e-02, -1.2092e-02,  1.9422e-02, -8.5761e-02,  3.1738e-02,\n",
      "        -3.2847e-02,  2.1005e-02, -2.5020e-02,  1.9298e-02,  2.6759e-02,\n",
      "        -4.4871e-04, -5.5594e-02,  6.2556e-02,  2.1886e-02,  5.0609e-02,\n",
      "         7.0339e-02,  9.5824e-03, -1.6018e-02, -8.6312e-02,  1.9917e-02,\n",
      "         2.2333e-02, -4.2614e-02,  1.7528e-03,  5.6140e-03,  2.0983e-02,\n",
      "        -7.5657e-02,  8.1358e-02,  7.7440e-02, -9.1359e-02, -6.1279e-04,\n",
      "        -4.2146e-02,  8.7820e-02,  2.6942e-02,  4.3830e-02, -4.4191e-02,\n",
      "        -2.5182e-02,  1.9773e-02,  2.9127e-02, -4.5252e-02,  1.7047e-02,\n",
      "         1.5754e-03, -3.9945e-03,  2.1588e-02,  4.0657e-02, -2.9719e-02,\n",
      "         1.6870e-02, -1.6984e-02,  4.4208e-02, -1.0120e-01, -7.1111e-03,\n",
      "        -1.8277e-02, -8.7239e-03, -4.2647e-02, -3.8302e-02, -2.5255e-02,\n",
      "         4.4227e-03, -2.4236e-02,  8.0160e-03, -9.4932e-03,  3.8076e-02,\n",
      "         1.1061e-01, -5.0217e-02,  4.0411e-02,  1.1565e-03, -5.4541e-02,\n",
      "         9.4918e-02,  5.0056e-02,  6.4322e-02, -2.5414e-02, -2.8741e-02,\n",
      "        -2.1844e-02, -9.4323e-02,  1.0376e-02,  2.7253e-02,  7.1411e-02,\n",
      "         3.7926e-02,  6.9148e-02, -8.4568e-03,  1.3670e-02,  9.4349e-04,\n",
      "         3.7587e-02,  1.5972e-02, -8.9620e-03, -1.7401e-03, -1.8130e-02,\n",
      "         6.4611e-02,  1.1442e-02, -3.0850e-02,  5.4780e-02,  7.2337e-03,\n",
      "        -1.1842e-02,  1.8305e-02, -6.1534e-02, -4.2031e-02, -2.2641e-02,\n",
      "         4.1579e-02,  7.7674e-02, -8.1423e-03, -3.4971e-03,  6.7355e-03,\n",
      "        -8.0391e-02, -8.3055e-02, -1.2363e-02,  1.0016e-01,  9.1859e-02,\n",
      "        -1.0090e-01, -5.5050e-02, -7.3082e-03], grad_fn=<CatBackward>) torch.Size([768])\n"
     ]
    }
   ],
   "source": [
    "from flair.embeddings import TransformerDocumentEmbeddings\n",
    "\n",
    "# init embedding\n",
    "embedding = TransformerDocumentEmbeddings('roberta-base')\n",
    "\n",
    "# create a sentence\n",
    "sentence = Sentence('The grass is green .')\n",
    "\n",
    "# embed the sentence\n",
    "embedding.embed(sentence)\n",
    "print(sentence.embedding, sentence.embedding.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fff874ba",
   "metadata": {},
   "source": [
    "这里是所有型号的完整列表(BERT, RoBERTa, XLM, XLNet等)。您可以在这个类中使用这些模型中的任何一个。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bb57f777",
   "metadata": {},
   "source": [
    "## SentenceTransformerDocumentEmbeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5fe3e89",
   "metadata": {},
   "source": [
    "您还可以从sentence-transform库中获得几个嵌入。这些模型经过预先训练，可以为句子提供良好的通用向量表示。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "423e61db",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:32.106694Z",
     "start_time": "2021-09-12T05:41:30.359664Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([768])\n"
     ]
    }
   ],
   "source": [
    "from flair.data import Sentence\n",
    "from flair.embeddings import SentenceTransformerDocumentEmbeddings\n",
    "\n",
    "# init embedding\n",
    "embedding = SentenceTransformerDocumentEmbeddings('bert-base-nli-mean-tokens')\n",
    "\n",
    "# create a sentence\n",
    "sentence = Sentence('The grass is green .')\n",
    "\n",
    "# embed the sentence\n",
    "embedding.embed(sentence)\n",
    "print(sentence.embedding.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f3c2e080",
   "metadata": {},
   "source": [
    "你可以在这里找到一个完整的sentence-transformer的预训练模型的列表."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab5d5610",
   "metadata": {},
   "source": [
    "注意:要使用这个嵌入，你需要安装sentence-tansform。使用pip install sentence-transformers进行安装。"
   ]
  },
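  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0a1b2c3d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# install the sentence-transformers package (only needed once per environment);\n",
    "# this assumes pip is available in the notebook environment\n",
    "!pip install sentence-transformers"
   ]
  },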
  {
   "cell_type": "markdown",
   "id": "4faf8560",
   "metadata": {},
   "source": [
    "# Tutorial 6: Loading Training Data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e180ca7e",
   "metadata": {},
   "source": [
    "本教程的这一部分展示了如何加载用于训练模型的语料库。我们假设您熟悉这个库的基本类型。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ce8aaaa0",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-11T15:45:09.885732Z",
     "start_time": "2021-09-11T15:45:09.868703Z"
    }
   },
   "source": [
    "## The Corpus Objectm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e6b54cad",
   "metadata": {},
   "source": [
    "语料库表示用于训练模型的数据集。它由训练句列表、开发句列表和测试句列表组成，分别对应模型训练时的训练、验证和测试拆分。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "499c43df",
   "metadata": {},
   "source": [
    "下面的示例片段将英语的通用依赖树库实例化为语料库对象:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "e6946b28",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:42.028573Z",
     "start_time": "2021-09-12T05:41:32.108289Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:41:32,169 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\ud_english\n",
      "2021-09-12 13:41:32,170 Train: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-train.conllu\n",
      "2021-09-12 13:41:32,171 Dev: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-dev.conllu\n",
      "2021-09-12 13:41:32,172 Test: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-test.conllu\n"
     ]
    }
   ],
   "source": [
    "import flair.datasets\n",
    "corpus = flair.datasets.UD_ENGLISH()  # 英语通用预料库"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80bb5c42",
   "metadata": {},
   "source": [
    "第一次调用这个代码片段时，它会触发将英语通用依赖树库下载到硬盘上。然后它读取训练，测试和开发分割成它返回的语料库。检查这三个句子的长度，看看有多少个句子:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "59087713",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:42.044270Z",
     "start_time": "2021-09-12T05:41:42.030109Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "12543\n",
      "2077\n",
      "2001\n"
     ]
    }
   ],
   "source": [
    "# print the number of Sentences in the train split\n",
    "print(len(corpus.train))\n",
    "\n",
    "# print the number of Sentences in the test split\n",
    "print(len(corpus.test))\n",
    "\n",
    "# print the number of Sentences in the dev split\n",
    "print(len(corpus.dev))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "64cf52d6",
   "metadata": {},
   "source": [
    "您还可以直接访问每个拆分中的句子对象。例如，让我们看一下UD训练中的第一个句子:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "da13b744",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:42.059122Z",
     "start_time": "2021-09-12T05:41:42.046302Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sentence: \"What if Google Morphed Into GoogleOS ?\"   [− Tokens: 7  − Token-Labels: \"What <what/PRON/WP/root/Int> if <if/SCONJ/IN/mark> Google <Google/PROPN/NNP/nsubj/Sing> Morphed <morph/VERB/VBD/advcl/Ind/Past/Fin> Into <into/ADP/IN/case> GoogleOS <GoogleOS/PROPN/NNP/obl/Sing> ? <?/PUNCT/./punct>\"]\n"
     ]
    }
   ],
   "source": [
    "# print the first Sentence in the training split\n",
    "print(corpus.test[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "501019ea",
   "metadata": {},
   "source": [
    "这个句子完全被标记了句法和形态信息。例如，打印带有PoS标签的句子:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "b743a0e7",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:42.074368Z",
     "start_time": "2021-09-12T05:41:42.063692Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "What <WP> if <IN> Google <NNP> Morphed <VBD> Into <IN> GoogleOS <NNP> ? <.>\n"
     ]
    }
   ],
   "source": [
    "# print the first Sentence in the training split\n",
    "print(corpus.test[0].to_tagged_string('pos'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ad1c6bf",
   "metadata": {},
   "source": [
    "因此，语料库被标记好，可以进行训练。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ee86a86c",
   "metadata": {},
   "source": [
    "## Helper functions（助手函数）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "75e87a4a",
   "metadata": {},
   "source": [
    "语料库包含许多有用的助手函数。例如，您可以通过调用downsample()并传递一个比率来对数据进行下采样。所以，如果你通常得到这样的语料库:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "b0c2f313",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:41:52.274641Z",
     "start_time": "2021-09-12T05:41:42.077861Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:41:42,081 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\ud_english\n",
      "2021-09-12 13:41:42,082 Train: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-train.conllu\n",
      "2021-09-12 13:41:42,083 Dev: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-dev.conllu\n",
      "2021-09-12 13:41:42,083 Test: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-test.conllu\n"
     ]
    }
   ],
   "source": [
    "import flair.datasets\n",
    "corpus = flair.datasets.UD_ENGLISH()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "51bb2f1a",
   "metadata": {},
   "source": [
    "然后你可以向下取样语料，简单地像这样:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "10444663",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:42:09.674580Z",
     "start_time": "2021-09-12T05:41:52.276905Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:41:52,279 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\ud_english\n",
      "2021-09-12 13:41:52,280 Train: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-train.conllu\n",
      "2021-09-12 13:41:52,282 Dev: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-dev.conllu\n",
      "2021-09-12 13:41:52,283 Test: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-test.conllu\n"
     ]
    }
   ],
   "source": [
    "import flair.datasets\n",
    "downsampled_corpus = flair.datasets.UD_ENGLISH().downsample(0.1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c830bb95",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-11T16:07:06.012294Z",
     "start_time": "2021-09-11T16:07:05.999254Z"
    }
   },
   "source": [
    "如果你把两个语料库都打印出来，你会看到第二个语料库被降采样到10%的数据。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "dff1eb3b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:42:09.690658Z",
     "start_time": "2021-09-12T05:42:09.676948Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--- 1 Original ---\n",
      "Corpus: 12543 train + 2001 dev + 2077 test sentences\n",
      "--- 2 Downsampled ---\n",
      "Corpus: 1254 train + 200 dev + 208 test sentences\n"
     ]
    }
   ],
   "source": [
    "print(\"--- 1 Original ---\")\n",
    "print(corpus)\n",
    "\n",
    "print(\"--- 2 Downsampled ---\")\n",
    "print(downsampled_corpus)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "28e29c79",
   "metadata": {},
   "source": [
    "## Creating label dictionaries （创建标签字典）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "670cc192",
   "metadata": {},
   "source": [
    "对于许多学习任务，你需要创建一个包含你想预测的所有标签的“字典”。您可以通过调用方法make_label_dictionary并传递所需的label_type，直接从语料库生成这个字典。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9c7927e",
   "metadata": {},
   "source": [
    "例如，上面实例化的UD_ENGLISH语料库具有多层注释，如常规POS标签(' POS ')、通用POS标签('upos')、形态标签('时态'、'数字'..)等等。通过像这样传递label_type='upos'来为通用POS标签创建标签字典:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "0094b054",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:42:10.782340Z",
     "start_time": "2021-09-12T05:42:09.692230Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:09,693 Computing label dictionary. Progress:\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|█████████████████████████████████████████████████████████████████████████| 12543/12543 [00:01<00:00, 11675.68it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:10,774 Corpus contains the labels: upos (#204585), lemma (#204584), pos (#204584), dependency (#204584), number (#68023), verbform (#35412), prontype (#33584), person (#21187), tense (#20238), mood (#16547), degree (#13649), definite (#13300), case (#12091), numtype (#4266), gender (#4038), poss (#3039), voice (#1205), typo (#332), abbr (#126), reflex (#100), style (#33), foreign (#18)\n",
      "2021-09-12 13:42:10,774 Created (for label 'upos') Dictionary with 17 tags: PROPN, PUNCT, ADJ, NOUN, VERB, DET, ADP, AUX, PRON, PART, SCONJ, NUM, ADV, CCONJ, X, INTJ, SYM\n",
      "Dictionary with 17 tags: PROPN, PUNCT, ADJ, NOUN, VERB, DET, ADP, AUX, PRON, PART, SCONJ, NUM, ADV, CCONJ, X, INTJ, SYM\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "# create label dictionary for a Universal Part-of-Speech tagging task （通用词性标注任务）\n",
    "upos_dictionary = corpus.make_label_dictionary(label_type='upos')\n",
    "\n",
    "# print dictionary\n",
    "print(upos_dictionary)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fa27f2bf",
   "metadata": {},
   "source": [
    "### Dictionaries for other label types（其他标签类型的字典）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4bfe9811",
   "metadata": {},
   "source": [
    "当在上面的例子中调用make_label_dictionary时，同一语料库中所有标签类型的统计信息会被打印出来:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "526be824",
   "metadata": {},
   "source": [
    "Corpus contains the labels: upos (#204585), lemma (#204584), pos (#204584), dependency (#204584), number (#68023), verbform (#35412), prontype (#33584), person (#21187), tense (#20238), mood (#16547), degree (#13649), definite (#13300), case (#12091), numtype (#4266), gender (#4038), poss (#3039), voice (#1205), typo (#332), abbr (#126), reflex (#100), style (#33), foreign (#18)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26c5edbc",
   "metadata": {},
   "source": [
    "这意味着您可以为UD_ENGLISH语料库中的任何这些标签类型创建字典。让我们为常规词性标签和形态数字标签任务创建词典:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "5f4ee431",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:42:12.774820Z",
     "start_time": "2021-09-12T05:42:10.785514Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:10,787 Computing label dictionary. Progress:\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|█████████████████████████████████████████████████████████████████████████| 12543/12543 [00:01<00:00, 10645.73it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:11,970 Corpus contains the labels: upos (#204585), pos (#204585), lemma (#204584), dependency (#204584), number (#68023), verbform (#35412), prontype (#33584), person (#21187), tense (#20238), mood (#16547), degree (#13649), definite (#13300), case (#12091), numtype (#4266), gender (#4038), poss (#3039), voice (#1205), typo (#332), abbr (#126), reflex (#100), style (#33), foreign (#18)\n",
      "2021-09-12 13:42:11,971 Created (for label 'pos') Dictionary with 50 tags: NNP, HYPH, :, JJ, NNS, VBD, ,, DT, NN, IN, ., -LRB-, MD, VB, VBG, PRP, TO, -RRB-, VBN, RP, CD, VBZ, RB, NNPS, VBP, PRP$, CC, WP, EX, WDT\n",
      "2021-09-12 13:42:11,974 Computing label dictionary. Progress:\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "100%|█████████████████████████████████████████████████████████████████████████| 12543/12543 [00:00<00:00, 15942.66it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:12,763 Corpus contains the labels: upos (#204585), pos (#204585), lemma (#204584), dependency (#204584), number (#68024), verbform (#35412), prontype (#33584), person (#21187), tense (#20238), mood (#16547), degree (#13649), definite (#13300), case (#12091), numtype (#4266), gender (#4038), poss (#3039), voice (#1205), typo (#332), abbr (#126), reflex (#100), style (#33), foreign (#18)\n",
      "2021-09-12 13:42:12,763 Created (for label 'number') Dictionary with 2 tags: Sing, Plur\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "# create label dictionary for a regular POS tagging task  （常规词性标注任务）\n",
    "pos_dictionary = corpus.make_label_dictionary(label_type='pos')\n",
    "\n",
    "# create label dictionary for a morphological number tagging task （形态数字标签任务）\n",
    "tense_dictionary = corpus.make_label_dictionary(label_type='number')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "746f3a8c",
   "metadata": {},
   "source": [
    "如果你打印这些字典，你会发现对于这个语料库，POS字典包含50个标签，而数字字典只有2个(单数和复数)。"
   ]
  },
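  {
   "cell_type": "markdown",
   "id": "1f2e3d4c",
   "metadata": {},
   "source": [
    "For example, you can verify this by printing both dictionaries created above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9e8d7c6b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# print the two dictionaries created above to compare their sizes\n",
    "print(pos_dictionary)\n",
    "print(tense_dictionary)"
   ]
  },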
  {
   "cell_type": "markdown",
   "id": "5bdb3391",
   "metadata": {},
   "source": [
    "### Dictionaries for other corpora types（其他语料库类型的字典）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d8d0721",
   "metadata": {},
   "source": [
    "make_label_dictionary方法可以用于任何语料库，包括文本分类语料库:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "9c3aefd4",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:42:13.369946Z",
     "start_time": "2021-09-12T05:42:12.776313Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:12,778 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\trec_6\n",
      "2021-09-12 13:42:12,779 Train: C:\\Users\\sunbe\\.flair\\datasets\\trec_6\\train.txt\n",
      "2021-09-12 13:42:12,780 Dev: None\n",
      "2021-09-12 13:42:12,780 Test: C:\\Users\\sunbe\\.flair\\datasets\\trec_6\\test.txt\n",
      "2021-09-12 13:42:13,356 Initialized corpus C:\\Users\\sunbe\\.flair\\datasets\\trec_6 (label type name is 'question_class')\n"
     ]
    }
   ],
   "source": [
    "# create label dictionary for a text classification task\n",
    "corpus = flair.datasets.TREC_6()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "50257498",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:42:13.400773Z",
     "start_time": "2021-09-12T05:42:13.379901Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(Sentence: \"What films featured the character Popeye Doyle ?\"   [− Tokens: 8  − Sentence-Labels: {'question_class': [ENTY (1.0)]}],\n",
       " 500,\n",
       " 4907)"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "corpus.train[0] , len(corpus.test),len(corpus.train) # "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "1317c306",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:42:13.819957Z",
     "start_time": "2021-09-12T05:42:13.407043Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:13,409 Computing label dictionary. Progress:\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|███████████████████████████████████████████████████████████████████████████| 4907/4907 [00:00<00:00, 12368.41it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:13,810 Corpus contains the labels: question_class (#4907)\n",
      "2021-09-12 13:42:13,811 Created (for label 'question_class') Dictionary with 6 tags: ENTY, DESC, ABBR, HUM, LOC, NUM\n",
      "Dictionary with 6 tags: ENTY, DESC, ABBR, HUM, LOC, NUM\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "print(corpus.make_label_dictionary('question_class'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "00aa6a6e",
   "metadata": {},
   "source": [
    "## The MultiCorpus Object(多个语料库对象)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8ddc42a",
   "metadata": {},
   "source": [
    "如果您想一次训练多个任务，可以使用MultiCorpus对象。要初始化MultiCorpus，首先需要创建任意数量的Corpus对象。然后，您可以将一个Corpus列表传递给MultiCorpus对象。例如，下面的代码片段加载了一个由英语、德语和荷兰语通用依赖树库组成的组合语料库。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "4f31d0ae",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:42:28.206920Z",
     "start_time": "2021-09-12T05:42:13.822613Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:13,828 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\ud_english\n",
      "2021-09-12 13:42:13,829 Train: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-train.conllu\n",
      "2021-09-12 13:42:13,830 Dev: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-dev.conllu\n",
      "2021-09-12 13:42:13,831 Test: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-test.conllu\n"
     ]
    }
   ],
   "source": [
    "english_corpus = flair.datasets.UD_ENGLISH()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "ef614064",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T05:42:47.495939Z",
     "start_time": "2021-09-12T05:42:28.209699Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:28,214 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\ud_german\n",
      "2021-09-12 13:42:28,216 Train: C:\\Users\\sunbe\\.flair\\datasets\\ud_german\\de_gsd-ud-train.conllu\n",
      "2021-09-12 13:42:28,217 Dev: C:\\Users\\sunbe\\.flair\\datasets\\ud_german\\de_gsd-ud-dev.conllu\n",
      "2021-09-12 13:42:28,219 Test: C:\\Users\\sunbe\\.flair\\datasets\\ud_german\\de_gsd-ud-test.conllu\n"
     ]
    }
   ],
   "source": [
    "german_corpus = flair.datasets.UD_GERMAN()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1f1bdc4b",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:40:38.688Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 13:42:49,059 https://raw.githubusercontent.com/UniversalDependencies/UD_Dutch-Alpino/master/nl_alpino-ud-train.conllu not found in cache, downloading to C:\\Users\\sunbe\\AppData\\Local\\Temp\\tmpstr_s36k\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "13287337B [00:18, 7650019.33B/s]                                                                                       "
     ]
    }
   ],
   "source": [
    "dutch_corpus = flair.datasets.UD_DUTCH()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8d4efaf8",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:40:38.691Z"
    }
   },
   "outputs": [],
   "source": [
    "# make a multi corpus consisting of three UDs\n",
    "from flair.data import MultiCorpus\n",
    "multi_corpus = MultiCorpus([english_corpus, german_corpus, dutch_corpus])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ba4db83",
   "metadata": {},
   "source": [
    "MultiCorpus继承自Corpus，所以您可以像使用任何其他语料库一样使用它来训练您的模型"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6f48066",
   "metadata": {},
   "source": [
    "## Datasets included in Flair （数据集包括在Flair）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11749ad6",
   "metadata": {},
   "source": [
    "Flair支持许多开箱即用的数据集。当您第一次调用相应的构造函数ID时，它会自动下载并设置数据。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f98aef63",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T04:38:11.505790Z",
     "start_time": "2021-09-12T04:37:51.057Z"
    }
   },
   "source": [
    "支持以下数据集（单击类别以展开）："
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4db123d2",
   "metadata": {},
   "source": [
    "（1）命名实体识别(NER)数据集。（2）生物医学实体识别(BioNER)数据集。（3）实体链接(NEL)数据集。（4）关系抽取(RE)数据集。（5）胶水基准数据集。（6）通用命题库(UP)数据集。（7）通用依赖树库(UD)数据集。（8）文本分类数据集。（9）回归文本数据集。（10）其他序列标记数据集。（11）相似的学习数据集。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b5fd074",
   "metadata": {},
   "source": [
    "因此，要加载用于情感文本分类的IMDB语料库，只需:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d84a39a",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T06:09:09.302Z"
    }
   },
   "outputs": [],
   "source": [
    "import flair.datasets\n",
    "corpus = flair.datasets.IMDB()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "67664c37",
   "metadata": {},
   "source": [
    "这将下载并设置训练模型所需的所有内容。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d924fa95",
   "metadata": {},
   "source": [
    "## Reading Your Own Sequence Labeling Dataset （阅读您自己的序列标记数据集）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dcb5ff45",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T04:56:00.763365Z",
     "start_time": "2021-09-12T04:56:00.750413Z"
    }
   },
   "source": [
    "如果您想对不在上述列表中的序列标记数据集进行训练，可以使用ColumnCorpus对象加载它们。大多数NLP中的序列标记数据集使用某种列格式，其中每一行是一个单词，每一列是一个层次的语言注释。例如这个句子:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1ecfcd14",
   "metadata": {},
   "source": [
    "George N B-PER\n",
    "Washington N I-PER\n",
    "went V O\n",
    "to P O\n",
    "Washington N B-LOC\n",
    "\n",
    "Sam N B-PER\n",
    "Houston N I-PER\n",
    "stayed V O\n",
    "home N O"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e2e6e2ec",
   "metadata": {},
   "source": [
    "第一列是单词本身，第二列是粗糙的PoS标记，第三列是bio注释的NER标记。空行分隔句子。要读取这样的数据集，请将列结构定义为一个字典并实例化一个ColumnCorpus。"
   ]
  },
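  {
   "cell_type": "markdown",
   "id": "aa11bb22",
   "metadata": {},
   "source": [
    "To make the column format concrete, here is a minimal, Flair-independent sketch of how such data maps to lists of (token, pos, ner) tuples. This is an illustration only, not Flair's actual reader:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bb22cc33",
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative parser for the column format above (not Flair's implementation)\n",
    "raw = \"\"\"George N B-PER\n",
    "Washington N I-PER\n",
    "went V O\n",
    "to P O\n",
    "Washington N B-LOC\n",
    "\n",
    "Sam N B-PER\n",
    "Houston N I-PER\n",
    "stayed V O\n",
    "home N O\"\"\"\n",
    "\n",
    "sentences, current = [], []\n",
    "for line in raw.splitlines():\n",
    "    if not line.strip():            # an empty line separates sentences\n",
    "        if current:\n",
    "            sentences.append(current)\n",
    "            current = []\n",
    "        continue\n",
    "    token, pos, ner = line.split()  # columns: text, pos, ner\n",
    "    current.append((token, pos, ner))\n",
    "if current:\n",
    "    sentences.append(current)\n",
    "\n",
    "print(len(sentences))    # 2\n",
    "print(sentences[0][0])   # ('George', 'N', 'B-PER')"
   ]
  },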
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e06ef658",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:40:38.922Z"
    }
   },
   "outputs": [],
   "source": [
    "from flair.data import Corpus\n",
    "from flair.datasets import ColumnCorpus\n",
    "\n",
    "# define columns  # 定义列格式\n",
    "columns = {0: 'text', 1: 'pos', 2: 'ner'}\n",
    "\n",
    "# this is the folder in which train, test and dev files reside # 这是存放train，test，dev文件的文件夹\n",
    "data_folder = '/path/to/data/folder'\n",
    "\n",
    "# init a corpus using column format, data folder and the names of the train, dev and test files \n",
    "# 使用列格式，数据文件夹和train，dev，test文件的名称初始化语料库\n",
    "corpus: Corpus = ColumnCorpus(data_folder, columns,\n",
    "                              train_file='train.txt',\n",
    "                              test_file='test.txt',\n",
    "                              dev_file='dev.txt')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fc3d033a",
   "metadata": {},
   "source": [
    "这给了你一个Corpus对象，它包含train、test和dev分割，每个都有一个句子列表。所以，为了检查训练中有多少句子："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "25b09037",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:40:38.968Z"
    }
   },
   "outputs": [],
   "source": [
    "len(corpus.train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "44dbded8",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:40:38.971Z"
    }
   },
   "outputs": [],
   "source": [
    "corpus.trian[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "03c27a56",
   "metadata": {},
   "source": [
    "您还可以访问一个句子并查看注释。让我们假设训练分割是从上面的示例中读取的，然后执行这些命令:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "be2c2871",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:40:39.010Z"
    }
   },
   "outputs": [],
   "source": [
    "print(corpus.train[0].to_tagged_string('ner'))\n",
    "print(corpus.train[1].to_tagged_string('pos'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "da77dd32",
   "metadata": {},
   "source": [
    "## Reading a Text Classification Dataset(阅读文本分类数据集)`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a55f6a6f",
   "metadata": {},
   "source": [
    "如果你想使用自己的文本分类数据集，目前有两种方法可以做到这一点:从一个简单的CSV文件加载指定的文本和标签，或将数据格式化为FastText格式。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6737ac8d",
   "metadata": {},
   "source": [
    "### Load from simple CSV file(从简单的CSV文件加载)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6deef216",
   "metadata": {},
   "source": [
    "许多文本分类数据集分布为简单的CSV文件，其中每一行对应一个数据点，列对应文本、标签和其他元数据。您可以使用CSVClassificationCorpus通过传入列格式(如上面的ColumnCorpus)来加载CSV格式分类数据集。这个列格式指示CSV中的哪个列保存文本，哪个字段保存标签。默认情况下，Python的CSV库假设您的文件是Excel CSV格式，但如果您使用自定义分隔符或引号字符，则可以指定其他参数。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be71dddd",
   "metadata": {},
   "source": [
    "注意:您需要将分割的CSV数据文件保存在data_folder路径下，每个文件的标题都适当，即:train.csv test.csv dev.csv.这是因为语料库初始化器会自动搜索文件夹中的train、dev、test拆分。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "45df9f0e",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:40:39.110Z"
    }
   },
   "outputs": [],
   "source": [
    "from flair.data import Corpus\n",
    "from flair.datasets import CSVClassificationCorpus\n",
    "\n",
    "# this is the folder in which train, test and dev files reside # 这是存放train，test，dev文件的文件夹\n",
    "data_folder = '/path/to/data'\n",
    "\n",
    "# column format indicating which columns hold the text and label(s) # 列格式，指示哪些列保存文本和标签\n",
    "column_name_map = {4: \"text\", 1: \"label_topic\", 2: \"label_subtopic\"}\n",
    " \n",
    "# load corpus containing training, test and dev data and if CSV has a header, you can skip it # 加载包含train、test和dev数据的语料库，如果CSV有标题，你可以跳过它\n",
    "corpus: Corpus = CSVClassificationCorpus(data_folder,\n",
    "                                         column_name_map,\n",
    "                                         skip_header=True,\n",
    "                                         delimiter='\\t',    # tab-separated files # 列之间的分隔符\n",
    ") "
   ]
  },
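  {
   "cell_type": "markdown",
   "id": "cc33dd44",
   "metadata": {},
   "source": [
    "As a sketch of what the column_name_map does, the snippet below reads a hypothetical tab-separated file with Python's standard csv module and applies the same mapping by hand. The file contents and column layout are made up for illustration; Flair performs this mapping internally:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dd44ee55",
   "metadata": {},
   "outputs": [],
   "source": [
    "import csv\n",
    "import io\n",
    "\n",
    "# hypothetical file matching column_name_map = {4: 'text', 1: 'label_topic', 2: 'label_subtopic'}\n",
    "tsv = \"id\\ttopic\\tsubtopic\\tdate\\ttext\\n1\\tsports\\tsoccer\\t2021\\tGreat match last night\\n\"\n",
    "column_name_map = {4: \"text\", 1: \"label_topic\", 2: \"label_subtopic\"}\n",
    "\n",
    "reader = csv.reader(io.StringIO(tsv), delimiter=\"\\t\")\n",
    "next(reader)  # corresponds to skip_header=True\n",
    "rows = [{name: row[idx] for idx, name in column_name_map.items()} for row in reader]\n",
    "\n",
    "print(rows[0])  # {'text': 'Great match last night', 'label_topic': 'sports', 'label_subtopic': 'soccer'}"
   ]
  },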
  {
   "cell_type": "markdown",
   "id": "11e5dde8",
   "metadata": {},
   "source": [
    "### FastText Format（FastText格式）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "215bbcff",
   "metadata": {},
   "source": [
    "如果使用CSVClassificationCorpus不实用，你可以将数据格式化为FastText格式，即文件中的每一行代表一个文本文档。文档可以有一个或多个标签，这些标签定义在以__label__开头的行首。它看起来像这样:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "82de552b",
   "metadata": {},
   "source": [
    "__label__<label_1> <text>\n",
    "__label__<label_1> __label__<label_2> <text>"
   ]
  },
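  {
   "cell_type": "markdown",
   "id": "ee55ff66",
   "metadata": {},
   "source": [
    "To illustrate the format, here is a minimal sketch of how such a line can be split into its labels and text with plain Python. The label values below are made up, and Flair does this parsing for you:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff66aa77",
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative parser for FastText-format lines (not Flair's implementation)\n",
    "def parse_fasttext_line(line):\n",
    "    labels, tokens = [], line.strip().split()\n",
    "    # labels are the leading tokens prefixed with __label__\n",
    "    while tokens and tokens[0].startswith(\"__label__\"):\n",
    "        labels.append(tokens.pop(0)[len(\"__label__\"):])\n",
    "    return labels, \" \".join(tokens)\n",
    "\n",
    "print(parse_fasttext_line(\"__label__POSITIVE I love this movie\"))\n",
    "print(parse_fasttext_line(\"__label__DRAMA __label__ROMANCE A quiet film about two strangers\"))"
   ]
  },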
  {
   "cell_type": "markdown",
   "id": "f9a860bc",
   "metadata": {},
   "source": [
    "如前所述，要为文本分类任务创建语料库，您需要将上述格式的三个文件(train、dev和test)放在一个文件夹中。例如，对于IMDB任务，这个数据文件夹结构看起来像这样:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ec727a23",
   "metadata": {},
   "source": [
    "/resources/tasks/imdb/train.txt\n",
    "/resources/tasks/imdb/dev.txt\n",
    "/resources/tasks/imdb/test.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8b4113d8",
   "metadata": {},
   "source": [
    "现在，通过指向这个文件夹(/resources/tasks/imdb)来创建一个ClassificationCorpus。因此，文件中的每一行都被转换为带有标签的句子对象。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8a34d732",
   "metadata": {},
   "source": [
    "注意:一行文字可能包含多个句子。因此，一个Sentence对象实际上可以由多个句子组成。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c80731ec",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:40:39.281Z"
    }
   },
   "outputs": [],
   "source": [
    "from flair.data import Corpus\n",
    "from flair.datasets import ClassificationCorpus\n",
    "\n",
    "# this is the folder in which train, test and dev files reside  # 这是存放train、test和dev文件的文件夹\n",
    "data_folder = '/path/to/data/folder'\n",
    "\n",
    "# load corpus containing training, test and dev data # 加载包含训练、测试和dev数据的语料库\n",
    "corpus: Corpus = ClassificationCorpus(data_folder,\n",
    "                                      test_file='test.txt',\n",
    "                                      dev_file='dev.txt',\n",
    "                                      train_file='train.txt',                                       \n",
    "                                      label_type='topic',\n",
    "                                      )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f7e8ca00",
   "metadata": {},
   "source": [
    "再次注意，我们的语料库初始化器有一些方法可以自动查找文件夹中的train、dev和test拆分。所以在大多数情况下，您不需要自己指定文件名。通常，这就足够了:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cfd91083",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:40:39.311Z"
    }
   },
   "outputs": [],
   "source": [
    "# this is the folder in which train, test and dev files reside\n",
    "data_folder = '/path/to/data/folder'\n",
    "\n",
    "# load corpus by pointing to folder. Train, dev and test gets identified automatically. \n",
    "corpus: Corpus = ClassificationCorpus(data_folder,                                                                            \n",
    "                                      label_type='topic',\n",
    "                                      )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f903608d",
   "metadata": {},
   "source": [
    "由于FastText格式没有列，您必须手动定义注释的名称。在这个例子中，我们选择了label_type='topic'来表示我们正在加载一个带有主题标签的语料库。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b1f33911",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T05:46:14.050Z"
    }
   },
   "source": [
    "# Tutorial 7: Training a Model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9a79069",
   "metadata": {},
   "source": [
    "本教程的这一部分展示了如何使用最先进的单词嵌入来训练自己的序列标签和文本分类模型。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c19c9621",
   "metadata": {},
   "source": [
    "对于本教程，我们假设您熟悉这个库的基本类型以及单词嵌入是如何工作的(理想情况下，您还知道Flair嵌入是如何工作的)。你还应该知道如何加载语料库。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ae3ada2",
   "metadata": {},
   "source": [
    "## Training a Part-of-Speech Tagging Model（训练词性标注模型）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "81f45872",
   "metadata": {},
   "source": [
    "下面是使用简单的GloVe嵌入，在UD_ENGLISH(英语通用依赖树库)数据上训练的小型词性标记器模型的示例代码。在这个例子中，我们将数据采样到原始数据的10%，以使它运行得更快，但通常情况下，你应该在完整的数据集上训练:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "f0297b72",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T06:22:39.001455Z",
     "start_time": "2021-09-12T06:18:13.093043Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 14:18:17,895 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\ud_english\n",
      "2021-09-12 14:18:17,896 Train: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-train.conllu\n",
      "2021-09-12 14:18:17,897 Dev: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-dev.conllu\n",
      "2021-09-12 14:18:17,897 Test: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-test.conllu\n",
      "Corpus: 1254 train + 200 dev + 208 test sentences\n",
      "2021-09-12 14:18:27,012 Computing label dictionary. Progress:\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|███████████████████████████████████████████████████████████████████████████| 1254/1254 [00:00<00:00, 14364.29it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 14:18:27,103 Corpus contains the labels: upos (#20110), lemma (#20109), pos (#20109), dependency (#20109), number (#6781), verbform (#3450), prontype (#3374), person (#2051), tense (#1997), mood (#1614), definite (#1366), degree (#1268), case (#1168), numtype (#426), gender (#361), poss (#293), voice (#111), typo (#27), reflex (#15), abbr (#9), style (#3)\n",
      "2021-09-12 14:18:27,103 Created (for label 'upos') Dictionary with 17 tags: DET, PROPN, ADP, AUX, ADJ, PUNCT, ADV, PRON, VERB, SCONJ, NOUN, PART, CCONJ, X, NUM, INTJ, SYM\n",
      "Dictionary with 17 tags: DET, PROPN, ADP, AUX, ADJ, PUNCT, ADV, PRON, VERB, SCONJ, NOUN, PART, CCONJ, X, NUM, INTJ, SYM\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 14:18:29,515 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:18:29,516 Model: \"SequenceTagger(\n",
      "  (embeddings): StackedEmbeddings(\n",
      "    (list_embedding_0): WordEmbeddings('glove')\n",
      "  )\n",
      "  (word_dropout): WordDropout(p=0.05)\n",
      "  (locked_dropout): LockedDropout(p=0.5)\n",
      "  (embedding2nn): Linear(in_features=100, out_features=100, bias=True)\n",
      "  (rnn): LSTM(100, 256, batch_first=True, bidirectional=True)\n",
      "  (linear): Linear(in_features=512, out_features=19, bias=True)\n",
      "  (beta): 1.0\n",
      "  (weights): None\n",
      "  (weight_tensor) None\n",
      ")\"\n",
      "2021-09-12 14:18:29,516 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:18:29,517 Corpus: \"Corpus: 1254 train + 200 dev + 208 test sentences\"\n",
      "2021-09-12 14:18:29,518 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:18:29,519 Parameters:\n",
      "2021-09-12 14:18:29,519  - learning_rate: \"0.1\"\n",
      "2021-09-12 14:18:29,520  - mini_batch_size: \"32\"\n",
      "2021-09-12 14:18:29,522  - patience: \"3\"\n",
      "2021-09-12 14:18:29,522  - anneal_factor: \"0.5\"\n",
      "2021-09-12 14:18:29,523  - max_epochs: \"10\"\n",
      "2021-09-12 14:18:29,523  - shuffle: \"True\"\n",
      "2021-09-12 14:18:29,524  - train_with_dev: \"False\"\n",
      "2021-09-12 14:18:29,525  - batch_growth_annealing: \"False\"\n",
      "2021-09-12 14:18:29,526 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:18:29,526 Model training base path: \"resources\\taggers\\example-upos\"\n",
      "2021-09-12 14:18:29,527 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:18:29,528 Device: cpu\n",
      "2021-09-12 14:18:29,529 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:18:29,530 Embeddings storage mode: cpu\n",
      "2021-09-12 14:18:29,532 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:18:32,479 epoch 1 - iter 4/40 - loss 3.04724295 - samples/sec: 43.45 - lr: 0.100000\n",
      "2021-09-12 14:18:34,721 epoch 1 - iter 8/40 - loss 2.91878427 - samples/sec: 57.20 - lr: 0.100000\n",
      "2021-09-12 14:18:36,257 epoch 1 - iter 12/40 - loss 2.85125319 - samples/sec: 83.40 - lr: 0.100000\n",
      "2021-09-12 14:18:37,857 epoch 1 - iter 16/40 - loss 2.75101505 - samples/sec: 80.03 - lr: 0.100000\n",
      "2021-09-12 14:18:40,040 epoch 1 - iter 20/40 - loss 2.65639808 - samples/sec: 58.70 - lr: 0.100000\n",
      "2021-09-12 14:18:42,779 epoch 1 - iter 24/40 - loss 2.59294173 - samples/sec: 46.78 - lr: 0.100000\n",
      "2021-09-12 14:18:44,707 epoch 1 - iter 28/40 - loss 2.52709253 - samples/sec: 66.42 - lr: 0.100000\n",
      "2021-09-12 14:18:46,729 epoch 1 - iter 32/40 - loss 2.46731343 - samples/sec: 63.42 - lr: 0.100000\n",
      "2021-09-12 14:18:48,909 epoch 1 - iter 36/40 - loss 2.41122855 - samples/sec: 58.81 - lr: 0.100000\n",
      "2021-09-12 14:18:51,314 epoch 1 - iter 40/40 - loss 2.36582731 - samples/sec: 53.24 - lr: 0.100000\n",
      "2021-09-12 14:18:51,316 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:18:51,317 EPOCH 1 done: loss 2.3658 - lr 0.1000000\n",
      "2021-09-12 14:18:52,295 DEV : loss 1.738567590713501 - f1-score (micro avg)  0.4996\n",
      "2021-09-12 14:18:52,311 BAD EPOCHS (no improvement): 0\n",
      "2021-09-12 14:18:52,314 saving best model\n",
      "2021-09-12 14:18:55,712 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:18:57,779 epoch 2 - iter 4/40 - loss 1.86329623 - samples/sec: 62.03 - lr: 0.100000\n",
      "2021-09-12 14:18:59,784 epoch 2 - iter 8/40 - loss 1.83685980 - samples/sec: 63.90 - lr: 0.100000\n",
      "2021-09-12 14:19:02,098 epoch 2 - iter 12/40 - loss 1.80936442 - samples/sec: 55.41 - lr: 0.100000\n",
      "2021-09-12 14:19:04,242 epoch 2 - iter 16/40 - loss 1.78698243 - samples/sec: 59.77 - lr: 0.100000\n",
      "2021-09-12 14:19:07,068 epoch 2 - iter 20/40 - loss 1.75499245 - samples/sec: 45.34 - lr: 0.100000\n",
      "2021-09-12 14:19:09,254 epoch 2 - iter 24/40 - loss 1.71682205 - samples/sec: 58.63 - lr: 0.100000\n",
      "2021-09-12 14:19:11,369 epoch 2 - iter 28/40 - loss 1.69684969 - samples/sec: 60.65 - lr: 0.100000\n",
      "2021-09-12 14:19:13,380 epoch 2 - iter 32/40 - loss 1.66987899 - samples/sec: 63.75 - lr: 0.100000\n",
      "2021-09-12 14:19:15,669 epoch 2 - iter 36/40 - loss 1.65510440 - samples/sec: 55.99 - lr: 0.100000\n",
      "2021-09-12 14:19:17,851 epoch 2 - iter 40/40 - loss 1.63972684 - samples/sec: 58.72 - lr: 0.100000\n",
      "2021-09-12 14:19:17,853 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:19:17,854 EPOCH 2 done: loss 1.6397 - lr 0.1000000\n",
      "2021-09-12 14:19:19,191 DEV : loss 1.1901036500930786 - f1-score (micro avg)  0.6329\n",
      "2021-09-12 14:19:19,200 BAD EPOCHS (no improvement): 0\n",
      "2021-09-12 14:19:19,204 saving best model\n",
      "2021-09-12 14:19:22,595 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:19:24,827 epoch 3 - iter 4/40 - loss 1.45282875 - samples/sec: 57.40 - lr: 0.100000\n",
      "2021-09-12 14:19:26,629 epoch 3 - iter 8/40 - loss 1.43171591 - samples/sec: 71.08 - lr: 0.100000\n",
      "2021-09-12 14:19:29,109 epoch 3 - iter 12/40 - loss 1.40510470 - samples/sec: 51.71 - lr: 0.100000\n",
      "2021-09-12 14:19:31,301 epoch 3 - iter 16/40 - loss 1.38785658 - samples/sec: 58.44 - lr: 0.100000\n",
      "2021-09-12 14:19:34,194 epoch 3 - iter 20/40 - loss 1.37222957 - samples/sec: 44.28 - lr: 0.100000\n",
      "2021-09-12 14:19:36,718 epoch 3 - iter 24/40 - loss 1.36363843 - samples/sec: 50.74 - lr: 0.100000\n",
      "2021-09-12 14:19:38,428 epoch 3 - iter 28/40 - loss 1.35094315 - samples/sec: 74.93 - lr: 0.100000\n",
      "2021-09-12 14:19:40,297 epoch 3 - iter 32/40 - loss 1.34438788 - samples/sec: 68.54 - lr: 0.100000\n",
      "2021-09-12 14:19:42,241 epoch 3 - iter 36/40 - loss 1.33288042 - samples/sec: 65.91 - lr: 0.100000\n",
      "2021-09-12 14:19:44,203 epoch 3 - iter 40/40 - loss 1.32501632 - samples/sec: 65.31 - lr: 0.100000\n",
      "2021-09-12 14:19:44,204 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:19:44,205 EPOCH 3 done: loss 1.3250 - lr 0.1000000\n",
      "2021-09-12 14:19:45,452 DEV : loss 0.9644802808761597 - f1-score (micro avg)  0.6791\n",
      "2021-09-12 14:19:45,460 BAD EPOCHS (no improvement): 0\n",
      "2021-09-12 14:19:45,463 saving best model\n",
      "2021-09-12 14:19:48,823 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:19:51,243 epoch 4 - iter 4/40 - loss 1.21204799 - samples/sec: 52.97 - lr: 0.100000\n",
      "2021-09-12 14:19:53,155 epoch 4 - iter 8/40 - loss 1.18791711 - samples/sec: 67.05 - lr: 0.100000\n",
      "2021-09-12 14:19:55,159 epoch 4 - iter 12/40 - loss 1.18281220 - samples/sec: 63.90 - lr: 0.100000\n",
      "2021-09-12 14:19:57,258 epoch 4 - iter 16/40 - loss 1.18493397 - samples/sec: 61.03 - lr: 0.100000\n",
      "2021-09-12 14:19:59,504 epoch 4 - iter 20/40 - loss 1.16257422 - samples/sec: 57.00 - lr: 0.100000\n",
      "2021-09-12 14:20:00,814 epoch 4 - iter 24/40 - loss 1.16605312 - samples/sec: 97.98 - lr: 0.100000\n",
      "2021-09-12 14:20:02,619 epoch 4 - iter 28/40 - loss 1.16237264 - samples/sec: 71.03 - lr: 0.100000\n",
      "2021-09-12 14:20:04,604 epoch 4 - iter 32/40 - loss 1.16099420 - samples/sec: 64.55 - lr: 0.100000\n",
      "2021-09-12 14:20:07,246 epoch 4 - iter 36/40 - loss 1.16004964 - samples/sec: 48.50 - lr: 0.100000\n",
      "2021-09-12 14:20:08,528 epoch 4 - iter 40/40 - loss 1.16057476 - samples/sec: 99.93 - lr: 0.100000\n",
      "2021-09-12 14:20:08,529 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:20:08,529 EPOCH 4 done: loss 1.1606 - lr 0.1000000\n",
      "2021-09-12 14:20:09,219 DEV : loss 0.854692280292511 - f1-score (micro avg)  0.7247\n",
      "2021-09-12 14:20:09,229 BAD EPOCHS (no improvement): 0\n",
      "2021-09-12 14:20:09,231 saving best model\n",
      "2021-09-12 14:20:12,318 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:20:14,055 epoch 5 - iter 4/40 - loss 1.10533881 - samples/sec: 73.83 - lr: 0.100000\n",
      "2021-09-12 14:20:16,201 epoch 5 - iter 8/40 - loss 1.06227143 - samples/sec: 59.73 - lr: 0.100000\n",
      "2021-09-12 14:20:18,649 epoch 5 - iter 12/40 - loss 1.05737160 - samples/sec: 52.31 - lr: 0.100000\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 14:20:21,172 epoch 5 - iter 16/40 - loss 1.06878757 - samples/sec: 50.78 - lr: 0.100000\n",
      "2021-09-12 14:20:23,177 epoch 5 - iter 20/40 - loss 1.07085989 - samples/sec: 63.94 - lr: 0.100000\n",
      "2021-09-12 14:20:24,686 epoch 5 - iter 24/40 - loss 1.06256051 - samples/sec: 84.96 - lr: 0.100000\n",
      "2021-09-12 14:20:26,457 epoch 5 - iter 28/40 - loss 1.05921092 - samples/sec: 72.32 - lr: 0.100000\n",
      "2021-09-12 14:20:28,717 epoch 5 - iter 32/40 - loss 1.06685036 - samples/sec: 56.71 - lr: 0.100000\n",
      "2021-09-12 14:20:30,553 epoch 5 - iter 36/40 - loss 1.06126193 - samples/sec: 69.73 - lr: 0.100000\n",
      "2021-09-12 14:20:32,242 epoch 5 - iter 40/40 - loss 1.06015209 - samples/sec: 75.85 - lr: 0.100000\n",
      "2021-09-12 14:20:32,244 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:20:32,244 EPOCH 5 done: loss 1.0602 - lr 0.1000000\n",
      "2021-09-12 14:20:33,188 DEV : loss 0.7507492899894714 - f1-score (micro avg)  0.7534\n",
      "2021-09-12 14:20:33,197 BAD EPOCHS (no improvement): 0\n",
      "2021-09-12 14:20:33,199 saving best model\n",
      "2021-09-12 14:20:36,316 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:20:37,904 epoch 6 - iter 4/40 - loss 0.97245779 - samples/sec: 80.78 - lr: 0.100000\n",
      "2021-09-12 14:20:40,161 epoch 6 - iter 8/40 - loss 0.98686953 - samples/sec: 56.73 - lr: 0.100000\n",
      "2021-09-12 14:20:41,793 epoch 6 - iter 12/40 - loss 0.99799380 - samples/sec: 78.57 - lr: 0.100000\n",
      "2021-09-12 14:20:44,128 epoch 6 - iter 16/40 - loss 0.99830354 - samples/sec: 54.84 - lr: 0.100000\n",
      "2021-09-12 14:20:46,420 epoch 6 - iter 20/40 - loss 0.99298519 - samples/sec: 55.91 - lr: 0.100000\n",
      "2021-09-12 14:20:48,628 epoch 6 - iter 24/40 - loss 0.99306074 - samples/sec: 58.06 - lr: 0.100000\n",
      "2021-09-12 14:20:50,776 epoch 6 - iter 28/40 - loss 1.00697031 - samples/sec: 59.70 - lr: 0.100000\n",
      "2021-09-12 14:20:52,874 epoch 6 - iter 32/40 - loss 0.99983570 - samples/sec: 61.07 - lr: 0.100000\n",
      "2021-09-12 14:20:54,921 epoch 6 - iter 36/40 - loss 1.00203006 - samples/sec: 62.58 - lr: 0.100000\n",
      "2021-09-12 14:20:56,301 epoch 6 - iter 40/40 - loss 1.00165735 - samples/sec: 92.94 - lr: 0.100000\n",
      "2021-09-12 14:20:56,302 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:20:56,303 EPOCH 6 done: loss 1.0017 - lr 0.1000000\n",
      "2021-09-12 14:20:57,423 DEV : loss 0.6820586919784546 - f1-score (micro avg)  0.7793\n",
      "2021-09-12 14:20:57,434 BAD EPOCHS (no improvement): 0\n",
      "2021-09-12 14:20:57,437 saving best model\n",
      "2021-09-12 14:21:00,606 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:21:02,321 epoch 7 - iter 4/40 - loss 0.95338211 - samples/sec: 74.77 - lr: 0.100000\n",
      "2021-09-12 14:21:04,052 epoch 7 - iter 8/40 - loss 0.96060197 - samples/sec: 74.03 - lr: 0.100000\n",
      "2021-09-12 14:21:06,091 epoch 7 - iter 12/40 - loss 0.95325701 - samples/sec: 62.79 - lr: 0.100000\n",
      "2021-09-12 14:21:07,982 epoch 7 - iter 16/40 - loss 0.94529250 - samples/sec: 67.77 - lr: 0.100000\n",
      "2021-09-12 14:21:10,428 epoch 7 - iter 20/40 - loss 0.94632808 - samples/sec: 52.37 - lr: 0.100000\n",
      "2021-09-12 14:21:12,366 epoch 7 - iter 24/40 - loss 0.95112494 - samples/sec: 66.12 - lr: 0.100000\n",
      "2021-09-12 14:21:14,632 epoch 7 - iter 28/40 - loss 0.96172089 - samples/sec: 56.49 - lr: 0.100000\n",
      "2021-09-12 14:21:16,594 epoch 7 - iter 32/40 - loss 0.95944148 - samples/sec: 65.30 - lr: 0.100000\n",
      "2021-09-12 14:21:18,541 epoch 7 - iter 36/40 - loss 0.96169033 - samples/sec: 65.82 - lr: 0.100000\n",
      "2021-09-12 14:21:20,170 epoch 7 - iter 40/40 - loss 0.96012508 - samples/sec: 78.69 - lr: 0.100000\n",
      "2021-09-12 14:21:20,172 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:21:20,173 EPOCH 7 done: loss 0.9601 - lr 0.1000000\n",
      "2021-09-12 14:21:21,085 DEV : loss 0.6539027690887451 - f1-score (micro avg)  0.7941\n",
      "2021-09-12 14:21:21,091 BAD EPOCHS (no improvement): 0\n",
      "2021-09-12 14:21:21,093 saving best model\n",
      "2021-09-12 14:21:24,382 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:21:26,520 epoch 8 - iter 4/40 - loss 0.95757167 - samples/sec: 59.95 - lr: 0.100000\n",
      "2021-09-12 14:21:28,176 epoch 8 - iter 8/40 - loss 0.94037379 - samples/sec: 77.42 - lr: 0.100000\n",
      "2021-09-12 14:21:30,589 epoch 8 - iter 12/40 - loss 0.91781689 - samples/sec: 53.06 - lr: 0.100000\n",
      "2021-09-12 14:21:32,728 epoch 8 - iter 16/40 - loss 0.90918548 - samples/sec: 59.87 - lr: 0.100000\n",
      "2021-09-12 14:21:34,825 epoch 8 - iter 20/40 - loss 0.91916111 - samples/sec: 61.08 - lr: 0.100000\n",
      "2021-09-12 14:21:36,556 epoch 8 - iter 24/40 - loss 0.91465644 - samples/sec: 74.04 - lr: 0.100000\n",
      "2021-09-12 14:21:38,658 epoch 8 - iter 28/40 - loss 0.91037910 - samples/sec: 60.91 - lr: 0.100000\n",
      "2021-09-12 14:21:40,801 epoch 8 - iter 32/40 - loss 0.90909131 - samples/sec: 59.77 - lr: 0.100000\n",
      "2021-09-12 14:21:43,178 epoch 8 - iter 36/40 - loss 0.91031026 - samples/sec: 53.89 - lr: 0.100000\n",
      "2021-09-12 14:21:44,906 epoch 8 - iter 40/40 - loss 0.90851715 - samples/sec: 74.12 - lr: 0.100000\n",
      "2021-09-12 14:21:44,908 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:21:44,909 EPOCH 8 done: loss 0.9085 - lr 0.1000000\n",
      "2021-09-12 14:21:46,103 DEV : loss 0.6110758185386658 - f1-score (micro avg)  0.8147\n",
      "2021-09-12 14:21:46,115 BAD EPOCHS (no improvement): 0\n",
      "2021-09-12 14:21:46,117 saving best model\n",
      "2021-09-12 14:21:49,247 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:21:51,024 epoch 9 - iter 4/40 - loss 0.83283870 - samples/sec: 72.16 - lr: 0.100000\n",
      "2021-09-12 14:21:53,058 epoch 9 - iter 8/40 - loss 0.84616335 - samples/sec: 62.97 - lr: 0.100000\n",
      "2021-09-12 14:21:55,024 epoch 9 - iter 12/40 - loss 0.86952947 - samples/sec: 65.13 - lr: 0.100000\n",
      "2021-09-12 14:21:57,067 epoch 9 - iter 16/40 - loss 0.87457371 - samples/sec: 62.79 - lr: 0.100000\n",
      "2021-09-12 14:21:59,137 epoch 9 - iter 20/40 - loss 0.88786295 - samples/sec: 61.84 - lr: 0.100000\n",
      "2021-09-12 14:22:02,087 epoch 9 - iter 24/40 - loss 0.88834234 - samples/sec: 43.44 - lr: 0.100000\n",
      "2021-09-12 14:22:04,291 epoch 9 - iter 28/40 - loss 0.88542090 - samples/sec: 58.15 - lr: 0.100000\n",
      "2021-09-12 14:22:06,121 epoch 9 - iter 32/40 - loss 0.88182948 - samples/sec: 69.98 - lr: 0.100000\n",
      "2021-09-12 14:22:08,073 epoch 9 - iter 36/40 - loss 0.88162320 - samples/sec: 65.65 - lr: 0.100000\n",
      "2021-09-12 14:22:09,746 epoch 9 - iter 40/40 - loss 0.87693435 - samples/sec: 76.60 - lr: 0.100000\n",
      "2021-09-12 14:22:09,748 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:22:09,749 EPOCH 9 done: loss 0.8769 - lr 0.1000000\n",
      "2021-09-12 14:22:10,774 DEV : loss 0.5772219300270081 - f1-score (micro avg)  0.8142\n",
      "2021-09-12 14:22:10,781 BAD EPOCHS (no improvement): 1\n",
      "2021-09-12 14:22:10,783 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:22:12,908 epoch 10 - iter 4/40 - loss 0.83841672 - samples/sec: 60.31 - lr: 0.100000\n",
      "2021-09-12 14:22:15,226 epoch 10 - iter 8/40 - loss 0.86685564 - samples/sec: 55.27 - lr: 0.100000\n",
      "2021-09-12 14:22:16,774 epoch 10 - iter 12/40 - loss 0.86047732 - samples/sec: 82.73 - lr: 0.100000\n",
      "2021-09-12 14:22:18,675 epoch 10 - iter 16/40 - loss 0.86050282 - samples/sec: 67.41 - lr: 0.100000\n",
      "2021-09-12 14:22:20,382 epoch 10 - iter 20/40 - loss 0.85348012 - samples/sec: 75.07 - lr: 0.100000\n",
      "2021-09-12 14:22:22,523 epoch 10 - iter 24/40 - loss 0.85190324 - samples/sec: 59.85 - lr: 0.100000\n",
      "2021-09-12 14:22:24,621 epoch 10 - iter 28/40 - loss 0.84898018 - samples/sec: 61.07 - lr: 0.100000\n",
      "2021-09-12 14:22:26,456 epoch 10 - iter 32/40 - loss 0.84543635 - samples/sec: 69.81 - lr: 0.100000\n",
      "2021-09-12 14:22:28,590 epoch 10 - iter 36/40 - loss 0.85108934 - samples/sec: 60.04 - lr: 0.100000\n",
      "2021-09-12 14:22:30,545 epoch 10 - iter 40/40 - loss 0.85019496 - samples/sec: 65.50 - lr: 0.100000\n",
      "2021-09-12 14:22:30,546 ----------------------------------------------------------------------------------------------------\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 14:22:30,548 EPOCH 10 done: loss 0.8502 - lr 0.1000000\n",
      "2021-09-12 14:22:31,533 DEV : loss 0.5807152986526489 - f1-score (micro avg)  0.8048\n",
      "2021-09-12 14:22:31,542 BAD EPOCHS (no improvement): 2\n",
      "2021-09-12 14:22:34,846 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:22:34,849 loading file resources\\taggers\\example-upos\\best-model.pt\n",
      "2021-09-12 14:22:38,976 0.7941\t0.7941\t0.7941\t0.7941\n",
      "2021-09-12 14:22:38,977 \n",
      "Results:\n",
      "- F-score (micro) 0.7941\n",
      "- F-score (macro) 0.6524\n",
      "- Accuracy 0.7941\n",
      "\n",
      "By class:\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "        NOUN     0.7278    0.8214    0.7718       420\n",
      "       PUNCT     0.9288    0.9843    0.9557       318\n",
      "        VERB     0.7154    0.7100    0.7127       269\n",
      "         DET     0.9488    0.9577    0.9533       213\n",
      "       PROPN     0.6770    0.7927    0.7303       193\n",
      "        PRON     0.8824    0.8738    0.8780       206\n",
      "         ADP     0.7743    0.9615    0.8578       182\n",
      "         ADJ     0.6460    0.6118    0.6284       170\n",
      "         AUX     0.8583    0.8015    0.8289       136\n",
      "         ADV     0.6341    0.2342    0.3421       111\n",
      "       CCONJ     1.0000    0.9221    0.9595        77\n",
      "        PART     0.8333    0.8475    0.8403        59\n",
      "         NUM     0.7872    0.5286    0.6325        70\n",
      "       SCONJ     0.6061    0.5128    0.5556        39\n",
      "        INTJ     0.8000    0.3077    0.4444        13\n",
      "         SYM     0.0000    0.0000    0.0000        14\n",
      "           X     0.0000    0.0000    0.0000         6\n",
      "\n",
      "   micro avg     0.7941    0.7941    0.7941      2496\n",
      "   macro avg     0.6953    0.6393    0.6524      2496\n",
      "weighted avg     0.7857    0.7941    0.7827      2496\n",
      " samples avg     0.7941    0.7941    0.7941      2496\n",
      "\n",
      "2021-09-12 14:22:38,978 ----------------------------------------------------------------------------------------------------\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'test_score': 0.7940705128205128,\n",
       " 'dev_score_history': [0.4995523724261415,\n",
       "  0.6329453894359892,\n",
       "  0.6790510295434199,\n",
       "  0.724709042076992,\n",
       "  0.7533572068039391,\n",
       "  0.779319606087735,\n",
       "  0.7940913160250671,\n",
       "  0.8146821844225605,\n",
       "  0.8142345568487018,\n",
       "  0.8048343777976723],\n",
       " 'train_loss_history': [2.3658273104937466,\n",
       "  1.6397268394027453,\n",
       "  1.3250163243015993,\n",
       "  1.1605747645364501,\n",
       "  1.06015209054015,\n",
       "  1.0016573477647408,\n",
       "  0.9601250772325858,\n",
       "  0.9085171510913713,\n",
       "  0.8769343514309416,\n",
       "  0.8501949622360868],\n",
       " 'dev_loss_history': [tensor(1.7386),\n",
       "  tensor(1.1901),\n",
       "  tensor(0.9645),\n",
       "  tensor(0.8547),\n",
       "  tensor(0.7507),\n",
       "  tensor(0.6821),\n",
       "  tensor(0.6539),\n",
       "  tensor(0.6111),\n",
       "  tensor(0.5772),\n",
       "  tensor(0.5807)]}"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from flair.datasets import UD_ENGLISH\n",
    "from flair.embeddings import WordEmbeddings, StackedEmbeddings\n",
    "from flair.models import SequenceTagger\n",
    "from flair.trainers import ModelTrainer\n",
    "\n",
    "# 1. get the corpus\n",
    "corpus = UD_ENGLISH().downsample(0.1)\n",
    "print(corpus)\n",
    "\n",
    "# 2. what label do we want to predict? (universal part-of-speech tags)\n",
    "label_type = 'upos'\n",
    "\n",
    "# 3. make the label dictionary from the corpus\n",
    "label_dict = corpus.make_label_dictionary(label_type=label_type)\n",
    "print(label_dict)\n",
    "\n",
    "# 4. initialize embeddings\n",
    "embedding_types = [\n",
    "\n",
    "    WordEmbeddings('glove'),\n",
    "\n",
    "    # uncomment this line to use character embeddings\n",
    "    # CharacterEmbeddings(),\n",
    "\n",
    "    # uncomment these lines to use flair embeddings\n",
    "    # FlairEmbeddings('news-forward'),\n",
    "    # FlairEmbeddings('news-backward'),\n",
    "]\n",
    "\n",
    "embeddings = StackedEmbeddings(embeddings=embedding_types)  # combine the embeddings into one stack\n",
    "\n",
    "# 5. initialize sequence tagger\n",
    "tagger = SequenceTagger(hidden_size=256,\n",
    "                        embeddings=embeddings,\n",
    "                        tag_dictionary=label_dict,\n",
    "                        tag_type=label_type,\n",
    "                        use_crf=True)\n",
    "\n",
    "# 6. initialize trainer\n",
    "trainer = ModelTrainer(tagger, corpus)\n",
    "\n",
    "# 7. start training\n",
    "trainer.train('resources/taggers/example-upos',\n",
    "              learning_rate=0.1,\n",
    "              mini_batch_size=32,\n",
    "              max_epochs=10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "17296323",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T06:26:57.309066Z",
     "start_time": "2021-09-12T06:26:57.299920Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Corpus: 1254 train + 200 dev + 208 test sentences\n"
     ]
    }
   ],
   "source": [
    "print(corpus)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a106db4f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T06:27:14.834916Z",
     "start_time": "2021-09-12T06:27:14.820133Z"
    },
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sentence: \"The MoI in Iraq is equivalent to the US FBI , so this would be like having J. Edgar Hoover unwittingly employ at a high level members of the Weathermen bombers back in the 1960s .\"   [− Tokens: 36  − Token-Labels: \"The <the/DET/DT/det/Def/Art> MoI <MoI/PROPN/NNP/nsubj/Sing> in <in/ADP/IN/case> Iraq <Iraq/PROPN/NNP/nmod/Sing> is <be/AUX/VBZ/cop/Ind/Sing/3/Pres/Fin> equivalent <equivalent/ADJ/JJ/root/Pos> to <to/ADP/IN/case> the <the/DET/DT/det/Def/Art> US <US/PROPN/NNP/compound/Sing> FBI <FBI/PROPN/NNP/obl/Sing> , <,/PUNCT/,/punct> so <so/ADV/RB/advmod> this <this/PRON/DT/nsubj/Sing/Dem> would <would/AUX/MD/aux/Fin> be <be/VERB/VB/parataxis/Inf> like <like/SCONJ/IN/mark> having <have/VERB/VBG/advcl/Ger> J. <J./PROPN/NNP/nsubj/Sing> Edgar <Edgar/PROPN/NNP/flat/Sing> Hoover <Hoover/PROPN/NNP/flat/Sing> unwittingly <unwittingly/ADV/RB/advmod> employ <employ/VERB/VB/ccomp/Inf> at <at/ADP/IN/case> a <a/DET/DT/det/Ind/Art> high <high/ADJ/JJ/amod/Pos> level <level/NOUN/NN/obl/Sing> members <member/NOUN/NNS/obj/Plur> of <of/ADP/IN/case> the <the/DET/DT/det/Def/Art> Weathermen <Weathermen/PROPN/NNPS/compound/Plur> bombers <bomber/NOUN/NNS/nmod/Plur> back <back/ADV/RB/advmod> in <in/ADP/IN/case> the <the/DET/DT/det/Def/Art> 1960s <1960/NOUN/NNS/obl/Plur> . <./PUNCT/./punct>\"  − Sentence-Labels: {'upos': [DET [The (1)] (1.0), PROPN [MoI (2)] (1.0), ADP [in (3)] (1.0), PROPN [Iraq (4)] (1.0), AUX [is (5)] (1.0), ADJ [equivalent (6)] (1.0), ADP [to (7)] (1.0), DET [the (8)] (1.0), PROPN [US (9)] (1.0), PROPN [FBI (10)] (1.0), PUNCT [, (11)] (1.0), ADV [so (12)] (1.0), PRON [this (13)] (1.0), AUX [would (14)] (1.0), VERB [be (15)] (1.0), SCONJ [like (16)] (1.0), VERB [having (17)] (1.0), PROPN [J. 
(18)] (1.0), PROPN [Edgar (19)] (1.0), PROPN [Hoover (20)] (1.0), ADV [unwittingly (21)] (1.0), VERB [employ (22)] (1.0), ADP [at (23)] (1.0), DET [a (24)] (1.0), ADJ [high (25)] (1.0), NOUN [level (26)] (1.0), NOUN [members (27)] (1.0), ADP [of (28)] (1.0), DET [the (29)] (1.0), PROPN [Weathermen (30)] (1.0), NOUN [bombers (31)] (1.0), ADV [back (32)] (1.0), ADP [in (33)] (1.0), DET [the (34)] (1.0), NOUN [1960s (35)] (1.0), PUNCT [. (36)] (1.0)]}]\n"
     ]
    }
   ],
   "source": [
    "print(corpus.train[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "be3cb8cc",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T06:29:16.504055Z",
     "start_time": "2021-09-12T06:29:16.490715Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Dictionary with 19 tags: DET, PROPN, ADP, AUX, ADJ, PUNCT, ADV, PRON, VERB, SCONJ, NOUN, PART, CCONJ, X, NUM, INTJ, SYM, <START>, <STOP>\n"
     ]
    }
   ],
   "source": [
    "print(label_dict)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e8e1c451",
   "metadata": {},
   "source": [
    "Alternatively, try stacked embeddings of FlairEmbeddings and GloVe, trained over the full data for 150 epochs. This will give you the state-of-the-art accuracy reported in Akbik et al. (2018)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ded0c191",
   "metadata": {},
   "source": [
    "Once the model is trained, you can use it to predict tags for new sentences. Simply call the model's predict() method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ea26b04",
   "metadata": {},
   "outputs": [],
   "source": [
    "from flair.data import Sentence\n",
    "from flair.models import SequenceTagger\n",
    "\n",
    "# load the model you trained (the path matches the folder used during training)\n",
    "model = SequenceTagger.load('resources/taggers/example-upos/final-model.pt')\n",
    "\n",
    "# create example sentence\n",
    "sentence = Sentence('I love Berlin')\n",
    "\n",
    "# predict tags and print\n",
    "model.predict(sentence)\n",
    "\n",
    "print(sentence.to_tagged_string())\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f00db1b3",
   "metadata": {},
   "source": [
    "If the model works well, it will correctly tag \"love\" as a verb in this example."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "09cd6b64",
   "metadata": {
    "ExecuteTime": {
     "start_time": "2021-09-12T06:12:03.564Z"
    }
   },
   "source": [
    "## Training a Named Entity Recognition (NER) Model with Flair Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4ad9d718",
   "metadata": {},
   "source": [
    "Training a sequence labeling model for NER requires only minor changes to the script above: load an NER corpus such as CONLL_03 (this requires manually downloading the data, or you can use a different NER corpus), change label_type to 'ner', and use a StackedEmbeddings made up of GloVe and Flair embeddings:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "0c482393",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T06:40:00.088699Z",
     "start_time": "2021-09-12T06:40:00.065835Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 14:40:00,066 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:40:00,067 WARNING: CoNLL-03 dataset not found at \"C:\\Users\\sunbe\\.flair\\datasets\\conll_03\".\n",
      "2021-09-12 14:40:00,067 Instructions for obtaining the data can be found here: https://www.clips.uantwerpen.be/conll2003/ner/\"\n",
      "2021-09-12 14:40:00,068 ----------------------------------------------------------------------------------------------------\n"
     ]
    },
    {
     "ename": "FileNotFoundError",
     "evalue": "[WinError 3] 系统找不到指定的路径。: 'C:\\\\Users\\\\sunbe\\\\.flair\\\\datasets\\\\conll_03'",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mFileNotFoundError\u001b[0m                         Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-23-c2112171059b>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m      5\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      6\u001b[0m \u001b[1;31m# 1. get the corpus # 获取训练语料库\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 7\u001b[1;33m \u001b[0mcorpus\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mCONLL_03\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m      8\u001b[0m \u001b[0mprint\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mcorpus\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\datasets\\sequence_labeling.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, base_path, tag_to_bioes, in_memory, **corpusargs)\u001b[0m\n\u001b[0;32m    361\u001b[0m             \u001b[0mlog\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mwarning\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"-\"\u001b[0m \u001b[1;33m*\u001b[0m \u001b[1;36m100\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    362\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 363\u001b[1;33m         super(CONLL_03, self).__init__(\n\u001b[0m\u001b[0;32m    364\u001b[0m             \u001b[0mdata_folder\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    365\u001b[0m             \u001b[0mcolumns\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\datasets\\sequence_labeling.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, data_folder, column_format, train_file, test_file, dev_file, tag_to_bioes, column_delimiter, comment_symbol, encoding, document_separator_token, skip_first_line, in_memory, label_name_map, banned_sentences, autofind_splits, name, **corpusargs)\u001b[0m\n\u001b[0;32m     56\u001b[0m         \u001b[1;31m# find train, dev and test files if not specified\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     57\u001b[0m         \u001b[0mdev_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtest_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtrain_file\u001b[0m \u001b[1;33m=\u001b[0m\u001b[0;31m \u001b[0m\u001b[0;31m\\\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 58\u001b[1;33m             \u001b[0mfind_train_dev_test_files\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mdata_folder\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mdev_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtest_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtrain_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mautofind_splits\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     59\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     60\u001b[0m         \u001b[1;31m# get train data\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\datasets\\base.py\u001b[0m in \u001b[0;36mfind_train_dev_test_files\u001b[1;34m(data_folder, dev_file, test_file, train_file, autofind_splits)\u001b[0m\n\u001b[0;32m    265\u001b[0m     \u001b[1;31m# automatically identify train / test / dev files\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    266\u001b[0m     \u001b[1;32mif\u001b[0m \u001b[0mtrain_file\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m \u001b[1;32mand\u001b[0m \u001b[0mautofind_splits\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 267\u001b[1;33m         \u001b[1;32mfor\u001b[0m \u001b[0mfile\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mdata_folder\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0miterdir\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    268\u001b[0m             \u001b[0mfile_name\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mfile\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mname\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    269\u001b[0m             \u001b[1;32mif\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[0msuffixes_to_ignore\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0misdisjoint\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfile\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0msuffixes\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\pathlib.py\u001b[0m in \u001b[0;36miterdir\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m   1119\u001b[0m         \u001b[1;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_closed\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1120\u001b[0m             \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_raise_closed\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m-> 1121\u001b[1;33m         \u001b[1;32mfor\u001b[0m \u001b[0mname\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_accessor\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mlistdir\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m   1122\u001b[0m             \u001b[1;32mif\u001b[0m \u001b[0mname\u001b[0m \u001b[1;32min\u001b[0m \u001b[1;33m{\u001b[0m\u001b[1;34m'.'\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;34m'..'\u001b[0m\u001b[1;33m}\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1123\u001b[0m                 \u001b[1;31m# Yielding a path object for these makes little sense\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mFileNotFoundError\u001b[0m: [WinError 3] 系统找不到指定的路径。: 'C:\\\\Users\\\\sunbe\\\\.flair\\\\datasets\\\\conll_03'"
     ]
    }
   ],
   "source": [
    "from flair.datasets import CONLL_03\n",
    "from flair.embeddings import WordEmbeddings, FlairEmbeddings, StackedEmbeddings\n",
    "from flair.models import SequenceTagger\n",
    "from flair.trainers import ModelTrainer\n",
    "\n",
    "# 1. get the corpus\n",
    "corpus = CONLL_03()\n",
    "print(corpus)"
   ]
  },
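  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "CONLL_03 expects the manually downloaded CoNLL-03 files, so the cell above fails without them. As a stand-in for illustration, a freely downloadable NER corpus such as WNUT_17 can be used instead; Flair downloads it automatically on first use:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# alternative NER corpus that downloads automatically (no manual setup required)\n",
    "from flair.datasets import WNUT_17\n",
    "\n",
    "corpus = WNUT_17()\n",
    "print(corpus)"
   ]
  },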
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "80bf136e",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T06:31:39.769149Z",
     "start_time": "2021-09-12T06:31:39.740104Z"
    },
    "collapsed": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 14:31:39,744 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 14:31:39,745 WARNING: CoNLL-03 dataset not found at \"C:\\Users\\sunbe\\.flair\\datasets\\conll_03\".\n",
      "2021-09-12 14:31:39,745 Instructions for obtaining the data can be found here: https://www.clips.uantwerpen.be/conll2003/ner/\"\n",
      "2021-09-12 14:31:39,746 ----------------------------------------------------------------------------------------------------\n"
     ]
    },
    {
     "ename": "FileNotFoundError",
     "evalue": "[WinError 3] 系统找不到指定的路径。: 'C:\\\\Users\\\\sunbe\\\\.flair\\\\datasets\\\\conll_03'",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mFileNotFoundError\u001b[0m                         Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-7-87ebd9c1c139>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m      5\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      6\u001b[0m \u001b[1;31m# 1. get the corpus # 获取训练语料库\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 7\u001b[1;33m \u001b[0mcorpus\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mCONLL_03\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m      8\u001b[0m \u001b[0mprint\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mcorpus\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      9\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\datasets\\sequence_labeling.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, base_path, tag_to_bioes, in_memory, **corpusargs)\u001b[0m\n\u001b[0;32m    361\u001b[0m             \u001b[0mlog\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mwarning\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"-\"\u001b[0m \u001b[1;33m*\u001b[0m \u001b[1;36m100\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    362\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 363\u001b[1;33m         super(CONLL_03, self).__init__(\n\u001b[0m\u001b[0;32m    364\u001b[0m             \u001b[0mdata_folder\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    365\u001b[0m             \u001b[0mcolumns\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\datasets\\sequence_labeling.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, data_folder, column_format, train_file, test_file, dev_file, tag_to_bioes, column_delimiter, comment_symbol, encoding, document_separator_token, skip_first_line, in_memory, label_name_map, banned_sentences, autofind_splits, name, **corpusargs)\u001b[0m\n\u001b[0;32m     56\u001b[0m         \u001b[1;31m# find train, dev and test files if not specified\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     57\u001b[0m         \u001b[0mdev_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtest_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtrain_file\u001b[0m \u001b[1;33m=\u001b[0m\u001b[0;31m \u001b[0m\u001b[0;31m\\\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 58\u001b[1;33m             \u001b[0mfind_train_dev_test_files\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mdata_folder\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mdev_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtest_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtrain_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mautofind_splits\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     59\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     60\u001b[0m         \u001b[1;31m# get train data\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\datasets\\base.py\u001b[0m in \u001b[0;36mfind_train_dev_test_files\u001b[1;34m(data_folder, dev_file, test_file, train_file, autofind_splits)\u001b[0m\n\u001b[0;32m    265\u001b[0m     \u001b[1;31m# automatically identify train / test / dev files\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    266\u001b[0m     \u001b[1;32mif\u001b[0m \u001b[0mtrain_file\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m \u001b[1;32mand\u001b[0m \u001b[0mautofind_splits\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 267\u001b[1;33m         \u001b[1;32mfor\u001b[0m \u001b[0mfile\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mdata_folder\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0miterdir\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    268\u001b[0m             \u001b[0mfile_name\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mfile\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mname\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    269\u001b[0m             \u001b[1;32mif\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[0msuffixes_to_ignore\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0misdisjoint\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfile\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0msuffixes\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\pathlib.py\u001b[0m in \u001b[0;36miterdir\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m   1119\u001b[0m         \u001b[1;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_closed\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1120\u001b[0m             \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_raise_closed\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m-> 1121\u001b[1;33m         \u001b[1;32mfor\u001b[0m \u001b[0mname\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_accessor\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mlistdir\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m   1122\u001b[0m             \u001b[1;32mif\u001b[0m \u001b[0mname\u001b[0m \u001b[1;32min\u001b[0m \u001b[1;33m{\u001b[0m\u001b[1;34m'.'\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;34m'..'\u001b[0m\u001b[1;33m}\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1123\u001b[0m                 \u001b[1;31m# Yielding a path object for these makes little sense\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mFileNotFoundError\u001b[0m: [WinError 3] 系统找不到指定的路径。: 'C:\\\\Users\\\\sunbe\\\\.flair\\\\datasets\\\\conll_03'"
     ]
    }
   ],
   "source": [
    "from flair.datasets import CONLL_03\n",
    "from flair.embeddings import WordEmbeddings, FlairEmbeddings, StackedEmbeddings\n",
    "from flair.models import SequenceTagger\n",
    "from flair.trainers import ModelTrainer\n",
    "\n",
    "# 1. get the corpus (fails here because the CoNLL-03 data has not been downloaded)\n",
    "corpus = CONLL_03()\n",
    "print(corpus)\n",
    "\n",
    "# 2. what label do we want to predict?\n",
    "label_type = 'ner'\n",
    "\n",
    "# 3. make the label dictionary from the corpus\n",
    "label_dict = corpus.make_label_dictionary(label_type=label_type)\n",
    "print(label_dict)\n",
    "\n",
    "# 4. initialize embedding stack with Flair and GloVe\n",
    "embedding_types = [\n",
    "    WordEmbeddings('glove'),\n",
    "    FlairEmbeddings('news-forward'),\n",
    "    FlairEmbeddings('news-backward'),\n",
    "]\n",
    "\n",
    "embeddings = StackedEmbeddings(embeddings=embedding_types)\n",
    "\n",
    "# 5. initialize sequence tagger\n",
    "tagger = SequenceTagger(hidden_size=256,\n",
    "                        embeddings=embeddings,\n",
    "                        tag_dictionary=label_dict,\n",
    "                        tag_type=label_type,\n",
    "                        use_crf=True)\n",
    "\n",
    "# 6. initialize trainer\n",
    "trainer = ModelTrainer(tagger, corpus)\n",
    "\n",
    "# 7. start training # 开始训练\n",
    "trainer.train('resources/taggers/sota-ner-flair',\n",
    "              learning_rate=0.1,\n",
    "              mini_batch_size=32,\n",
    "              max_epochs=150)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "de67c40a",
   "metadata": {},
   "source": [
    "这将给你类似于Akbik等人(2018)报告的最先进的数字。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "34126066",
   "metadata": {},
   "source": [
    "## Training a Named Entity Recognition (NER) Model with Transformers（使用transformer训练命名实体识别模型）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ac92cebf",
   "metadata": {},
   "source": [
    "如果您使用transformer作为嵌入，微调它们并使用完整的文档上下文，您可以得到更好的数字(详情请参阅我们的FLERT论文)。它是最先进的，但比上面的模型慢得多"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88ba2c27",
   "metadata": {},
   "source": [
    "改变脚本，使用transformer嵌入和改变训练方法：使用AdamW优化器微调和微小的学习率，而不是SGD:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "1e23c4d3",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T10:08:41.628347Z",
     "start_time": "2021-09-12T10:08:41.595367Z"
    },
    "collapsed": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 18:08:41,599 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:08:41,600 WARNING: CoNLL-03 dataset not found at \"C:\\Users\\sunbe\\.flair\\datasets\\conll_03\".\n",
      "2021-09-12 18:08:41,600 Instructions for obtaining the data can be found here: https://www.clips.uantwerpen.be/conll2003/ner/\"\n",
      "2021-09-12 18:08:41,600 ----------------------------------------------------------------------------------------------------\n"
     ]
    },
    {
     "ename": "FileNotFoundError",
     "evalue": "[WinError 3] 系统找不到指定的路径。: 'C:\\\\Users\\\\sunbe\\\\.flair\\\\datasets\\\\conll_03'",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mFileNotFoundError\u001b[0m                         Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-25-6a2b1432ef8c>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m      7\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      8\u001b[0m \u001b[1;31m# 1. get the corpus # 获取语料库\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 9\u001b[1;33m \u001b[0mcorpus\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mCONLL_03\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     10\u001b[0m \u001b[0mprint\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mcorpus\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     11\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\datasets\\sequence_labeling.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, base_path, tag_to_bioes, in_memory, **corpusargs)\u001b[0m\n\u001b[0;32m    361\u001b[0m             \u001b[0mlog\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mwarning\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"-\"\u001b[0m \u001b[1;33m*\u001b[0m \u001b[1;36m100\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    362\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 363\u001b[1;33m         super(CONLL_03, self).__init__(\n\u001b[0m\u001b[0;32m    364\u001b[0m             \u001b[0mdata_folder\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    365\u001b[0m             \u001b[0mcolumns\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\datasets\\sequence_labeling.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, data_folder, column_format, train_file, test_file, dev_file, tag_to_bioes, column_delimiter, comment_symbol, encoding, document_separator_token, skip_first_line, in_memory, label_name_map, banned_sentences, autofind_splits, name, **corpusargs)\u001b[0m\n\u001b[0;32m     56\u001b[0m         \u001b[1;31m# find train, dev and test files if not specified\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     57\u001b[0m         \u001b[0mdev_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtest_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtrain_file\u001b[0m \u001b[1;33m=\u001b[0m\u001b[0;31m \u001b[0m\u001b[0;31m\\\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 58\u001b[1;33m             \u001b[0mfind_train_dev_test_files\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mdata_folder\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mdev_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtest_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtrain_file\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mautofind_splits\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     59\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     60\u001b[0m         \u001b[1;31m# get train data\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\datasets\\base.py\u001b[0m in \u001b[0;36mfind_train_dev_test_files\u001b[1;34m(data_folder, dev_file, test_file, train_file, autofind_splits)\u001b[0m\n\u001b[0;32m    265\u001b[0m     \u001b[1;31m# automatically identify train / test / dev files\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    266\u001b[0m     \u001b[1;32mif\u001b[0m \u001b[0mtrain_file\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m \u001b[1;32mand\u001b[0m \u001b[0mautofind_splits\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 267\u001b[1;33m         \u001b[1;32mfor\u001b[0m \u001b[0mfile\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mdata_folder\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0miterdir\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    268\u001b[0m             \u001b[0mfile_name\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mfile\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mname\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    269\u001b[0m             \u001b[1;32mif\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[0msuffixes_to_ignore\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0misdisjoint\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfile\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0msuffixes\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\pathlib.py\u001b[0m in \u001b[0;36miterdir\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m   1119\u001b[0m         \u001b[1;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_closed\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1120\u001b[0m             \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_raise_closed\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m-> 1121\u001b[1;33m         \u001b[1;32mfor\u001b[0m \u001b[0mname\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_accessor\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mlistdir\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m   1122\u001b[0m             \u001b[1;32mif\u001b[0m \u001b[0mname\u001b[0m \u001b[1;32min\u001b[0m \u001b[1;33m{\u001b[0m\u001b[1;34m'.'\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;34m'..'\u001b[0m\u001b[1;33m}\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1123\u001b[0m                 \u001b[1;31m# Yielding a path object for these makes little sense\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mFileNotFoundError\u001b[0m: [WinError 3] 系统找不到指定的路径。: 'C:\\\\Users\\\\sunbe\\\\.flair\\\\datasets\\\\conll_03'"
     ]
    }
   ],
   "source": [
    "from flair.datasets import CONLL_03\n",
    "from flair.embeddings import TransformerWordEmbeddings\n",
    "from flair.models import SequenceTagger\n",
    "from flair.trainers import ModelTrainer\n",
    "import torch\n",
    "from torch.optim.lr_scheduler import OneCycleLR\n",
    "\n",
    "# 1. get the corpus # 获取语料库\n",
    "corpus = CONLL_03()\n",
    "print(corpus)\n",
    "\n",
    "# 2. what label do we want to predict? # 预测的标签类型\n",
    "label_type = 'ner'\n",
    "\n",
    "# 3. make the label dictionary from the corpus # 从语料库中获取标签字典\n",
    "label_dict = corpus.make_label_dictionary(label_type=label_type)\n",
    "print(label_dict)\n",
    "\n",
    "# 4. initialize fine-tuneable transformer embeddings WITH document context # 使用文档上下文初始化可微调的transformer嵌入\n",
    "embeddings = TransformerWordEmbeddings(\n",
    "    model='xlm-roberta-large',\n",
    "    layers=\"-1\",\n",
    "    subtoken_pooling=\"first\",\n",
    "    fine_tune=True, # 微调模型\n",
    "    use_context=True, # 使用上下文\n",
    ") \n",
    "\n",
    "# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)\n",
    "tagger = SequenceTagger(\n",
    "    hidden_size=256,\n",
    "    embeddings=embeddings,\n",
    "    tag_dictionary=label_dict,\n",
    "    tag_type=label_type,\n",
    "    use_crf=False,\n",
    "    use_rnn=False,\n",
    "    reproject_embeddings=False,\n",
    ")\n",
    "\n",
    "# 6. initialize trainer with AdamW optimizer\n",
    "trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)\n",
    "\n",
    "# 7. run training with XLM parameters (20 epochs, small LR, one-cycle learning rate scheduling)\n",
    "trainer.train('resources/taggers/sota-ner-flert',\n",
    "              learning_rate=5.0e-6,\n",
    "              mini_batch_size=4,\n",
    "              mini_batch_chunk_size=1,  # remove this parameter to speed up computation if you have a big GPU\n",
    "              max_epochs=20,  # 10 is also good\n",
    "              scheduler=OneCycleLR,\n",
    "              embeddings_storage_mode='none',\n",
    "              weight_decay=0.,\n",
    "              )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "df280f3d",
   "metadata": {},
   "source": [
    "这将给你类似于Schweter和Akbik(2021年)报告的最先进的数字。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3fc43dea",
   "metadata": {},
   "source": [
    "## Training a Text Classification Model（训练文本分类模型）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "07d2b32d",
   "metadata": {},
   "source": [
    "训练其他类型的模型与上面训练序列标签的脚本非常相似。对于文本分类，请使用适当的语料库，并使用文档级嵌入而不是单词级嵌入(请参阅这两方面的教程以了解差异)。剩下的和以前完全一样!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79c63b4c",
   "metadata": {},
   "source": [
    "文本分类的最佳结果使用TransformerDocumentEmbeddings微调转换器，如下面的代码所示:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d0d27f74",
   "metadata": {},
   "source": [
    "(如果你没有大型GPU来微调transformer，请尝试DocumentPoolEmbeddings或documentnnembeddings;有时候它们也能工作!)"
   ]
  },
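  {
   "cell_type": "markdown",
   "id": "a3f9d2e1",
   "metadata": {},
   "source": [
    "As a minimal sketch of the lighter-weight alternative (assuming the standard `DocumentPoolEmbeddings` constructor, which pools word embeddings into a single document vector), it could be initialized like this and used as the document embedding in step 4:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7c4e8f2",
   "metadata": {},
   "outputs": [],
   "source": [
    "from flair.embeddings import WordEmbeddings, DocumentPoolEmbeddings\n",
    "\n",
    "# mean-pool GloVe word embeddings into one document embedding (no GPU fine-tuning needed)\n",
    "document_embeddings = DocumentPoolEmbeddings([WordEmbeddings('glove')])"
   ]
  },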
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "4ac1eb49",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T11:13:39.111202Z",
     "start_time": "2021-09-12T10:43:10.478149Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 18:43:10,481 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\trec_6\n",
      "2021-09-12 18:43:10,481 Train: C:\\Users\\sunbe\\.flair\\datasets\\trec_6\\train.txt\n",
      "2021-09-12 18:43:10,482 Dev: None\n",
      "2021-09-12 18:43:10,482 Test: C:\\Users\\sunbe\\.flair\\datasets\\trec_6\\test.txt\n",
      "2021-09-12 18:43:10,770 Initialized corpus C:\\Users\\sunbe\\.flair\\datasets\\trec_6 (label type name is 'question_class')\n",
      "2021-09-12 18:43:10,771 Computing label dictionary. Progress:\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|███████████████████████████████████████████████████████████████████████████| 4907/4907 [00:00<00:00, 27787.42it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 18:43:10,949 Corpus contains the labels: question_class (#4907)\n",
      "2021-09-12 18:43:10,949 Created (for label 'question_class') Dictionary with 6 tags: DESC, ENTY, ABBR, HUM, NUM, LOC\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 18:43:21,793 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:43:21,794 Model: \"TextClassifier(\n",
      "  (loss_function): CrossEntropyLoss()\n",
      "  (document_embeddings): TransformerDocumentEmbeddings(\n",
      "    (model): DistilBertModel(\n",
      "      (embeddings): Embeddings(\n",
      "        (word_embeddings): Embedding(30522, 768, padding_idx=0)\n",
      "        (position_embeddings): Embedding(512, 768)\n",
      "        (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "        (dropout): Dropout(p=0.1, inplace=False)\n",
      "      )\n",
      "      (transformer): Transformer(\n",
      "        (layer): ModuleList(\n",
      "          (0): TransformerBlock(\n",
      "            (attention): MultiHeadSelfAttention(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (q_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (k_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (v_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (out_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "            )\n",
      "            (sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "            (ffn): FFN(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (lin1): Linear(in_features=768, out_features=3072, bias=True)\n",
      "              (lin2): Linear(in_features=3072, out_features=768, bias=True)\n",
      "            )\n",
      "            (output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "          )\n",
      "          (1): TransformerBlock(\n",
      "            (attention): MultiHeadSelfAttention(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (q_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (k_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (v_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (out_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "            )\n",
      "            (sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "            (ffn): FFN(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (lin1): Linear(in_features=768, out_features=3072, bias=True)\n",
      "              (lin2): Linear(in_features=3072, out_features=768, bias=True)\n",
      "            )\n",
      "            (output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "          )\n",
      "          (2): TransformerBlock(\n",
      "            (attention): MultiHeadSelfAttention(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (q_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (k_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (v_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (out_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "            )\n",
      "            (sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "            (ffn): FFN(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (lin1): Linear(in_features=768, out_features=3072, bias=True)\n",
      "              (lin2): Linear(in_features=3072, out_features=768, bias=True)\n",
      "            )\n",
      "            (output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "          )\n",
      "          (3): TransformerBlock(\n",
      "            (attention): MultiHeadSelfAttention(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (q_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (k_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (v_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (out_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "            )\n",
      "            (sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "            (ffn): FFN(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (lin1): Linear(in_features=768, out_features=3072, bias=True)\n",
      "              (lin2): Linear(in_features=3072, out_features=768, bias=True)\n",
      "            )\n",
      "            (output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "          )\n",
      "          (4): TransformerBlock(\n",
      "            (attention): MultiHeadSelfAttention(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (q_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (k_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (v_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (out_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "            )\n",
      "            (sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "            (ffn): FFN(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (lin1): Linear(in_features=768, out_features=3072, bias=True)\n",
      "              (lin2): Linear(in_features=3072, out_features=768, bias=True)\n",
      "            )\n",
      "            (output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "          )\n",
      "          (5): TransformerBlock(\n",
      "            (attention): MultiHeadSelfAttention(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (q_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (k_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (v_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "              (out_lin): Linear(in_features=768, out_features=768, bias=True)\n",
      "            )\n",
      "            (sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "            (ffn): FFN(\n",
      "              (dropout): Dropout(p=0.1, inplace=False)\n",
      "              (lin1): Linear(in_features=768, out_features=3072, bias=True)\n",
      "              (lin2): Linear(in_features=3072, out_features=768, bias=True)\n",
      "            )\n",
      "            (output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n",
      "          )\n",
      "        )\n",
      "      )\n",
      "    )\n",
      "  )\n",
      "  (decoder): Linear(in_features=768, out_features=6, bias=True)\n",
      "  (weights): None\n",
      "  (weight_tensor) None\n",
      ")\"\n",
      "2021-09-12 18:43:21,794 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:43:21,795 Corpus: \"Corpus: 4907 train + 545 dev + 500 test sentences\"\n",
      "2021-09-12 18:43:21,795 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:43:21,796 Parameters:\n",
      "2021-09-12 18:43:21,796  - learning_rate: \"5e-05\"\n",
      "2021-09-12 18:43:21,797  - mini_batch_size: \"4\"\n",
      "2021-09-12 18:43:21,797  - patience: \"3\"\n",
      "2021-09-12 18:43:21,797  - anneal_factor: \"0.5\"\n",
      "2021-09-12 18:43:21,799  - max_epochs: \"2\"\n",
      "2021-09-12 18:43:21,800  - shuffle: \"True\"\n",
      "2021-09-12 18:43:21,800  - train_with_dev: \"False\"\n",
      "2021-09-12 18:43:21,801  - batch_growth_annealing: \"False\"\n",
      "2021-09-12 18:43:21,801 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:43:21,801 Model training base path: \"resources\\taggers\\question-classification-with-transformer\"\n",
      "2021-09-12 18:43:21,802 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:43:21,803 Device: cpu\n",
      "2021-09-12 18:43:21,803 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:43:21,804 Embeddings storage mode: none\n",
      "2021-09-12 18:43:21,806 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:44:49,293 epoch 1 - iter 122/1227 - loss 0.23521638 - samples/sec: 5.58 - lr: 0.000050\n",
      "2021-09-12 18:46:18,119 epoch 1 - iter 244/1227 - loss 0.17452125 - samples/sec: 5.49 - lr: 0.000049\n",
      "2021-09-12 18:47:43,979 epoch 1 - iter 366/1227 - loss 0.15243659 - samples/sec: 5.68 - lr: 0.000047\n",
      "2021-09-12 18:49:10,332 epoch 1 - iter 488/1227 - loss 0.13804544 - samples/sec: 5.65 - lr: 0.000045\n",
      "2021-09-12 18:50:38,218 epoch 1 - iter 610/1227 - loss 0.12626535 - samples/sec: 5.55 - lr: 0.000043\n",
      "2021-09-12 18:52:06,224 epoch 1 - iter 732/1227 - loss 0.12152383 - samples/sec: 5.55 - lr: 0.000040\n",
      "2021-09-12 18:53:34,681 epoch 1 - iter 854/1227 - loss 0.11515834 - samples/sec: 5.52 - lr: 0.000036\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 18:55:02,862 epoch 1 - iter 976/1227 - loss 0.11057719 - samples/sec: 5.53 - lr: 0.000033\n",
      "2021-09-12 18:56:34,143 epoch 1 - iter 1098/1227 - loss 0.10363438 - samples/sec: 5.35 - lr: 0.000029\n",
      "2021-09-12 18:58:03,972 epoch 1 - iter 1220/1227 - loss 0.09958527 - samples/sec: 5.43 - lr: 0.000025\n",
      "2021-09-12 18:58:08,935 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:58:08,936 EPOCH 1 done: loss 0.0992 - lr 0.0000250\n",
      "2021-09-12 18:58:16,148 DEV : loss 0.09369111061096191 - f1-score (micro avg)  0.9211\n",
      "2021-09-12 18:58:16,151 BAD EPOCHS (no improvement): 4\n",
      "2021-09-12 18:58:16,152 saving best model\n",
      "2021-09-12 18:58:16,518 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 18:59:46,421 epoch 2 - iter 122/1227 - loss 0.02819291 - samples/sec: 5.43 - lr: 0.000021\n",
      "2021-09-12 19:01:20,948 epoch 2 - iter 244/1227 - loss 0.02266050 - samples/sec: 5.16 - lr: 0.000017\n",
      "2021-09-12 19:02:55,542 epoch 2 - iter 366/1227 - loss 0.02593210 - samples/sec: 5.16 - lr: 0.000014\n",
      "2021-09-12 19:04:28,483 epoch 2 - iter 488/1227 - loss 0.02992876 - samples/sec: 5.25 - lr: 0.000010\n",
      "2021-09-12 19:05:56,907 epoch 2 - iter 610/1227 - loss 0.02731942 - samples/sec: 5.52 - lr: 0.000007\n",
      "2021-09-12 19:07:27,164 epoch 2 - iter 732/1227 - loss 0.02548668 - samples/sec: 5.41 - lr: 0.000005\n",
      "2021-09-12 19:08:53,888 epoch 2 - iter 854/1227 - loss 0.02280563 - samples/sec: 5.63 - lr: 0.000003\n",
      "2021-09-12 19:10:19,244 epoch 2 - iter 976/1227 - loss 0.02161421 - samples/sec: 5.72 - lr: 0.000001\n",
      "2021-09-12 19:11:45,713 epoch 2 - iter 1098/1227 - loss 0.02129824 - samples/sec: 5.64 - lr: 0.000000\n",
      "2021-09-12 19:13:13,121 epoch 2 - iter 1220/1227 - loss 0.02018971 - samples/sec: 5.58 - lr: 0.000000\n",
      "2021-09-12 19:13:18,135 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 19:13:18,136 EPOCH 2 done: loss 0.0201 - lr 0.0000000\n",
      "2021-09-12 19:13:25,093 DEV : loss 0.06933436542749405 - f1-score (micro avg)  0.945\n",
      "2021-09-12 19:13:25,095 BAD EPOCHS (no improvement): 4\n",
      "2021-09-12 19:13:25,097 saving best model\n",
      "2021-09-12 19:13:25,809 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 19:13:25,810 loading file resources\\taggers\\question-classification-with-transformer\\best-model.pt\n",
      "2021-09-12 19:13:39,103 0.97\t0.97\t0.97\t0.97\n",
      "2021-09-12 19:13:39,103 \n",
      "Results:\n",
      "- F-score (micro) 0.97\n",
      "- F-score (macro) 0.9654\n",
      "- Accuracy 0.97\n",
      "\n",
      "By class:\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "        DESC     0.9645    0.9855    0.9749       138\n",
      "         NUM     0.9655    0.9912    0.9782       113\n",
      "        ENTY     0.9770    0.9043    0.9392        94\n",
      "         LOC     0.9756    0.9877    0.9816        81\n",
      "         HUM     0.9697    0.9846    0.9771        65\n",
      "        ABBR     1.0000    0.8889    0.9412         9\n",
      "\n",
      "   micro avg     0.9700    0.9700    0.9700       500\n",
      "   macro avg     0.9754    0.9570    0.9654       500\n",
      "weighted avg     0.9702    0.9700    0.9697       500\n",
      " samples avg     0.9700    0.9700    0.9700       500\n",
      "\n",
      "2021-09-12 19:13:39,104 ----------------------------------------------------------------------------------------------------\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'test_score': 0.97,\n",
       " 'dev_score_history': [0.9211009174311927, 0.944954128440367],\n",
       " 'train_loss_history': [0.09920649923087896, 0.020081056068094575],\n",
       " 'dev_loss_history': [tensor(0.0937), tensor(0.0693)]}"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "from torch.optim.lr_scheduler import OneCycleLR\n",
    "\n",
    "from flair.data import Corpus\n",
    "from flair.datasets import TREC_6\n",
    "from flair.embeddings import TransformerDocumentEmbeddings\n",
    "from flair.models import TextClassifier\n",
    "from flair.trainers import ModelTrainer\n",
    "\n",
    "# 1. get the corpus # 获取语料库\n",
    "corpus: Corpus = TREC_6()\n",
    "\n",
    "# 2. what label do we want to predict? # 预测标签类型\n",
    "label_type = 'question_class'\n",
    "\n",
    "# 3. create the label dictionary\n",
    "label_dict = corpus.make_label_dictionary(label_type=label_type)\n",
    "\n",
    "# 4. initialize transformer document embeddings (many models are available)\n",
    "document_embeddings = TransformerDocumentEmbeddings('distilbert-base-uncased', fine_tune=True)\n",
    "\n",
    "# 5. create the text classifier # 建立文本分类器\n",
    "classifier = TextClassifier(document_embeddings, label_dictionary=label_dict, label_type=label_type)\n",
    "\n",
    "# 6. initialize trainer with AdamW optimizer # 初始化训练器\n",
    "trainer = ModelTrainer(classifier, corpus, optimizer=torch.optim.AdamW)\n",
    "\n",
    "# 7. run training with fine-tuning  # 跑具有transformer微调网络的训练任务\n",
    "trainer.train('resources/taggers/question-classification-with-transformer',\n",
    "              learning_rate=5.0e-5,\n",
    "              mini_batch_size=4,\n",
    "#               max_epochs=10,\n",
    "              max_epochs=2,\n",
    "              scheduler=OneCycleLR, # 调节学习率的\n",
    "              embeddings_storage_mode='none',  # embedding数据的存储地，cpu或gpu\n",
    "              weight_decay=0.,\n",
    "              )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7ebcd46a",
   "metadata": {},
   "source": [
    "一旦模型被训练，你就可以加载它来预测新句子的类别。只需调用模型的predict方法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "8e362ee1",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T11:57:59.276651Z",
     "start_time": "2021-09-12T11:57:50.134707Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 19:57:50,135 loading file resources/taggers/question-classification-with-transformer/final-model.pt\n",
      "[HUM (1.0)]\n"
     ]
    }
   ],
   "source": [
    "from flair.data import Sentence\n",
    "# 加载预训练模型\n",
    "classifier = TextClassifier.load('resources/taggers/question-classification-with-transformer/final-model.pt')\n",
    "\n",
    "# create example sentence\n",
    "sentence = Sentence('Who built the Eiffel Tower ?')\n",
    "\n",
    "# predict class and print # 预测句子类型\n",
    "classifier.predict(sentence)\n",
    "# 打印句子类型\n",
    "print(sentence.labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c28d2a1e",
   "metadata": {},
   "source": [
    "## Multi-Dataset Training（多数据集训练）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "851190b8",
   "metadata": {},
   "source": [
    "现在，让我们训练一个模型，它可以用英语和德语对文本进行PoS标记（词性标记）。为此，我们加载英语和德语的UD（通用）语料库并创建一个MultiCorpus对象。我们还使用了新的多语言Flair嵌入来完成这项任务。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "33fbf8d8",
   "metadata": {},
   "source": [
    "其余的都和以前一样。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "99926571",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T12:30:26.062087Z",
     "start_time": "2021-09-12T11:58:10.517345Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 19:58:10,522 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\ud_english\n",
      "2021-09-12 19:58:10,523 Train: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-train.conllu\n",
      "2021-09-12 19:58:10,524 Dev: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-dev.conllu\n",
      "2021-09-12 19:58:10,524 Test: C:\\Users\\sunbe\\.flair\\datasets\\ud_english\\en_ewt-ud-test.conllu\n",
      "2021-09-12 19:58:21,848 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\ud_german\n",
      "2021-09-12 19:58:21,848 Train: C:\\Users\\sunbe\\.flair\\datasets\\ud_german\\de_gsd-ud-train.conllu\n",
      "2021-09-12 19:58:21,849 Dev: C:\\Users\\sunbe\\.flair\\datasets\\ud_german\\de_gsd-ud-dev.conllu\n",
      "2021-09-12 19:58:21,850 Test: C:\\Users\\sunbe\\.flair\\datasets\\ud_german\\de_gsd-ud-test.conllu\n",
      "2021-09-12 19:58:31,534 Computing label dictionary. Progress:\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|███████████████████████████████████████████████████████████████████████████| 2636/2636 [00:00<00:00, 20961.46it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 19:58:31,663 Corpus contains the labels: upos (#45820), lemma (#45819), pos (#45819), dependency (#45819), number (#21555), case (#14242), gender (#12885), prontype (#7930), verbform (#6181), person (#4888), definite (#4627), tense (#3916), mood (#3543), degree (#1295), numtype (#1090), poss (#534), voice (#411), foreign (#281), reflex (#160), number[psor] (#157), gender[psor] (#134), polarity (#117), typo (#25), abbr (#11), style (#2)\n",
      "2021-09-12 19:58:31,664 Created (for label 'upos') Dictionary with 17 tags: PROPN, AUX, PART, VERB, ADV, ADP, PRON, NOUN, PUNCT, SCONJ, ADJ, NUM, CCONJ, DET, X, INTJ, SYM\n",
      "Dictionary with 17 tags: PROPN, AUX, PART, VERB, ADV, ADP, PRON, NOUN, PUNCT, SCONJ, ADJ, NUM, CCONJ, DET, X, INTJ, SYM\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 19:58:32,836 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 19:58:32,837 Model: \"SequenceTagger(\n",
      "  (embeddings): StackedEmbeddings(\n",
      "    (list_embedding_0): FlairEmbeddings(\n",
      "      (lm): LanguageModel(\n",
      "        (drop): Dropout(p=0.1, inplace=False)\n",
      "        (encoder): Embedding(11854, 100)\n",
      "        (rnn): LSTM(100, 2048)\n",
      "        (decoder): Linear(in_features=2048, out_features=11854, bias=True)\n",
      "      )\n",
      "    )\n",
      "    (list_embedding_1): FlairEmbeddings(\n",
      "      (lm): LanguageModel(\n",
      "        (drop): Dropout(p=0.1, inplace=False)\n",
      "        (encoder): Embedding(11854, 100)\n",
      "        (rnn): LSTM(100, 2048)\n",
      "        (decoder): Linear(in_features=2048, out_features=11854, bias=True)\n",
      "      )\n",
      "    )\n",
      "  )\n",
      "  (word_dropout): WordDropout(p=0.05)\n",
      "  (locked_dropout): LockedDropout(p=0.5)\n",
      "  (embedding2nn): Linear(in_features=4096, out_features=4096, bias=True)\n",
      "  (rnn): LSTM(4096, 256, batch_first=True, bidirectional=True)\n",
      "  (linear): Linear(in_features=512, out_features=19, bias=True)\n",
      "  (beta): 1.0\n",
      "  (weights): None\n",
      "  (weight_tensor) None\n",
      ")\"\n",
      "2021-09-12 19:58:32,838 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 19:58:32,839 Corpus: \"MultiCorpus: 2636 train + 280 dev + 305 test sentences\n",
      " - UD_ENGLISH Corpus: 12543 train + 2001 dev + 2077 test sentences - C:\\Users\\sunbe\\.flair\\datasets\\ud_english\n",
      " - UD_GERMAN Corpus: 13814 train + 799 dev + 977 test sentences - C:\\Users\\sunbe\\.flair\\datasets\\ud_german\"\n",
      "2021-09-12 19:58:32,839 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 19:58:32,840 Parameters:\n",
      "2021-09-12 19:58:32,840  - learning_rate: \"0.1\"\n",
      "2021-09-12 19:58:32,841  - mini_batch_size: \"32\"\n",
      "2021-09-12 19:58:32,841  - patience: \"3\"\n",
      "2021-09-12 19:58:32,842  - anneal_factor: \"0.5\"\n",
      "2021-09-12 19:58:32,842  - max_epochs: \"1\"\n",
      "2021-09-12 19:58:32,842  - shuffle: \"True\"\n",
      "2021-09-12 19:58:32,843  - train_with_dev: \"False\"\n",
      "2021-09-12 19:58:32,843  - batch_growth_annealing: \"False\"\n",
      "2021-09-12 19:58:32,844 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 19:58:32,844 Model training base path: \"resources\\taggers\\example-universal-pos\"\n",
      "2021-09-12 19:58:32,845 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 19:58:32,845 Device: cpu\n",
      "2021-09-12 19:58:32,845 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 19:58:32,846 Embeddings storage mode: cpu\n",
      "2021-09-12 19:58:32,848 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 19:59:51,751 epoch 1 - iter 8/83 - loss 3.12990192 - samples/sec: 3.24 - lr: 0.100000\n",
      "2021-09-12 20:01:12,851 epoch 1 - iter 16/83 - loss 2.88331185 - samples/sec: 3.16 - lr: 0.100000\n",
      "2021-09-12 20:02:43,193 epoch 1 - iter 24/83 - loss 2.64417426 - samples/sec: 2.84 - lr: 0.100000\n",
      "2021-09-12 20:03:51,166 epoch 1 - iter 32/83 - loss 2.46559140 - samples/sec: 3.77 - lr: 0.100000\n",
      "2021-09-12 20:05:02,072 epoch 1 - iter 40/83 - loss 2.31242145 - samples/sec: 3.61 - lr: 0.100000\n",
      "2021-09-12 20:06:11,403 epoch 1 - iter 48/83 - loss 2.21518706 - samples/sec: 3.70 - lr: 0.100000\n",
      "2021-09-12 20:07:42,715 epoch 1 - iter 56/83 - loss 2.08393968 - samples/sec: 2.81 - lr: 0.100000\n",
      "2021-09-12 20:09:20,485 epoch 1 - iter 64/83 - loss 1.96040083 - samples/sec: 2.62 - lr: 0.100000\n",
      "2021-09-12 20:10:57,948 epoch 1 - iter 72/83 - loss 1.85018150 - samples/sec: 2.63 - lr: 0.100000\n",
      "2021-09-12 20:12:33,912 epoch 1 - iter 80/83 - loss 1.74801418 - samples/sec: 2.67 - lr: 0.100000\n",
      "2021-09-12 20:13:03,281 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:13:03,282 EPOCH 1 done: loss 1.7188 - lr 0.1000000\n",
      "2021-09-12 20:14:22,921 DEV : loss 1.0206539630889893 - f1-score (micro avg)  0.6915\n",
      "2021-09-12 20:14:22,935 BAD EPOCHS (no improvement): 0\n",
      "2021-09-12 20:14:22,938 saving best model\n",
      "2021-09-12 20:14:24,400 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:14:24,406 loading file resources\\taggers\\example-universal-pos\\best-model.pt\n",
      "2021-09-12 20:16:15,260 0.6824\t0.6824\t0.6824\t0.6824\n",
      "2021-09-12 20:16:15,262 \n",
      "Results:\n",
      "- F-score (micro) 0.6824\n",
      "- F-score (macro) 0.4598\n",
      "- Accuracy 0.6824\n",
      "\n",
      "By class:\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "        NOUN     0.7803    0.7160    0.7468       764\n",
      "       PUNCT     0.9388    0.9405    0.9396       538\n",
      "       PROPN     0.3748    0.8416    0.5187       322\n",
      "        VERB     0.5976    0.7692    0.6726       390\n",
      "         DET     0.8872    0.8655    0.8762       409\n",
      "         ADP     0.9260    0.8204    0.8700       412\n",
      "         ADJ     0.4783    0.4400    0.4583       300\n",
      "        PRON     0.7710    0.6623    0.7125       305\n",
      "         AUX     0.5952    0.5365    0.5643       233\n",
      "         ADV     0.4354    0.2591    0.3249       247\n",
      "       CCONJ     0.5000    0.6984    0.5828       126\n",
      "         NUM     0.9643    0.2872    0.4426        94\n",
      "        PART     0.0000    0.0000    0.0000        81\n",
      "       SCONJ     0.6667    0.0580    0.1067        69\n",
      "           X     0.0000    0.0000    0.0000        18\n",
      "        INTJ     0.0000    0.0000    0.0000        14\n",
      "         SYM     0.0000    0.0000    0.0000        13\n",
      "\n",
      "   micro avg     0.6824    0.6824    0.6824      4335\n",
      "   macro avg     0.5244    0.4644    0.4598      4335\n",
      "weighted avg     0.6976    0.6824    0.6716      4335\n",
      " samples avg     0.6824    0.6824    0.6824      4335\n",
      "\n",
      "2021-09-12 20:16:15,263 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:16:15,265 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:25:08,716 C:\\Users\\sunbe\\.flair\\datasets\\ud_english\n",
      "2021-09-12 20:25:08,718 0.6596\t0.6596\t0.6596\t0.6596\n",
      "2021-09-12 20:25:08,719 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:30:25,985 C:\\Users\\sunbe\\.flair\\datasets\\ud_german\n",
      "2021-09-12 20:30:25,987 0.7349\t0.7349\t0.7349\t0.7349\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'test_score': 0.6823529411764706,\n",
       " 'dev_score_history': [0.6915153158037408],\n",
       " 'train_loss_history': [1.7188331359065359],\n",
       " 'dev_loss_history': [tensor(1.0207)]}"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from flair.data import MultiCorpus\n",
    "from flair.datasets import UD_ENGLISH, UD_GERMAN\n",
    "from flair.embeddings import FlairEmbeddings, StackedEmbeddings\n",
    "from flair.models import SequenceTagger\n",
    "from flair.trainers import ModelTrainer\n",
    "\n",
    "# 1. get the corpora - English and German UD # 获取通用英语和德语的语料集\n",
    "corpus = MultiCorpus([UD_ENGLISH(), UD_GERMAN()]).downsample(0.1)\n",
    "\n",
    "# 2. what label do we want to predict? # 目标标签类型为，通用词性标记\n",
    "label_type = 'upos'\n",
    "\n",
    "# 3. make the label dictionary from the corpus # 获得标签字典\n",
    "label_dict = corpus.make_label_dictionary(label_type=label_type)\n",
    "print(label_dict)\n",
    "\n",
    "# 4. initialize embeddings\n",
    "embedding_types = [\n",
    "    # we use multilingual Flair embeddings in this task # 使用多语言Flair embedding\n",
    "    FlairEmbeddings('multi-forward'),\n",
    "    FlairEmbeddings('multi-backward'),\n",
    "]\n",
    "\n",
    "embeddings = StackedEmbeddings(embeddings=embedding_types)\n",
    "\n",
    "# 5. initialize sequence tagger\n",
    "tagger = SequenceTagger(hidden_size=256,\n",
    "                        embeddings=embeddings,\n",
    "                        tag_dictionary=label_dict,\n",
    "                        tag_type=label_type,\n",
    "                        use_crf=True)\n",
    "\n",
    "# 6. initialize trainer\n",
    "trainer = ModelTrainer(tagger, corpus)\n",
    "\n",
    "# 7. start training\n",
    "trainer.train('resources/taggers/example-universal-pos',\n",
    "              learning_rate=0.1,\n",
    "              mini_batch_size=32,\n",
    "#               max_epochs=150,\n",
    "              max_epochs=1,\n",
    "              write_weights = True,\n",
    "              )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e2b9a7ca",
   "metadata": {},
   "source": [
    "这为您提供了一个多语言模型。尝试尝试更多的语言!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "148fddea",
   "metadata": {},
   "source": [
    "## Plotting Training Curves and Weights （绘制训练曲线和权重）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6358de9a",
   "metadata": {},
   "source": [
    "Flair包括一种辅助方法来绘制神经网络的训练曲线和权重。ModelTrainer会自动生成一个loss.tsv在结果文件夹中。如果在训练期间设置write_weights=True，它也将生成weight.txt文件。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "ea92de2b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T12:36:01.070458Z",
     "start_time": "2021-09-12T12:36:01.043799Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "ename": "FileNotFoundError",
     "evalue": "[Errno 2] No such file or directory: 'loss.tsv'",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mFileNotFoundError\u001b[0m                         Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-35-c13baf282a47>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m     10\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     11\u001b[0m \u001b[0mplotter\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mPlotter\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 12\u001b[1;33m \u001b[0mplotter\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mplot_training_curves\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m'loss.tsv'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     13\u001b[0m \u001b[0mplotter\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mplot_weights\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m'weights.txt'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\visual\\training_curves.py\u001b[0m in \u001b[0;36mplot_training_curves\u001b[1;34m(self, file_name, plot_values)\u001b[0m\n\u001b[0;32m    178\u001b[0m         \u001b[1;32mfor\u001b[0m \u001b[0mplot_no\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mplot_value\u001b[0m \u001b[1;32min\u001b[0m \u001b[0menumerate\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mplot_values\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    179\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 180\u001b[1;33m             \u001b[0mtraining_curves\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_extract_evaluation_data\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfile_name\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mplot_value\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    181\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    182\u001b[0m             \u001b[0mplt\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0msubplot\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mlen\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mplot_values\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;36m1\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mplot_no\u001b[0m \u001b[1;33m+\u001b[0m \u001b[1;36m1\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\visual\\training_curves.py\u001b[0m in \u001b[0;36m_extract_evaluation_data\u001b[1;34m(file_name, score)\u001b[0m\n\u001b[0;32m     38\u001b[0m         }\n\u001b[0;32m     39\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 40\u001b[1;33m         \u001b[1;32mwith\u001b[0m \u001b[0mopen\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfile_name\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;34m\"r\"\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0mtsvin\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     41\u001b[0m             \u001b[0mtsvin\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mcsv\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mreader\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mtsvin\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mdelimiter\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;34m\"\\t\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     42\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mFileNotFoundError\u001b[0m: [Errno 2] No such file or directory: 'loss.tsv'"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<Figure size 1080x720 with 0 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# # set write_weights to True to write weights\n",
    "# trainer.train('resources/taggers/example-universal-pos',\n",
    "#               ...\n",
    "# write_weights = True,\n",
    "#                 ...\n",
    "# )\n",
    "\n",
    "# visualize # 可视化损失和权重\n",
    "from flair.visual.training_curves import Plotter\n",
    "\n",
    "plotter = Plotter()\n",
    "# loss.tsv and weights.txt live in the model's training folder, so point the plotter there\n",
    "plotter.plot_training_curves('resources/taggers/example-universal-pos/loss.tsv')\n",
    "plotter.plot_weights('resources/taggers/example-universal-pos/weights.txt')\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "194ead45",
   "metadata": {},
   "outputs": [],
   "source": [
    "这将在结果文件夹中生成PNG图。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e53771ab",
   "metadata": {},
   "source": [
    "## Resuming Training（恢复训练）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4fd4774f",
   "metadata": {},
   "source": [
    "如果您想在某个点停止训练并在稍后的点恢复训练，则应该将参数checkpoint设置为True。这将在每个epoch之后保存模型和训练参数。因此，您可以在任何以后的点加载模型和训练器，并在您离开的地方继续训练。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "69819213",
   "metadata": {},
   "source": [
    "下面的示例代码展示了如何训练、停止和继续训练SequenceTagger。TextClassifier也可以这样做。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "a342d39d",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T12:48:54.392217Z",
     "start_time": "2021-09-12T12:48:50.279380Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 20:48:50,284 Reading data from C:\\Users\\sunbe\\.flair\\datasets\\wnut_17\n",
      "2021-09-12 20:48:50,286 Train: C:\\Users\\sunbe\\.flair\\datasets\\wnut_17\\wnut17train.conll\n",
      "2021-09-12 20:48:50,286 Dev: C:\\Users\\sunbe\\.flair\\datasets\\wnut_17\\emerging.dev.conll\n",
      "2021-09-12 20:48:50,287 Test: C:\\Users\\sunbe\\.flair\\datasets\\wnut_17\\emerging.test.annotated\n",
      "2021-09-12 20:48:51,060 Computing label dictionary. Progress:\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|█████████████████████████████████████████████████████████████████████████████| 339/339 [00:00<00:00, 20559.12it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 20:48:51,080 Corpus contains the labels: ner (#6410)\n",
      "2021-09-12 20:48:51,081 Created (for label 'ner') Dictionary with 22 tags: O, S-product, S-person, S-corporation, B-corporation, I-corporation, E-corporation, B-group, I-group, E-group, S-location, B-location, E-location, B-person, E-person, S-group, B-creative-work, I-creative-work, E-creative-work, S-creative-work, I-location, I-person\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-09-12 20:48:51,510 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:48:51,511 Model: \"SequenceTagger(\n",
      "  (embeddings): StackedEmbeddings(\n",
      "    (list_embedding_0): WordEmbeddings('glove')\n",
      "  )\n",
      "  (word_dropout): WordDropout(p=0.05)\n",
      "  (locked_dropout): LockedDropout(p=0.5)\n",
      "  (embedding2nn): Linear(in_features=100, out_features=100, bias=True)\n",
      "  (rnn): LSTM(100, 256, batch_first=True, bidirectional=True)\n",
      "  (linear): Linear(in_features=512, out_features=24, bias=True)\n",
      "  (beta): 1.0\n",
      "  (weights): None\n",
      "  (weight_tensor) None\n",
      ")\"\n",
      "2021-09-12 20:48:51,512 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:48:51,512 Corpus: \"Corpus: 339 train + 101 dev + 129 test sentences\"\n",
      "2021-09-12 20:48:51,513 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:48:51,513 Parameters:\n",
      "2021-09-12 20:48:51,514  - learning_rate: \"0.1\"\n",
      "2021-09-12 20:48:51,515  - mini_batch_size: \"32\"\n",
      "2021-09-12 20:48:51,515  - patience: \"3\"\n",
      "2021-09-12 20:48:51,516  - anneal_factor: \"0.5\"\n",
      "2021-09-12 20:48:51,516  - max_epochs: \"2\"\n",
      "2021-09-12 20:48:51,516  - shuffle: \"True\"\n",
      "2021-09-12 20:48:51,517  - train_with_dev: \"False\"\n",
      "2021-09-12 20:48:51,518  - batch_growth_annealing: \"False\"\n",
      "2021-09-12 20:48:51,519 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:48:51,519 Model training base path: \"resources\\taggers\\example-ner\"\n",
      "2021-09-12 20:48:51,520 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:48:51,520 Device: cpu\n",
      "2021-09-12 20:48:51,521 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:48:51,521 Embeddings storage mode: cpu\n",
      "2021-09-12 20:48:51,523 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:48:51,814 epoch 1 - iter 1/11 - loss 3.71619591 - samples/sec: 110.64 - lr: 0.100000\n",
      "2021-09-12 20:48:52,066 epoch 1 - iter 2/11 - loss 3.33022650 - samples/sec: 127.42 - lr: 0.100000\n",
      "2021-09-12 20:48:52,304 epoch 1 - iter 3/11 - loss 2.87843786 - samples/sec: 134.82 - lr: 0.100000\n",
      "2021-09-12 20:48:52,573 epoch 1 - iter 4/11 - loss 2.44662376 - samples/sec: 120.02 - lr: 0.100000\n",
      "2021-09-12 20:48:52,800 epoch 1 - iter 5/11 - loss 2.08935536 - samples/sec: 141.74 - lr: 0.100000\n",
      "2021-09-12 20:48:53,090 epoch 1 - iter 6/11 - loss 1.78017261 - samples/sec: 110.70 - lr: 0.100000\n",
      "2021-09-12 20:48:53,335 epoch 1 - iter 7/11 - loss 1.61542913 - samples/sec: 131.15 - lr: 0.100000\n",
      "2021-09-12 20:48:53,600 epoch 1 - iter 8/11 - loss 1.48927635 - samples/sec: 121.23 - lr: 0.100000\n",
      "2021-09-12 20:48:53,835 epoch 1 - iter 9/11 - loss 1.38885790 - samples/sec: 136.85 - lr: 0.100000\n",
      "2021-09-12 20:48:54,137 epoch 1 - iter 10/11 - loss 1.28605602 - samples/sec: 106.46 - lr: 0.100000\n",
      "2021-09-12 20:48:54,320 epoch 1 - iter 11/11 - loss 1.25192265 - samples/sec: 175.49 - lr: 0.100000\n",
      "2021-09-12 20:48:54,321 ----------------------------------------------------------------------------------------------------\n",
      "2021-09-12 20:48:54,322 EPOCH 1 done: loss 1.2519 - lr 0.1000000\n",
      "2021-09-12 20:48:54,355 The string 'B-product' is not in dictionary! Dictionary contains only: ['O', 'S-product', 'S-person', 'S-corporation', 'B-corporation', 'I-corporation', 'E-corporation', 'B-group', 'I-group', 'E-group', 'S-location', 'B-location', 'E-location', 'B-person', 'E-person', 'S-group', 'B-creative-work', 'I-creative-work', 'E-creative-work', 'S-creative-work', 'I-location', 'I-person', '<START>', '<STOP>']\n",
      "2021-09-12 20:48:54,356 You can create a Dictionary that handles unknown items with an <unk>-key by setting add_unk = True in the construction.\n"
     ]
    },
    {
     "ename": "IndexError",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mIndexError\u001b[0m                                Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-38-ba0a48f4cf88>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m     33\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     34\u001b[0m \u001b[1;31m# 7. start training   # 开始训练\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 35\u001b[1;33m trainer.train('resources/taggers/example-ner',\n\u001b[0m\u001b[0;32m     36\u001b[0m               \u001b[0mlearning_rate\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;36m0.1\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     37\u001b[0m               \u001b[0mmini_batch_size\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;36m32\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\trainers\\trainer.py\u001b[0m in \u001b[0;36mtrain\u001b[1;34m(self, base_path, learning_rate, mini_batch_size, mini_batch_chunk_size, max_epochs, scheduler, cycle_momentum, anneal_factor, patience, initial_extra_patience, min_learning_rate, train_with_dev, train_with_test, monitor_train, monitor_test, embeddings_storage_mode, checkpoint, save_final_model, anneal_with_restarts, anneal_with_prestarts, anneal_against_dev_loss, batch_growth_annealing, shuffle, param_selection_mode, write_weights, num_workers, sampler, use_amp, amp_opt_level, eval_on_train_fraction, eval_on_train_shuffle, save_model_each_k_epochs, main_evaluation_metric, tensorboard_comment, save_best_checkpoints, use_swa, use_final_model_for_eval, gold_label_dictionary_for_eval, **kwargs)\u001b[0m\n\u001b[0;32m    523\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    524\u001b[0m                 \u001b[1;32mif\u001b[0m \u001b[0mlog_dev\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 525\u001b[1;33m                     dev_eval_result = self.model.evaluate(\n\u001b[0m\u001b[0;32m    526\u001b[0m                         \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mcorpus\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdev\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    527\u001b[0m                         \u001b[0mgold_label_type\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmodel\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mlabel_type\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\nn\\model.py\u001b[0m in \u001b[0;36mevaluate\u001b[1;34m(self, data_points, gold_label_type, out_path, embedding_storage_mode, mini_batch_size, num_workers, main_evaluation_metric, exclude_labels, gold_label_dictionary)\u001b[0m\n\u001b[0;32m    159\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    160\u001b[0m                 \u001b[1;31m# predict for batch\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 161\u001b[1;33m                 loss_and_count = self.predict(batch,\n\u001b[0m\u001b[0;32m    162\u001b[0m                                               \u001b[0membedding_storage_mode\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0membedding_storage_mode\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    163\u001b[0m                                               \u001b[0mmini_batch_size\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mmini_batch_size\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\models\\sequence_tagger_model.py\u001b[0m in \u001b[0;36mpredict\u001b[1;34m(self, sentences, mini_batch_size, all_tag_prob, verbose, label_name, return_loss, embedding_storage_mode)\u001b[0m\n\u001b[0;32m    368\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    369\u001b[0m                 \u001b[1;32mif\u001b[0m \u001b[0mreturn_loss\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 370\u001b[1;33m                     \u001b[0mloss_and_count\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_calculate_loss\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfeature\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mbatch\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    371\u001b[0m                     \u001b[0moverall_loss\u001b[0m \u001b[1;33m+=\u001b[0m \u001b[0mloss_and_count\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    372\u001b[0m                     \u001b[0moverall_count\u001b[0m \u001b[1;33m+=\u001b[0m \u001b[0mloss_and_count\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;36m1\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\models\\sequence_tagger_model.py\u001b[0m in \u001b[0;36m_calculate_loss\u001b[1;34m(self, features, sentences)\u001b[0m\n\u001b[0;32m    523\u001b[0m         \u001b[1;32mfor\u001b[0m \u001b[0ms_id\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0msentence\u001b[0m \u001b[1;32min\u001b[0m \u001b[0menumerate\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0msentences\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    524\u001b[0m             \u001b[1;31m# get the tags in this sentence\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 525\u001b[1;33m             tag_idx: List[int] = [\n\u001b[0m\u001b[0;32m    526\u001b[0m                 \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtag_dictionary\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mget_idx_for_item\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mtoken\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mget_tag\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtag_type\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mvalue\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    527\u001b[0m                 \u001b[1;32mfor\u001b[0m \u001b[0mtoken\u001b[0m \u001b[1;32min\u001b[0m \u001b[0msentence\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\models\\sequence_tagger_model.py\u001b[0m in \u001b[0;36m<listcomp>\u001b[1;34m(.0)\u001b[0m\n\u001b[0;32m    524\u001b[0m             \u001b[1;31m# get the tags in this sentence\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    525\u001b[0m             tag_idx: List[int] = [\n\u001b[1;32m--> 526\u001b[1;33m                 \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtag_dictionary\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mget_idx_for_item\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mtoken\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mget_tag\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtag_type\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mvalue\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    527\u001b[0m                 \u001b[1;32mfor\u001b[0m \u001b[0mtoken\u001b[0m \u001b[1;32min\u001b[0m \u001b[0msentence\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    528\u001b[0m             ]\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\data.py\u001b[0m in \u001b[0;36mget_idx_for_item\u001b[1;34m(self, item)\u001b[0m\n\u001b[0;32m     64\u001b[0m             \u001b[0mlog\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0merror\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34mf\"The string '{item}' is not in dictionary! Dictionary contains only: {self.get_items()}\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     65\u001b[0m             \u001b[0mlog\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0merror\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"You can create a Dictionary that handles unknown items with an <unk>-key by setting add_unk = True in the construction.\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 66\u001b[1;33m             \u001b[1;32mraise\u001b[0m \u001b[0mIndexError\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     67\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     68\u001b[0m     \u001b[1;32mdef\u001b[0m \u001b[0mget_idx_for_items\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mitems\u001b[0m\u001b[1;33m:\u001b[0m \u001b[0mList\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;33m->\u001b[0m \u001b[0mList\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mint\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mIndexError\u001b[0m: "
     ]
    }
   ],
   "source": [
    "from flair.data import Corpus\n",
    "from flair.datasets import WNUT_17\n",
    "from flair.embeddings import TokenEmbeddings, WordEmbeddings, StackedEmbeddings\n",
    "from typing import List\n",
    "from flair.models import SequenceTagger\n",
    "from flair.trainers import ModelTrainer\n",
    "\n",
    "# 1. get the corpus\n",
    "corpus: Corpus = WNUT_17().downsample(0.1)\n",
    "\n",
    "# 2. what label do we want to predict?\n",
    "label_type = 'ner'\n",
    "\n",
    "# 3. make the label dictionary from the corpus\n",
    "label_dict = corpus.make_label_dictionary(label_type=label_type)\n",
    "\n",
    "# 4. initialize embeddings\n",
    "embedding_types: List[TokenEmbeddings] = [\n",
    "    WordEmbeddings('glove')\n",
    "]\n",
    "\n",
    "embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)\n",
    "\n",
    "# 5. initialize sequence tagger\n",
    "tagger: SequenceTagger = SequenceTagger(hidden_size=256,\n",
    "                                        embeddings=embeddings,\n",
    "                                        tag_dictionary=label_dict,\n",
    "                                        tag_type=label_type,\n",
    "                                        use_crf=True)\n",
    "\n",
    "# 6. initialize trainer\n",
    "trainer: ModelTrainer = ModelTrainer(tagger, corpus)\n",
    "\n",
    "# 7. start training, with checkpointing enabled\n",
    "trainer.train('resources/taggers/example-ner',\n",
    "              learning_rate=0.1,\n",
    "              mini_batch_size=32,\n",
    "              max_epochs=2,  # reduced from 10 for this demo\n",
    "              checkpoint=True)\n",
    "\n",
    "# 8. training can be stopped at any point\n",
    "\n",
    "# 9. continue training later from the last checkpoint\n",
    "checkpoint = 'resources/taggers/example-ner/checkpoint.pt'\n",
    "trainer = ModelTrainer.load_checkpoint(checkpoint, corpus)\n",
    "trainer.train('resources/taggers/example-ner',\n",
    "              learning_rate=0.1,\n",
    "              mini_batch_size=32,\n",
    "              max_epochs=2,  # reduced from 150 for this demo\n",
    "              checkpoint=True)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d095978",
   "metadata": {},
   "source": [
    "## Scalability: Training with Large Datasets"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58a12d5f",
   "metadata": {},
   "source": [
    "Many embeddings in Flair are expensive to generate at runtime and can have large vectors. Examples of this are Flair and transformer-based embeddings. Depending on your setup, you can set options to optimize training time."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "401a4c7a",
   "metadata": {},
   "source": [
    "### Setting the Mini-Batch Size"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b341aa49",
   "metadata": {},
   "source": [
    "The most important parameter is mini_batch_size: set it to a higher value if your GPU can handle it, for a good speed-up. However, if your dataset is very small, don't set it too high, otherwise there won't be enough learning steps per epoch."
   ]
  },
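  {
   "cell_type": "markdown",
   "id": "ad5e0001",
   "metadata": {},
   "source": [
    "As a rule of thumb, the number of optimizer steps per epoch is roughly the number of training examples divided by mini_batch_size, so a very large batch size on a small dataset leaves only a handful of updates per epoch. A minimal sketch of this arithmetic (the dataset size below is made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad5e0002",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "# hypothetical dataset size, for illustration only\n",
    "num_train_sentences = 3394\n",
    "\n",
    "def steps_per_epoch(num_examples, mini_batch_size):\n",
    "    # one optimizer step per mini-batch; the last, smaller batch still counts\n",
    "    return math.ceil(num_examples / mini_batch_size)\n",
    "\n",
    "print(steps_per_epoch(num_train_sentences, 32))    # 107 updates per epoch\n",
    "print(steps_per_epoch(num_train_sentences, 2048))  # only 2 updates per epoch"
   ]
  },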
  {
   "cell_type": "markdown",
   "id": "d267c296",
   "metadata": {},
   "source": [
    "A similar parameter is mini_batch_chunk_size: it causes each mini-batch to be further split into chunks, which slows things down but is more GPU-memory-friendly. The standard is to leave this at None (just don't set it); only set it if your GPU cannot handle the desired mini-batch size. Remember that this is the opposite of mini_batch_size, so it will slow down computation."
   ]
  },
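  {
   "cell_type": "markdown",
   "id": "ad5e0003",
   "metadata": {},
   "source": [
    "The chunking idea can be illustrated in plain Python (a sketch of the concept, not Flair's internal implementation): the mini-batch still produces one optimizer step, but the forward pass runs over smaller chunks so less has to fit in GPU memory at once:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad5e0004",
   "metadata": {},
   "outputs": [],
   "source": [
    "def chunked(batch, chunk_size):\n",
    "    # split one mini-batch into smaller chunks for the forward pass\n",
    "    return [batch[i:i + chunk_size] for i in range(0, len(batch), chunk_size)]\n",
    "\n",
    "mini_batch = list(range(32))     # stand-in for 32 sentences\n",
    "chunks = chunked(mini_batch, 8)  # forward pass runs over 4 chunks of 8\n",
    "print(len(chunks))  # 4"
   ]
  },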
  {
   "cell_type": "markdown",
   "id": "a6a6b8fc",
   "metadata": {},
   "source": [
    "### Setting the Storage Mode of Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e7212f4a",
   "metadata": {},
   "source": [
    "The other main parameter you need to set is embeddings_storage_mode in the train() method of the ModelTrainer. It can have one of three values:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8fe5b61",
   "metadata": {},
   "source": [
    "(1) 'none': if you set embeddings_storage_mode='none', embeddings are not stored in memory. Instead they are generated on the fly in each training mini-batch (during training). The main advantage is that this keeps your memory requirements low. Always set this if you are fine-tuning transformers."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "97c20231",
   "metadata": {},
   "source": [
    "(2) 'cpu': if you set embeddings_storage_mode='cpu', embeddings are stored in regular (CPU) memory."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "acdbae61",
   "metadata": {},
   "source": [
    "(3) 'gpu': if you set embeddings_storage_mode='gpu', embeddings are stored in CUDA memory."
   ]
  },
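  {
   "cell_type": "markdown",
   "id": "ad5e0005",
   "metadata": {},
   "source": [
    "For example, a call-signature sketch (reusing the trainer from the NER example above; the path and values are placeholders):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad5e0006",
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch: pass embeddings_storage_mode to ModelTrainer.train()\n",
    "# 'none' = re-compute embeddings each mini-batch (lowest memory; use for transformer fine-tuning)\n",
    "# 'cpu'  = cache embeddings in regular RAM\n",
    "# 'gpu'  = cache embeddings in CUDA memory (fastest, highest memory use)\n",
    "trainer.train('resources/taggers/example-ner',\n",
    "              learning_rate=0.1,\n",
    "              mini_batch_size=32,\n",
    "              max_epochs=2,\n",
    "              embeddings_storage_mode='none')"
   ]
  },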
  {
   "cell_type": "markdown",
   "id": "ce18c344",
   "metadata": {},
   "source": [
    "# Tutorial 9: Training your own Flair Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2e34125a",
   "metadata": {},
   "source": [
    "Flair Embeddings are the secret sauce of Flair, allowing us to achieve state-of-the-art accuracy across a range of NLP tasks. This tutorial shows you how to train your own Flair embeddings, which may come in handy if you want to apply Flair to a new language or domain."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58f83804",
   "metadata": {},
   "source": [
    "## Preparing a Text Corpus"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be613a12",
   "metadata": {},
   "source": [
    "Language models are trained on plain text. In the case of character LMs, we train them to predict the next character in a sequence of characters. To train your own model, you first need to identify a suitably large corpus. In our experiments, we used corpora of about 1 billion words."
   ]
  },
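  {
   "cell_type": "markdown",
   "id": "ad5e0007",
   "metadata": {},
   "source": [
    "The objective can be sketched in plain Python: for each position in the text, the input is the character at that position and the target is the character that follows (a conceptual illustration, not Flair's implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad5e0008",
   "metadata": {},
   "outputs": [],
   "source": [
    "text = 'flair is fun'\n",
    "\n",
    "# shift by one character: input at position i, target is the character at i + 1\n",
    "inputs = list(text[:-1])\n",
    "targets = list(text[1:])\n",
    "\n",
    "for x, y in list(zip(inputs, targets))[:3]:\n",
    "    print(x, '->', y)"
   ]
  },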
  {
   "cell_type": "markdown",
   "id": "851cd6ca",
   "metadata": {},
   "source": [
    "You need to split your corpus into train, validation and test portions. Our trainer class assumes a corpus folder that contains a 'test.txt' and a 'valid.txt' with the test and validation data. Importantly, there is also a folder called 'train' that contains the training data in splits. For instance, the billion-word corpus is split into 100 parts. The splits are necessary if all the data does not fit into memory, in which case the trainer randomly iterates through all splits."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "66a1c3d4",
   "metadata": {},
   "source": [
    "So the folder structure must look like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ca06486d",
   "metadata": {},
   "outputs": [],
   "source": [
    "corpus/\n",
    "corpus/train/\n",
    "corpus/train/train_split_1\n",
    "corpus/train/train_split_2\n",
    "corpus/train/...\n",
    "corpus/train/train_split_X\n",
    "corpus/test.txt\n",
    "corpus/valid.txt"
   ]
  },
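  {
   "cell_type": "markdown",
   "id": "ad5e0009",
   "metadata": {},
   "source": [
    "A small helper can produce such train splits from one large list of sentences (a standard-library sketch; the demo writes to a throwaway temporary directory, and real paths are up to you):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad5e000a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "import tempfile\n",
    "\n",
    "def write_splits(lines, corpus_dir, num_splits):\n",
    "    # distribute the lines of a large corpus over num_splits train files\n",
    "    train_dir = Path(corpus_dir) / 'train'\n",
    "    train_dir.mkdir(parents=True, exist_ok=True)\n",
    "    per_split = -(-len(lines) // num_splits)  # ceiling division\n",
    "    for i in range(num_splits):\n",
    "        chunk = lines[i * per_split:(i + 1) * per_split]\n",
    "        (train_dir / ('train_split_%d' % (i + 1))).write_text('\\n'.join(chunk), encoding='utf-8')\n",
    "\n",
    "# demo on a throwaway directory with made-up sentences\n",
    "demo_dir = tempfile.mkdtemp()\n",
    "lines = ['sentence %d' % i for i in range(10)]\n",
    "write_splits(lines, demo_dir, num_splits=4)"
   ]
  },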
  {
   "cell_type": "markdown",
   "id": "0f1b88df",
   "metadata": {},
   "source": [
    "In most cases, it is recommended to provide the corpus in an unstructured format, without explicit separators for documents or sentences. If you want to make it easier for the LM to recognize document boundaries, you can introduce a separator token such as '[SEP]'."
   ]
  },
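  {
   "cell_type": "markdown",
   "id": "ad5e000b",
   "metadata": {},
   "source": [
    "A plain-Python sketch of inserting such a boundary token when concatenating documents (illustrative only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad5e000c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def with_document_boundaries(documents, sep='[SEP]'):\n",
    "    # insert an explicit boundary token between documents\n",
    "    return ('\\n' + sep + '\\n').join(documents)\n",
    "\n",
    "docs = ['first document ...', 'second document ...']\n",
    "print(with_document_boundaries(docs))"
   ]
  },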
  {
   "cell_type": "markdown",
   "id": "59335056",
   "metadata": {},
   "source": [
    "## Training the Language Model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "260c7361",
   "metadata": {},
   "source": [
    "Once you have this folder structure, simply point the LanguageModelTrainer class at it to start learning a model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "fddd9b19",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T12:54:31.198091Z",
     "start_time": "2021-09-12T12:54:31.175804Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "ename": "AssertionError",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mAssertionError\u001b[0m                            Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-40-2b2d792ca9d6>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m     10\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     11\u001b[0m \u001b[1;31m# get your corpus, process forward and at the character level # 准备好你的语料库，向前推进，在字符层面\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 12\u001b[1;33m corpus = TextCorpus('/path/to/your/corpus',\n\u001b[0m\u001b[0;32m     13\u001b[0m                     \u001b[0mdictionary\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     14\u001b[0m                     \u001b[0mis_forward_lm\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\trainers\\language_model_trainer.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, path, dictionary, forward, character_level, random_case_flip, document_delimiter)\u001b[0m\n\u001b[0;32m    117\u001b[0m             \u001b[0mpath\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mPath\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mpath\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    118\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 119\u001b[1;33m         self.train = TextDataset(\n\u001b[0m\u001b[0;32m    120\u001b[0m             \u001b[0mpath\u001b[0m \u001b[1;33m/\u001b[0m \u001b[1;34m\"train\"\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    121\u001b[0m             \u001b[0mdictionary\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\trainers\\language_model_trainer.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, path, dictionary, expand_vocab, forward, split_on_char, random_case_flip, document_delimiter, shuffle)\u001b[0m\n\u001b[0;32m     37\u001b[0m         \u001b[1;32mif\u001b[0m \u001b[0mtype\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mpath\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;32mis\u001b[0m \u001b[0mstr\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     38\u001b[0m             \u001b[0mpath\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mPath\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mpath\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 39\u001b[1;33m         \u001b[1;32massert\u001b[0m \u001b[0mpath\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mexists\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     40\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     41\u001b[0m         \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfiles\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mAssertionError\u001b[0m: "
     ]
    }
   ],
   "source": [
    "from flair.data import Dictionary\n",
    "from flair.models import LanguageModel\n",
    "from flair.trainers.language_model_trainer import LanguageModelTrainer, TextCorpus\n",
    "\n",
    "# are you training a forward or backward LM?\n",
    "is_forward_lm = True\n",
    "\n",
    "# load the default character dictionary\n",
    "dictionary: Dictionary = Dictionary.load('chars')\n",
    "\n",
    "# get your corpus, process forward and at the character level\n",
    "corpus = TextCorpus('/path/to/your/corpus',\n",
    "                    dictionary,\n",
    "                    is_forward_lm,\n",
    "                    character_level=True).downsample(0.1)  # use 10% of the data\n",
    "\n",
    "# instantiate your language model, set hidden size and number of layers\n",
    "language_model = LanguageModel(dictionary,\n",
    "                               is_forward_lm,\n",
    "                               hidden_size=128,\n",
    "                               nlayers=1)\n",
    "\n",
    "# train your language model\n",
    "trainer = LanguageModelTrainer(language_model, corpus)\n",
    "trainer.train('resources/taggers/language_model',\n",
    "              sequence_length=10,\n",
    "              mini_batch_size=10,\n",
    "              max_epochs=2)  # reduced from 10 for this demo\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d0f75c17",
   "metadata": {},
   "source": [
    "The parameters in this script are very small. We got good results with a hidden size of 1024 or 2048, a sequence length of 250, and a mini-batch size of 100. Depending on your resources, you can try training large models, but beware that you need a very powerful GPU and a lot of time to train a model (we trained ours for more than a week)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "61746c8a",
   "metadata": {},
   "source": [
    "## Using the LM as Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "55f21e49",
   "metadata": {},
   "source": [
    "Once you have the trained LM, using it as embeddings is easy. Just load the model into the FlairEmbeddings class and use it like any other embedding in Flair:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "5d08bb6c",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T12:55:24.563380Z",
     "start_time": "2021-09-12T12:55:24.534147Z"
    },
    "scrolled": false
   },
   "outputs": [
    {
     "ename": "ValueError",
     "evalue": "The given model \"resources/taggers/language_model/best-lm.pt\" is not available or is not a valid path.",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mValueError\u001b[0m                                Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-41-f58e22cb0ac1>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m      2\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      3\u001b[0m \u001b[1;31m# init embeddings from your trained LM\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 4\u001b[1;33m \u001b[0mchar_lm_embeddings\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mFlairEmbeddings\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m'resources/taggers/language_model/best-lm.pt'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m      5\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      6\u001b[0m \u001b[1;31m# embed sentence\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\embeddings\\token.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, model, fine_tune, chars_per_chunk, with_whitespace, tokenized_lm, is_lower)\u001b[0m\n\u001b[0;32m    559\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    560\u001b[0m             \u001b[1;32melif\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[0mPath\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mmodel\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mexists\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 561\u001b[1;33m                 raise ValueError(\n\u001b[0m\u001b[0;32m    562\u001b[0m                     \u001b[1;34mf'The given model \"{model}\" is not available or is not a valid path.'\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    563\u001b[0m                 )\n",
      "\u001b[1;31mValueError\u001b[0m: The given model \"resources/taggers/language_model/best-lm.pt\" is not available or is not a valid path."
     ]
    }
   ],
   "source": [
    "sentence = Sentence('I love Berlin')\n",
    "\n",
    "# init embeddings from your trained LM\n",
    "char_lm_embeddings = FlairEmbeddings('resources/taggers/language_model/best-lm.pt')\n",
    "\n",
    "# embed sentence\n",
    "char_lm_embeddings.embed(sentence)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cd6ed3a3",
   "metadata": {},
   "source": [
    "## Non-Latin Alphabets"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10e5340e",
   "metadata": {},
   "source": [
    "If you train embeddings for a language that uses a non-Latin alphabet, such as Arabic or Japanese, you first need to create your own character dictionary. You can do this with the following code snippet:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a2a2531f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# make an empty character dictionary\n",
    "from flair.data import Dictionary\n",
    "char_dictionary: Dictionary = Dictionary()\n",
    "\n",
    "# counter object to tally character frequencies\n",
    "import collections\n",
    "counter = collections.Counter()\n",
    "\n",
    "processed = 0\n",
    "\n",
    "import glob\n",
    "files = glob.glob('/path/to/your/corpus/files/*.*')\n",
    "\n",
    "print(files)\n",
    "for file in files:\n",
    "    print(file)\n",
    "\n",
    "    with open(file, 'r', encoding='utf-8') as f:\n",
    "        tokens = 0\n",
    "        for line in f:\n",
    "\n",
    "            processed += 1            \n",
    "            chars = list(line)\n",
    "            tokens += len(chars)\n",
    "\n",
    "            # add chars to the frequency counter\n",
    "            counter.update(chars)\n",
    "\n",
    "            # comment this line in to speed things up (if the corpus is too large)\n",
    "            # if tokens > 50000000: break\n",
    "\n",
    "    # break\n",
    "\n",
    "total_count = 0\n",
    "for letter, count in counter.most_common():\n",
    "    total_count += count\n",
    "\n",
    "print(total_count)\n",
    "print(processed)\n",
    "\n",
    "cumulative = 0\n",
    "idx = 0\n",
    "for letter, count in counter.most_common():\n",
    "    cumulative += count\n",
    "    percentile = (cumulative / total_count)\n",
    "\n",
    "    # comment this line in to use only top X percentile of chars, otherwise filter later\n",
    "    # if percentile < 0.00001: break\n",
    "\n",
    "    char_dictionary.add_item(letter)\n",
    "    idx += 1\n",
    "    print('%d\\t%s\\t%7d\\t%7d\\t%f' % (idx, letter, count, cumulative, percentile))\n",
    "\n",
    "print(char_dictionary.item2idx)\n",
    "\n",
    "import pickle\n",
    "with open('/path/to/your_char_mappings', 'wb') as f:\n",
    "    mappings = {\n",
    "        'idx2item': char_dictionary.idx2item,\n",
    "        'item2idx': char_dictionary.item2idx\n",
    "    }\n",
    "    pickle.dump(mappings, f)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2372dd2d",
   "metadata": {},
   "source": [
    "You can then pass this dictionary instead of the default one to the language model training code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cb612696",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pickle\n",
    "dictionary = Dictionary.load_from_file('/path/to/your_char_mappings')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dfdcb6e8",
   "metadata": {},
   "source": [
    "## Parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "de3b2232",
   "metadata": {},
   "source": [
    "You can play around with some of the learning parameters in the LanguageModelTrainer. For example, we generally find that an initial learning rate of 20 and an annealing factor of 4 are pretty good for most corpora. You might also want to modify the 'patience' value of the learning rate scheduler. We currently have it at 25, meaning that if the training loss does not improve for 25 splits, the learning rate is annealed."
   ]
  },
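  {
   "cell_type": "markdown",
   "id": "ad5e000d",
   "metadata": {},
   "source": [
    "The schedule described above behaves roughly like this (a plain-Python sketch of learning-rate annealing, not Flair's scheduler itself):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad5e000e",
   "metadata": {},
   "outputs": [],
   "source": [
    "def anneal(lr, bad_splits, anneal_factor=4, patience=25):\n",
    "    # divide the learning rate by anneal_factor once the loss has not\n",
    "    # improved for more than patience splits\n",
    "    if bad_splits > patience:\n",
    "        return lr / anneal_factor\n",
    "    return lr\n",
    "\n",
    "lr = 20.0\n",
    "print(anneal(lr, bad_splits=26))  # patience exceeded: annealed to 5.0\n",
    "print(anneal(lr, bad_splits=10))  # still improving: stays at 20.0"
   ]
  },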
  {
   "cell_type": "markdown",
   "id": "1a3f6375",
   "metadata": {},
   "source": [
    "## Fine-Tuning an Existing LM"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d79d2bc7",
   "metadata": {},
   "source": [
    "Sometimes it makes sense to fine-tune an existing language model instead of training one from scratch. For instance, if you have a general-purpose LM for English and would like to fine-tune it for a specific domain."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b061e8c",
   "metadata": {},
   "source": [
    "To fine-tune a LanguageModel, you only need to load an existing LanguageModel instead of instantiating a new one. The rest of the training code stays the same:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "c14d2b37",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-09-12T12:55:44.599112Z",
     "start_time": "2021-09-12T12:55:44.373938Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "ename": "AssertionError",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mAssertionError\u001b[0m                            Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-42-67546ca85df0>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m     14\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     15\u001b[0m \u001b[1;31m# get your corpus, process forward and at the character level # 准备好你的语料库，向前推进，在字符层面\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 16\u001b[1;33m corpus = TextCorpus('path/to/your/corpus',\n\u001b[0m\u001b[0;32m     17\u001b[0m                     \u001b[0mdictionary\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     18\u001b[0m                     \u001b[0mis_forward_lm\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\trainers\\language_model_trainer.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, path, dictionary, forward, character_level, random_case_flip, document_delimiter)\u001b[0m\n\u001b[0;32m    117\u001b[0m             \u001b[0mpath\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mPath\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mpath\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    118\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 119\u001b[1;33m         self.train = TextDataset(\n\u001b[0m\u001b[0;32m    120\u001b[0m             \u001b[0mpath\u001b[0m \u001b[1;33m/\u001b[0m \u001b[1;34m\"train\"\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    121\u001b[0m             \u001b[0mdictionary\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\lib\\site-packages\\flair\\trainers\\language_model_trainer.py\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, path, dictionary, expand_vocab, forward, split_on_char, random_case_flip, document_delimiter, shuffle)\u001b[0m\n\u001b[0;32m     37\u001b[0m         \u001b[1;32mif\u001b[0m \u001b[0mtype\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mpath\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;32mis\u001b[0m \u001b[0mstr\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     38\u001b[0m             \u001b[0mpath\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mPath\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mpath\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 39\u001b[1;33m         \u001b[1;32massert\u001b[0m \u001b[0mpath\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mexists\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     40\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     41\u001b[0m         \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfiles\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mAssertionError\u001b[0m: "
     ]
    }
   ],
   "source": [
    "from flair.data import Dictionary\n",
    "from flair.embeddings import FlairEmbeddings\n",
    "from flair.trainers.language_model_trainer import LanguageModelTrainer, TextCorpus\n",
    "\n",
    "\n",
    "# instantiate an existing LM, such as one from the FlairEmbeddings\n",
    "language_model = FlairEmbeddings('news-forward').lm\n",
    "\n",
    "# are you fine-tuning a forward or backward LM?\n",
    "is_forward_lm = language_model.is_forward_lm\n",
    "\n",
    "# get the dictionary from the existing language model\n",
    "dictionary: Dictionary = language_model.dictionary\n",
    "\n",
    "# get your corpus, process forward and at the character level\n",
    "corpus = TextCorpus('path/to/your/corpus',  # placeholder: point this at your corpus directory\n",
    "                    dictionary,\n",
    "                    is_forward_lm,\n",
    "                    character_level=True).downsample(0.1)\n",
    "\n",
    "# use the model trainer to fine-tune this model on your corpus\n",
    "trainer = LanguageModelTrainer(language_model, corpus)\n",
    "\n",
    "trainer.train('resources/taggers/language_model',\n",
    "              sequence_length=100,\n",
    "              mini_batch_size=100,\n",
    "              learning_rate=20,\n",
    "              patience=10,\n",
    "              checkpoint=True)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ae71e87d",
   "metadata": {},
   "outputs": [],
   "source": [
    "注意，在进行微调时，必须使用与之前相同的字符字典并复制方向(forward/backward)。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ed22a80c",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "100585a5",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "967511b2",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d08dd377",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bbb57a4e",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {
    "height": "calc(100% - 180px)",
    "left": "10px",
    "top": "150px",
    "width": "378.825px"
   },
   "toc_section_display": true,
   "toc_window_display": true
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
