{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "31d093db",
   "metadata": {},
   "source": [
     "The following code demonstrates basic usage of BERT.\n",
     "\n",
     "Given the input “The capital of China is \\[MASK\\]”, we show how the model predicts the masked token, printing the 5 highest-probability candidates."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "026be16d",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight']\n",
      "- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The capital of China is beijing.\n",
      "The capital of China is nanjing.\n",
      "The capital of China is shanghai.\n",
      "The capital of China is guangzhou.\n",
      "The capital of China is shenzhen.\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
     "Code adapted from the GitHub project huggingface/transformers\n",
     "(Copyright (c) 2020, The HuggingFace Team, Apache-2.0 License; see the appendix)\n",
    "\"\"\"\n",
    "from transformers import BertTokenizer, BertForMaskedLM\n",
    "from torch.nn import functional as F\n",
    "import torch\n",
    "\n",
     "# Load the bert-base-uncased model for masked-token prediction, with its matching tokenizer\n",
     "tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\n",
     "model = BertForMaskedLM.from_pretrained('bert-base-uncased', return_dict=True)\n",
     "\n",
     "# Prepare the input sentence “The capital of China is [MASK].”\n",
     "text = 'The capital of China is ' + tokenizer.mask_token + '.'\n",
     "# Encode the input sentence as PyTorch tensors\n",
     "inputs = tokenizer.encode_plus(text, return_tensors='pt')\n",
     "# Locate the position of [MASK]\n",
     "mask_index = torch.where(inputs['input_ids'][0] == tokenizer.mask_token_id)\n",
     "output = model(**inputs)\n",
     "logits = output.logits\n",
     "# From the output distribution at the [MASK] position, select the 5 most\n",
     "# probable tokens and print the resulting sentences\n",
     "distribution = F.softmax(logits, dim=-1)\n",
     "mask_word = distribution[0, mask_index, :]\n",
     "top_5 = torch.topk(mask_word, 5, dim=1)[1][0]\n",
     "for token in top_5:\n",
     "    word = tokenizer.decode([token])\n",
     "    new_sentence = text.replace(tokenizer.mask_token, word)\n",
     "    print(new_sentence)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a03f59ac",
   "metadata": {},
   "source": [
     "Next, we show how to fine-tune BERT for text classification, using the Books dataset from Chapter 4."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "895c0795",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "train size = 8627 , test size = 2157\n",
      "{0: '计算机类', 1: '艺术传媒类', 2: '经管类'}\n",
      "8627 2157\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 2331.76it/s]\n",
      "100%|██████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 2296.15it/s]\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
     "Code adapted from the GitHub project huggingface/transformers\n",
     "(Copyright (c) 2020, The HuggingFace Team, Apache-2.0 License; see the appendix)\n",
    "\"\"\"\n",
    "import sys\n",
    "from tqdm import tqdm\n",
    "\n",
     "# Import the Books dataset implemented earlier\n",
     "sys.path.append(r'E:\\孙大圣\\@Hands-on-NLP-main\\@Hands-on-NLP-main\\code')  # local path; adjust to your environment\n",
     "from my_utils import BooksDataset\n",
     "\n",
     "dataset = BooksDataset()\n",
     "# Print the label-ID-to-class mapping and the dataset sizes\n",
     "print(dataset.id2label)\n",
     "print(len(dataset.train_data), len(dataset.test_data))\n",
     "\n",
     "# Tokenize the data; to keep the runtime manageable on a CPU,\n",
     "# sample only 100 examples each for training and evaluation\n",
     "from transformers import AutoTokenizer\n",
     "tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')\n",
     "\n",
     "def tokenize_function(text):\n",
     "    return tokenizer(text, padding='max_length', truncation=True)\n",
     "\n",
     "def tokenize(raw_data):\n",
     "    dataset = []\n",
     "    for data in tqdm(raw_data):\n",
     "        tokens = tokenize_function(data['abstract'])  # use the 'abstract' field as the input text\n",
     "        tokens['label'] = data['label']\n",
     "        dataset.append(tokens)\n",
     "    return dataset\n",
     "\n",
     "small_train_dataset = tokenize(dataset.train_data[:100])\n",
     "small_eval_dataset = tokenize(dataset.test_data[:100])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "754ffc00-4b9c-4894-bb79-598d29b7d29c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Keys in the first item of train_data: dict_keys(['book', 'class', 'abstract', 'author', 'label'])\n",
      "Keys in the first item of test_data: dict_keys(['book', 'class', 'abstract', 'author', 'label'])\n"
     ]
    }
   ],
   "source": [
     "# Inspect the dataset's field names to confirm that the text field is 'abstract'\n",
     "\n",
    "def inspect_dataset_keys(dataset):\n",
    "    if dataset.train_data:\n",
    "        train_keys = dataset.train_data[0].keys()\n",
    "        print(f\"Keys in the first item of train_data: {train_keys}\")\n",
    "    else:\n",
    "        print(\"train_data is empty.\")\n",
    "    \n",
    "    if dataset.test_data:\n",
    "        test_keys = dataset.test_data[0].keys()\n",
    "        print(f\"Keys in the first item of test_data: {test_keys}\")\n",
    "    else:\n",
    "        print(\"test_data is empty.\")\n",
    "\n",
    "# Assuming 'dataset' is already defined and contains 'train_data' and 'test_data'\n",
    "inspect_dataset_keys(dataset)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "39353b6d",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']\n",
      "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
     ]
    }
   ],
   "source": [
    "# 加载bert-base-cased这个预训练模型，并指定序列分类作为模型输出头，\n",
    "# 分类标签数由数据集的id2label决定（此处为3类）\n",
    "from transformers import AutoModelForSequenceClassification\n",
    "model = AutoModelForSequenceClassification.from_pretrained(\\\n",
    "    'bert-base-cased', num_labels=len(dataset.id2label))\n",
    "\n",
    "# 为了在训练过程中及时地监控模型性能，定义评估函数，计算分类准确率\n",
    "import numpy as np\n",
    "# 可以使用如下指令安装evaluate\n",
    "# pip install evaluate\n",
    "import evaluate\n",
    "\n",
    "metric = evaluate.load('accuracy')\n",
    "\n",
    "def compute_metrics(eval_pred):\n",
    "    logits, labels = eval_pred\n",
    "    predictions = np.argmax(logits, axis=-1)\n",
    "    return metric.compute(predictions=predictions, references=labels)\n",
    "\n",
    "# 通过TrainingArguments这个类来构造训练所需的参数\n",
    "# eval_strategy='epoch'指定每个epoch结束的时候计算评价指标\n",
    "from transformers import TrainingArguments, Trainer\n",
    "training_args = TrainingArguments(output_dir='test_trainer',\\\n",
    "    eval_strategy='epoch')\n",
    "\n",
    "# transformers这个库自带的Trainer类封装了大量模型训练的细节，\n",
    "# 例如数据转换、性能评测、保存模型等\n",
    "# 借助Trainer可以非常方便地执行标准的微调流程，默认训练3个epoch\n",
    "trainer = Trainer(\n",
    "    model=model,\n",
    "    args=training_args,\n",
    "    train_dataset=small_train_dataset,\n",
    "    eval_dataset=small_eval_dataset,\n",
    "    compute_metrics=compute_metrics,\n",
    ")"
   ]
  },
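  {
   "cell_type": "markdown",
   "id": "aa11fe01",
   "metadata": {},
   "source": [
    "下面用一个虚构的小例子示意compute_metrics的计算过程：评估时传入的eval_pred由logits和标签组成，对logits在最后一维取argmax即得到预测类别。这里为了示意直接用numpy计算准确率，不依赖evaluate库，数据为虚构的示例。\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# 虚构的logits（4个样本、3个类别）与对应标签\n",
    "logits = np.array([[2.0, 0.1, 0.3],\n",
    "                   [0.2, 1.5, 0.1],\n",
    "                   [0.3, 0.2, 2.2],\n",
    "                   [1.1, 0.9, 0.2]])\n",
    "labels = np.array([0, 1, 2, 1])\n",
    "\n",
    "predictions = np.argmax(logits, axis=-1)  # 每个样本取logits最大的类别\n",
    "accuracy = float((predictions == labels).mean())\n",
    "print(accuracy)  # 0.75：前3个样本预测正确，第4个错误\n",
    "```"
   ]
  },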
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "337b979a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='39' max='39' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [39/39 04:48, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "      <th>Accuracy</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>No log</td>\n",
       "      <td>1.094163</td>\n",
       "      <td>0.350000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>No log</td>\n",
       "      <td>1.102361</td>\n",
       "      <td>0.270000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>No log</td>\n",
       "      <td>1.064910</td>\n",
       "      <td>0.470000</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "TrainOutput(global_step=39, training_loss=1.067896036001352, metrics={'train_runtime': 297.5087, 'train_samples_per_second': 1.008, 'train_steps_per_second': 0.131, 'total_flos': 78934025318400.0, 'train_loss': 1.067896036001352, 'epoch': 3.0})"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 默认的微调流程使用wandb记录训练log，访问wandb官网了解如何使用\n",
    "# 此处通过WANDB_DISABLED环境变量禁用wandb，减少不必要的网络访问\n",
    "# （新版transformers中该环境变量已不建议使用，可改用TrainingArguments\n",
    "# 的report_to='none'参数）\n",
    "import os\n",
    "os.environ[\"WANDB_DISABLED\"] = \"true\"\n",
    "trainer.train()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fb019379",
   "metadata": {},
   "source": [
    "以上代码通过调用Trainer类来实现简单的微调流程，接下来展示如何自定义微调流程。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "7c8f4428",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']\n",
      "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "6ed38437a6b745ceadb8f1755a278070",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/39 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "del model\n",
    "del trainer\n",
    "# 如果你使用了GPU，清空GPU缓存\n",
    "torch.cuda.empty_cache()\n",
    "\n",
    "# 使用DataLoader类为模型提供数据\n",
    "from torch.utils.data import DataLoader\n",
    "\n",
    "# 整理一个batch的样本，将其中的Python列表转为PyTorch张量\n",
    "def collate(batch):\n",
    "    input_ids, token_type_ids, attention_mask, labels = [], [], [], []\n",
    "    for d in batch:\n",
    "        input_ids.append(d['input_ids'])\n",
    "        token_type_ids.append(d['token_type_ids'])\n",
    "        attention_mask.append(d['attention_mask'])\n",
    "        labels.append(d['label'])\n",
    "    input_ids = torch.tensor(input_ids)\n",
    "    token_type_ids = torch.tensor(token_type_ids)\n",
    "    attention_mask = torch.tensor(attention_mask)\n",
    "    labels = torch.tensor(labels)\n",
    "    return {'input_ids': input_ids, 'token_type_ids': token_type_ids,\\\n",
    "        'attention_mask': attention_mask, 'labels': labels}\n",
    "\n",
    "train_dataloader = DataLoader(small_train_dataset, shuffle=True,\\\n",
    "    batch_size=8, collate_fn=collate)\n",
    "eval_dataloader = DataLoader(small_eval_dataset, batch_size=8,\\\n",
    "    collate_fn=collate)\n",
    "\n",
    "# 载入模型，准备优化器（用于优化参数），以及scheduler\n",
    "# （在训练时调整学习率，以达到更好的微调效果）\n",
    "from transformers import AutoModelForSequenceClassification\n",
    "model = AutoModelForSequenceClassification.from_pretrained(\\\n",
    "    \"bert-base-cased\", num_labels=len(dataset.id2label))\n",
    "\n",
    "from torch.optim import AdamW\n",
    "optimizer = AdamW(model.parameters(), lr=5e-5)\n",
    "\n",
    "from transformers import get_scheduler\n",
    "num_epochs = 3\n",
    "num_training_steps = num_epochs * len(train_dataloader)\n",
    "lr_scheduler = get_scheduler(\n",
    "    name=\"linear\", optimizer=optimizer, num_warmup_steps=0,\\\n",
    "    num_training_steps=num_training_steps\n",
    ")\n",
    "\n",
    "import torch\n",
    "# 自动判断是否有GPU可以使用，如果可用，将model移动到GPU显存中\n",
    "device = torch.device(\"cuda\") if torch.cuda.is_available()\\\n",
    "    else torch.device(\"cpu\")\n",
    "model.to(device)\n",
    "\n",
    "# 训练流程\n",
    "from tqdm.auto import tqdm\n",
    "progress_bar = tqdm(range(num_training_steps))\n",
    "\n",
    "for epoch in range(num_epochs):\n",
    "    # 在每个epoch开始时调用model.train()将模型切换到训练模式，\n",
    "    # 训练模式会影响dropout等层的行为（训练时开启dropout）\n",
    "    model.train()\n",
    "    for batch in train_dataloader:\n",
    "        # 如果GPU可用，这一步将把数据转移到GPU显存中\n",
    "        batch = {k: v.to(device) for k, v in batch.items()}\n",
    "        outputs = model(**batch)\n",
    "        loss = outputs.loss\n",
    "        loss.backward()\n",
    "\n",
    "        optimizer.step()\n",
    "        lr_scheduler.step()\n",
    "        # 更新参数之后清除上一步的梯度\n",
    "        optimizer.zero_grad()\n",
    "        progress_bar.update(1)\n",
    "progress_bar.close()\n",
    "import evaluate\n",
    "\n",
    "# 训练结束时对测试集进行评估，得到模型分数\n",
    "model.eval()\n",
    "metric = evaluate.load(\"accuracy\")\n",
    "for batch in eval_dataloader:\n",
    "    batch = {k: v.to(device) for k, v in batch.items()}\n",
    "    with torch.no_grad():\n",
    "        outputs = model(**batch)\n",
    "\n",
    "    logits = outputs.logits\n",
    "    predictions = torch.argmax(logits, dim=-1)\n",
    "    metric.add_batch(predictions=predictions, references=batch[\"labels\"])\n",
    "acc = metric.compute()"
   ]
  },
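  {
   "cell_type": "markdown",
   "id": "aa11fe02",
   "metadata": {},
   "source": [
    "上面collate函数的作用可以用一个虚构的小batch来示意：它把各样本字典中的同名字段聚合成列表，再整体转成张量。下面是一个等价的简化实现，样本数据为虚构，真实数据中各样本长度的一致性由分词时的padding保证。\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "# 两个虚构的已分词样本，字段与正文中collate函数的输入一致\n",
    "batch = [\n",
    "    {'input_ids': [101, 7592, 102], 'token_type_ids': [0, 0, 0],\n",
    "     'attention_mask': [1, 1, 1], 'label': 0},\n",
    "    {'input_ids': [101, 2088, 102], 'token_type_ids': [0, 0, 0],\n",
    "     'attention_mask': [1, 1, 1], 'label': 2},\n",
    "]\n",
    "\n",
    "def collate(batch):\n",
    "    keys = ['input_ids', 'token_type_ids', 'attention_mask']\n",
    "    out = {k: torch.tensor([d[k] for d in batch]) for k in keys}\n",
    "    out['labels'] = torch.tensor([d['label'] for d in batch])\n",
    "    return out\n",
    "\n",
    "tensors = collate(batch)\n",
    "print(tensors['input_ids'].shape)  # torch.Size([2, 3])\n",
    "print(tensors['labels'])           # tensor([0, 2])\n",
    "```"
   ]
  },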
  {
   "cell_type": "markdown",
   "id": "f871e435",
   "metadata": {},
   "source": [
    "\n",
    "下面的代码演示如何在《小王子》数据集上微调GPT-2模型。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "d31da145",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d66c37023b7e440388f3a78fa10ac3da",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/26.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "E:\\Translations\\Python\\Python311\\site-packages\\huggingface_hub\\file_download.py:144: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\\Users\\Administrator\\.cache\\huggingface\\hub\\models--gpt2. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.\n",
      "To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development\n",
      "  warnings.warn(message)\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f2a29360cf3e47bdb7250306162bd747",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/377 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e1b097bed8864d2fa6d650a93fbc5eae",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocab.json: 0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "28ff284616e642eebd65e038320f29c4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "merges.txt: 0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "00618e0c163a49449276f3644ba110a6",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json: 0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Token indices sequence length is longer than the specified maximum sequence length for this model (26022 > 1024). Running this sequence through the model will result in indexing errors\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "20817 5205\n",
      "['the', 'Ġlittle', 'Ġprince', 'Ġ', 'ĊĊ', 'Ċ', 'Ċ', 'anto', 'ine', 'Ġde']\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "代码来源于GitHub项目huggingface/transformers\n",
    "（Copyright (c) 2020, The HuggingFace Team, Apache-2.0 License（见附录））\n",
    "\"\"\"\n",
    "import sys\n",
    "\n",
    "# 导入第3章使用的《小王子》数据集\n",
    "sys.path.append('../code')\n",
    "from my_utils import TheLittlePrinceDataset\n",
    "\n",
    "full_text = TheLittlePrinceDataset(tokenize=False).text\n",
    "# 接下来载入GPT2模型的分词器并完成分词。\n",
    "from transformers import AutoTokenizer\n",
    "tokenizer = AutoTokenizer.from_pretrained('gpt2')\n",
    "\n",
    "full_tokens = tokenizer.tokenize(full_text.lower())\n",
    "train_size = int(len(full_tokens) * 0.8)\n",
    "train_tokens = full_tokens[:train_size]\n",
    "test_tokens = full_tokens[train_size:]\n",
    "print(len(train_tokens), len(test_tokens))\n",
    "print(train_tokens[:10])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "b301b7e6",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch.utils.data import TensorDataset\n",
    "\n",
    "# 将文本按block_size分成小块，结尾不足一个块的词元将被丢弃\n",
    "block_size = 128\n",
    "\n",
    "def split_blocks(tokens):\n",
    "    token_ids = []\n",
    "    for i in range(len(tokens) // block_size):\n",
    "        _tokens = tokens[i*block_size:(i+1)*block_size]\n",
    "        _token_ids = tokenizer.convert_tokens_to_ids(_tokens)\n",
    "        token_ids.append(_token_ids)\n",
    "    return token_ids\n",
    "\n",
    "train_dataset = split_blocks(train_tokens)\n",
    "test_dataset = split_blocks(test_tokens)"
   ]
  },
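  {
   "cell_type": "markdown",
   "id": "aa11fe03",
   "metadata": {},
   "source": [
    "切块数量可以用整除直接算出：上面训练集共有20817个词元，block_size为128时得到20817 // 128 = 162个完整的块，结尾剩余的词元被丢弃。下面用纯Python示意这一切块逻辑（split_into_blocks为示意用的假设函数名，不做词元到id的转换）。\n",
    "\n",
    "```python\n",
    "block_size = 128\n",
    "\n",
    "def split_into_blocks(tokens, block_size):\n",
    "    n_blocks = len(tokens) // block_size  # 结尾不足一个块的部分被丢弃\n",
    "    return [tokens[i*block_size:(i+1)*block_size] for i in range(n_blocks)]\n",
    "\n",
    "tokens = list(range(20817))  # 用整数序列代替真实词元\n",
    "blocks = split_into_blocks(tokens, block_size)\n",
    "print(len(blocks))      # 162\n",
    "print(len(blocks[-1]))  # 128，每个块长度相同\n",
    "```"
   ]
  },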
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "94647ae7",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).\n",
      "`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='63' max='63' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [63/63 02:21, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>No log</td>\n",
       "      <td>3.175316</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>No log</td>\n",
       "      <td>3.044463</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>No log</td>\n",
       "      <td>3.015755</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='5' max='5' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [5/5 00:02]\n",
       "    </div>\n",
       "    "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Perplexity: 20.40\n"
     ]
    }
   ],
   "source": [
    "# 创建一个DataCollator，用于在训练时把分词的结果转化为模型可以训练的张量\n",
    "# 注意此时微调的任务是因果语言模型（mlm=False），而不是掩码语言模型\n",
    "from transformers import DataCollatorForLanguageModeling\n",
    "\n",
    "tokenizer.pad_token = tokenizer.eos_token\n",
    "data_collator = DataCollatorForLanguageModeling(tokenizer=\\\n",
    "    tokenizer, mlm=False)\n",
    "\n",
    "# 导入模型，准备训练参数，调用Trainer类完成训练\n",
    "from transformers import AutoModelForCausalLM, TrainingArguments, Trainer\n",
    "\n",
    "model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n",
    "\n",
    "training_args = TrainingArguments(\n",
    "    output_dir=\"test_trainer\",\n",
    "    eval_strategy=\"epoch\",\n",
    "    learning_rate=2e-5,\n",
    "    weight_decay=0.01,\n",
    ")\n",
    "\n",
    "trainer = Trainer(\n",
    "    model=model,\n",
    "    args=training_args,\n",
    "    train_dataset=train_dataset,\n",
    "    eval_dataset=test_dataset,\n",
    "    data_collator=data_collator,\n",
    ")\n",
    "\n",
    "trainer.train()\n",
    "\n",
    "# 在测试集上测试得到困惑度\n",
    "import math\n",
    "eval_results = trainer.evaluate()\n",
    "print(f\"Perplexity: {math.exp(eval_results['eval_loss']):.2f}\")"
   ]
  },
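  {
   "cell_type": "markdown",
   "id": "aa11fe04",
   "metadata": {},
   "source": [
    "困惑度与交叉熵损失之间只差一个指数运算：困惑度等于平均交叉熵损失的自然指数。用上面第3个epoch的验证损失约3.0158代入，可以验证打印出的困惑度约为20.40。\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "eval_loss = 3.015755  # 取自上面训练日志中第3个epoch的验证损失\n",
    "perplexity = math.exp(eval_loss)  # 困惑度 = exp(平均交叉熵损失)\n",
    "print(f'Perplexity: {perplexity:.2f}')  # Perplexity: 20.40\n",
    "```"
   ]
  },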
  {
   "cell_type": "markdown",
   "id": "dd0dbdb9",
   "metadata": {},
   "source": [
    "这里基于HuggingFace来展示如何使用GPT-2模型生成文本。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "399168ba",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Output:\n",
      "----------------------------------------------------------------------------------------------------\n",
      "I enjoy learning with this book. I have been reading it for a while now and I am very happy with it. I have been reading it for a while now and I am very happy with it.\n",
      "\n",
      "I have been reading it for a\n",
      "Output:\n",
      "----------------------------------------------------------------------------------------------------\n",
      "I enjoy learning with this book, and I hope you enjoy reading it as much as I do.\n",
      "\n",
      "I hope you enjoy reading this book, and I hope you enjoy reading it as much as I do.\n",
      "\n",
      "I hope you enjoy reading\n",
      "Output:\n",
      "----------------------------------------------------------------------------------------------------\n",
      "0: I enjoy learning with this book, and I hope you enjoy reading it as much as I do.\n",
      "\n",
      "If you have any questions or comments, feel free to leave them in the comments below.\n",
      "1: I enjoy learning with this book, and I hope you enjoy reading it as much as I do.\n",
      "\n",
      "If you have any questions or comments, please feel free to leave them in the comments below.\n",
      "2: I enjoy learning with this book, and I hope you enjoy reading it as much as I do.\n",
      "\n",
      "If you have any questions or comments, feel free to leave them in the comment section below.\n",
      "3: I enjoy learning with this book, and I hope you enjoy reading it as much as I do.\n",
      "\n",
      "If you have any questions or comments, feel free to leave them in the comments section below.\n",
      "4: I enjoy learning with this book, and I hope you enjoy reading it as much as I do.\n",
      "\n",
      "If you have any questions or comments, feel free to leave them in the comments below!\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "代码来源于GitHub项目huggingface/transformers\n",
    "（Copyright (c) 2020, The HuggingFace Team, Apache-2.0 License（见附录））\n",
    "\"\"\"\n",
    "import torch\n",
    "from transformers import GPT2LMHeadModel, GPT2Tokenizer\n",
    "\n",
    "tokenizer = GPT2Tokenizer.from_pretrained('gpt2')\n",
    "model = GPT2LMHeadModel.from_pretrained('gpt2',\\\n",
    "    pad_token_id=tokenizer.eos_token_id)\n",
    "# 输入文本\n",
    "input_ids = tokenizer.encode('I enjoy learning with this book',\\\n",
    "    return_tensors='pt')\n",
    "\n",
    "# 输出文本\n",
    "greedy_output = model.generate(input_ids, max_length=50)\n",
    "print(\"Output:\\n\" + 100 * '-')\n",
    "print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))\n",
    "\n",
    "# 通过束搜索来生成句子，early_stopping=True表示一旦有足够多的候选束生成完毕即停止搜索\n",
    "beam_output = model.generate(\n",
    "    input_ids, \n",
    "    max_length=50, \n",
    "    num_beams=5, \n",
    "    early_stopping=True\n",
    ")\n",
    "\n",
    "print(\"Output:\\n\" + 100 * '-')\n",
    "print(tokenizer.decode(beam_output[0], skip_special_tokens=True))\n",
    "\n",
    "# 返回多个候选句子，并用no_repeat_ngram_size=2禁止重复的2-gram\n",
    "beam_outputs = model.generate(\n",
    "    input_ids, \n",
    "    max_length=50, \n",
    "    num_beams=5, \n",
    "    no_repeat_ngram_size=2, \n",
    "    num_return_sequences=5, \n",
    "    early_stopping=True\n",
    ")\n",
    "\n",
    "print(\"Output:\\n\" + 100 * '-')\n",
    "for i, beam_output in enumerate(beam_outputs):\n",
    "    print(\"{}: {}\".format(i, tokenizer.decode(beam_output,\\\n",
    "        skip_special_tokens=True)))"
   ]
  },
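  {
   "cell_type": "markdown",
   "id": "aa11fe05",
   "metadata": {},
   "source": [
    "no_repeat_ngram_size=2的作用可以用一个纯Python小例子示意：生成下一个词时，若它与已生成序列的最后一个词构成的2-gram此前已经出现过，则该候选词被禁止。下面是对该机制的简化示意，并非transformers的内部实现。\n",
    "\n",
    "```python\n",
    "# 简化示意：找出会与已生成序列构成重复2-gram的候选token\n",
    "def banned_next_tokens(generated, ngram_size=2):\n",
    "    banned = set()\n",
    "    prefix = tuple(generated[-(ngram_size - 1):])  # 当前前缀，即最后一个token\n",
    "    for i in range(len(generated) - ngram_size + 1):\n",
    "        if tuple(generated[i:i + ngram_size - 1]) == prefix:\n",
    "            banned.add(generated[i + ngram_size - 1])\n",
    "    return banned\n",
    "\n",
    "# 序列'a b c a'之后若再生成'b'，会重复2-gram ('a', 'b')，因此'b'被禁止\n",
    "print(banned_next_tokens(['a', 'b', 'c', 'a']))  # {'b'}\n",
    "```"
   ]
  },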
  {
   "cell_type": "markdown",
   "id": "00d323e9",
   "metadata": {},
   "source": [
    "HuggingFace中集成了许多预训练语言模型。你可以直接通过具体的接口调用某一个预训练语言模型，但这种方式相对复杂，需要对具体模型和接口有所了解。或者，你也可以通过pipeline模块黑箱地使用这些模型，pipeline模块会根据指定的任务自动分配一个合适的预训练语言模型，你也可以通过参数指定一个预训练语言模型。下面演示pipeline模块处理不同任务的代码，你也可以在HuggingFace官网上了解HuggingFace支持哪些模型。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5fcc3533",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "下面以情感分类为例演示文本分类任务上预训练语言模型的使用。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "62235e0e",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to distilbert/distilbert-base-uncased-finetuned-sst-2-english and revision 714eb0f (https://hf-mirror.com/distilbert/distilbert-base-uncased-finetuned-sst-2-english).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "129a1be88cee42e79028a656194ec346",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/385 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Xet Storage is enabled for this repo, but the 'hf_xet' package is not installed. Falling back to regular HTTP download. For better performance, install the package with: `pip install huggingface_hub[hf_xet]` or `pip install hf_xet`\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f368513ba1ba429bba111e211fd00cd9",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.safetensors:   0%|          | 0.00/268M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "42ab1d19f05a4e8986c29c6e57c976ee",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/48.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "6310839ae38e4ce78cb35dd236f3a7b0",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocab.txt: 0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n",
      "No model was supplied, defaulted to facebook/bart-large-mnli and revision d7645e1 (https://hf-mirror.com/facebook/bart-large-mnli).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'label': 'POSITIVE', 'score': 0.9998708963394165}]\n",
      "[{'label': 'POSITIVE', 'score': 0.9998835325241089}, {'label': 'NEGATIVE', 'score': 0.9994825124740601}, {'label': 'POSITIVE', 'score': 0.9998630285263062}]\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "a51023dae5044bde9e134e01acb726e5",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/509 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "80fac1599bec4f2da72b385f78bf0863",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.safetensors:   0%|          | 0.00/1.63G [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9e7cdbfc89264e37b7c0138dea4404f6",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/26.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c5f28bc449f94330bb17262a8e5e5846",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocab.json: 0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9279538a4e6c4b449ad5e55969968b7b",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "merges.txt: 0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "3715bbbf55b24620b8523d46de24ad9d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json: 0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'sequence': 'A helicopter is flying in the sky', 'labels': ['machine', 'animal'], 'scores': [0.9938626885414124, 0.006137351039797068]}, {'sequence': 'A bird is flying in the sky', 'labels': ['animal', 'machine'], 'scores': [0.9987970590591431, 0.0012029324425384402]}]\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "代码来源于GitHub项目huggingface/transformers\n",
    "（Copyright (c) 2020, The HuggingFace Team, Apache-2.0 License（见附录））\n",
    "\"\"\"\n",
    "from transformers import pipeline\n",
    "\n",
    "clf = pipeline('sentiment-analysis')\n",
    "print(clf('Haha, today is a nice day!'))\n",
    "\n",
    "print(clf(['The food is amazing', 'The assignment is way too hard',\\\n",
    "           'NLP is so much fun']))\n",
    "\n",
    "clf = pipeline('zero-shot-classification')\n",
    "print(clf(sequences=['A helicopter is flying in the sky',\\\n",
    "                     'A bird is flying in the sky'],\n",
    "   candidate_labels=['animal', 'machine']))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3008cc7b",
   "metadata": {},
   "source": [
    "\n",
    "下面演示两种文本生成任务上预训练语言模型的使用。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "71d72551",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to openai-community/gpt2 and revision 607a30d (https://hf-mirror.com/openai-community/gpt2).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e1fd945e509d46cf8fe6c1f73c22d8ed",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/377 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n",
      "Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\n",
      "No model was supplied, defaulted to distilbert/distilroberta-base and revision fb53ab8 (https://hf-mirror.com/distilbert/distilroberta-base).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'generated_text': 'In this course, we will teach you how to read human mind as it is expressed using simple algorithms and then use them in applications.\\n\\nStudents will learn how to apply these algorithms quickly and accurately, how to use them in a number of applications'}]\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of the model checkpoint at distilbert/distilroberta-base were not used when initializing RobertaForMaskedLM: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']\n",
      "- This IS expected if you are initializing RobertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing RobertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'score': 0.19619686901569366, 'token': 30412, 'token_str': ' mathematical', 'sequence': 'This course will teach you all about mathematical models.'}, {'score': 0.04052691534161568, 'token': 38163, 'token_str': ' computational', 'sequence': 'This course will teach you all about computational models.'}, {'score': 0.033017538487911224, 'token': 27930, 'token_str': ' predictive', 'sequence': 'This course will teach you all about predictive models.'}, {'score': 0.03194144740700722, 'token': 745, 'token_str': ' building', 'sequence': 'This course will teach you all about building models.'}, {'score': 0.024522796273231506, 'token': 3034, 'token_str': ' computer', 'sequence': 'This course will teach you all about computer models.'}]\n"
     ]
    }
   ],
   "source": [
     "\"\"\"\n",
     "Code adapted from the GitHub project huggingface/transformers\n",
     "(Copyright (c) 2020, The HuggingFace Team, Apache-2.0 License; see the appendix)\n",
     "\"\"\"\n",
     "from transformers import pipeline\n",
     "\n",
     "# With no model specified, pipeline falls back to a default checkpoint for each task\n",
     "generator = pipeline('text-generation')\n",
     "print(generator('In this course, we will teach you how to'))\n",
     "\n",
     "unmasker = pipeline('fill-mask')\n",
     "print(unmasker('This course will teach you all about <mask> models.'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f61e334c",
   "metadata": {},
   "source": [
    "\n",
    "\n",
     "Given the task name \"question-answering\", pipeline automatically falls back to the default pretrained question-answering model, \"distilbert-base-cased-distilled-squad\"; supply a question and a context to get the answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "3f943e42",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to distilbert/distilbert-base-cased-distilled-squad and revision 564e9b5 (https://hf-mirror.com/distilbert/distilbert-base-cased-distilled-squad).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'score': 0.8236539363861084, 'start': 34, 'end': 62, 'answer': 'ShanghaiJiao Tong University'}\n"
     ]
    }
   ],
   "source": [
     "\"\"\"\n",
     "Code adapted from the GitHub project huggingface/transformers\n",
     "(Copyright (c) 2020, The HuggingFace Team, Apache-2.0 License; see the appendix)\n",
     "\"\"\"\n",
     "question_answerer = pipeline('question-answering')\n",
     "# Adjacent string literals are concatenated; note the trailing space after 'Shanghai'\n",
     "print(question_answerer(question='Where do I graduate from?',\n",
     "    context=\"I received my bachelor's degree at Shanghai \"\n",
     "            \"Jiao Tong University (SJTU).\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "68098304",
   "metadata": {},
   "source": [
    "\n",
    "\n",
     "Given the task name \"summarization\", pipeline automatically falls back to the default pretrained model \"sshleifer/distilbart-cnn-12-6\"; supply a passage of text to get its summary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "1977a486",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to sshleifer/distilbart-cnn-12-6 and revision a4f8f3e (https://hf-mirror.com/sshleifer/distilbart-cnn-12-6).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'summary_text': \" The 2022 Winter Olympics was held in Beijing, China, and surrounding areas . It was the 24th edition of the Winter Olympic Winter Games . The Games featured a record 109 events across 15 disciplines, with big air skiing and women's monobob making their Olympic debuts . Norway won 37 medals, of which 16 were gold, setting a new record for the largest number of gold medals won at a single Winter Olympics . China finished third with nine gold medals and eleventh place by total medals won .\"}]\n"
     ]
    }
   ],
   "source": [
     "\"\"\"\n",
     "Code adapted from the GitHub project huggingface/transformers\n",
     "(Copyright (c) 2020, The HuggingFace Team, Apache-2.0 License; see the appendix)\n",
     "\"\"\"\n",
    "summarizer = pipeline('summarization')\n",
    "print(summarizer(\n",
    "    \"\"\"\n",
    "    The 2022 Winter Olympics (2022年冬季奥林匹克运动会), officially \n",
    "    called the XXIV Olympic Winter Games (Chinese: 第二十四届冬季奥\n",
    "    林匹克运动会; pinyin: Dì Èrshísì Jiè Dōngjì Àolínpǐkè Yùndònghuì) \n",
    "    and commonly known as Beijing 2022 (北京2022), was an international \n",
    "    winter multi-sport event held from 4 to 20 February 2022 in Beijing, \n",
    "    China, and surrounding areas with competition in selected events \n",
    "    beginning 2 February 2022.[1] It was the 24th edition of the Winter \n",
    "    Olympic Games. Beijing was selected as host city in 2015 at the \n",
    "    128th IOC Session in Kuala Lumpur, Malaysia, marking its second \n",
    "    time hosting the Olympics, and the last of three consecutive \n",
    "    Olympics hosted in East Asia following the 2018 Winter Olympics \n",
    "    in Pyeongchang County, South Korea, and the 2020 Summer Olympics \n",
    "    in Tokyo, Japan. Having previously hosted the 2008 Summer Olympics, \n",
    "    Beijing became the first city to have hosted both the Summer and \n",
    "    Winter Olympics. The venues for the Games were concentrated around \n",
    "    Beijing, its suburb Yanqing District, and Zhangjiakou, with some \n",
    "    events (including the ceremonies and curling) repurposing venues \n",
    "    originally built for Beijing 2008 (such as Beijing National \n",
    "    Stadium and the Beijing National Aquatics Centre). The Games \n",
    "    featured a record 109 events across 15 disciplines, with big air \n",
    "    freestyle skiing and women's monobob making their Olympic debuts \n",
    "    as medal events, as well as several new mixed competitions. \n",
    "    A total of 2,871 athletes representing 91 teams competed in the \n",
    "    Games, with Haiti and Saudi Arabia making their Winter Olympic \n",
    "    debut. Norway finished at the top of the medal table \n",
    "    for the second successive Winter Olympics, winning a total of 37 \n",
    "    medals, of which 16 were gold, setting a new record for the \n",
    "    largest number of gold medals won at a single Winter Olympics. \n",
    "    The host nation China finished third with nine gold medals and \n",
    "    also eleventh place by total medals won, marking its most \n",
    "    successful performance in Winter Olympics history.[4]\n",
    "    \"\"\"\n",
    "))"
   ]
  },
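  {
   "cell_type": "markdown",
   "id": "0b6f1a2e",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "The warnings in the cells above note that using a pipeline without specifying a model name and revision is not recommended in production. A minimal sketch of pinning the checkpoint explicitly (the model name and revision hash below are the ones reported in the download logs above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9c2d41aa-5b77-4c1e-8f3d-2f0a6e7d1b44",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pin the checkpoint and revision instead of relying on the task default;\n",
    "# the name and revision are taken from the pipeline log output above.\n",
    "generator = pipeline('text-generation', model='openai-community/gpt2',\n",
    "                     revision='607a30d')\n",
    "print(generator('In this course, we will teach you how to'))"
   ]
  },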
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8fe6db0c-22d7-4f89-b720-078441a1f14d",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ea95a333-41c4-4e55-b957-f353f2d9a5eb",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
